18 Fangle is a tool for fangled literate programming. Newfangled is defined as "new and often needlessly novel" by TheFreeDictionary.com.
19 In this case, fangled means yet another not-so-new1. but improved. ^1 method for literate programming.
20 Literate Programming has a long history starting with the great Donald Knuth himself, whose literate programming tools seem to make use of as many escape sequences for semantic markup as TeX (also by Donald Knuth).
21 Norman Ramsey wrote the Noweb set of tools (notangle, noweave and noroots) and helpfully reduced the amount of magic character sequences to pretty much just <<, >> and @, and in doing so brought the wonders of literate programming within my reach.
22 While using the LyX editor for LaTeX editing I had various troubles with the noweb tools, some of which were my fault, some of which were noweb's fault and some of which were LyX's fault.
23 Noweb generally brought literate programming to the masses through removing some of the complexity of the original literate programming, but this would be of no advantage to me if the LyX/LaTeX combination brought more complications in their place.
24 Fangle was thus born (originally called Newfangle) as an awk replacement for notangle, adding some important features, like better integration with LyX and LaTeX (and later TeXmacs), multiple output format conversions, and fixing notangle bugs like indentation when using -L for line numbers.
25 Significantly, fangle is just one program which replaces various programs in Noweb. Noweave is done away with and implemented directly as LaTeX macros, and noroots is implemented as a function of the untangler fangle.
26 Fangle is written in awk for portability reasons, awk being available for most platforms. A Python version2. hasn't anyone implemented awk in python yet? ^2 was considered for the benefit of LyX, but a Scheme version for TeXmacs will probably materialise first, as TeXmacs macro capabilities help make edit-time and format-time rendering of fangle chunks simple enough for my weak brain.
27 As an extension to many literate-programming styles, Fangle permits code chunks to take parameters and thus operate somewhat like C pre-processor macros, or like C++ templates. Named parameters (or even local variables in the caller's scope) are anticipated, as parameterized chunks, useful though they are, can be hard to comprehend in the literate document.
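As a brief illustration (using the \Chunk, \chunkref and ${...} notation described later in this document; the chunk name and parameter here are invented for the example), a parameterized chunk might be declared and referenced like this:

\Chunk{checked-free, params=ptr}
  if (${ptr}) { free(${ptr}); ${ptr} = NULL; }

\chunkref{checked-free}(buffer)

Each reference substitutes its argument for ${ptr} in the chunk body, much as a C pre-processor macro would.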
29 Fangle is licensed under the GPL 3 (or later).
30 This doesn't mean that sources generated by fangle must be licensed under the GPL 3.
31 This doesn't mean that you can't use or distribute fangle with sources of an incompatible license, but it means you must make the source of fangle available too.
32 As fangle is currently written in awk, an interpreted language, this should not be too hard.
34 4a <gpl3-copyright[1](
\v), lang=text> ≡
35 ________________________________________________________________________
36 1 | fangle - fully featured notangle replacement in awk
38 3 | Copyright (C) 2009-2010 Sam Liddicott <sam@liddicott.com>
40 5 | This program is free software: you can redistribute it and/or modify
41 6 | it under the terms of the GNU General Public License as published by
42 7 | the Free Software Foundation, either version 3 of the License, or
43 8 | (at your option) any later version.
45 10 | This program is distributed in the hope that it will be useful,
46 11 | but WITHOUT ANY WARRANTY; without even the implied warranty of
47 12 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
48 13 | GNU General Public License for more details.
50 15 | You should have received a copy of the GNU General Public License
51 16 | along with this program. If not, see <http://www.gnu.org/licenses/>.
52 |________________________________________________________________________
59 1 Introduction to Literate Programming 11
62 2.2 Extracting roots 13
63 2.3 Formatting the document 13
64 3 Using Fangle with LaTeX 15
65 4 Using Fangle with LyX 17
66 4.1 Installing the LyX module 17
67 4.2 Obtaining a decent mono font 17
71 4.3 Formatting your LyX document 18
72 4.3.1 Customising the listing appearance 18
73 4.3.2 Global customisations 18
74 4.4 Configuring the build script 19
76 5 Using Fangle with TeXmacs 21
77 6 Fangle with Makefiles 23
78 6.1 A word about makefile formats 23
79 6.2 Extracting Sources 23
80 6.2.1 Converting from LyX to LaTeX 24
81 6.2.2 Converting from TeXmacs 24
82 6.3 Extracting Program Source 25
83 6.4 Extracting Source Files 25
84 6.5 Extracting Documentation 28
85 6.5.1 Formatting TeX 28
86 6.5.1.1 Running pdflatex 28
87 6.5.2 Formatting TeXmacs 28
88 6.5.3 Building the Documentation as a Whole 28
90 6.7 Boot-strapping the extraction 29
91 6.8 Incorporating Makefile.inc into existing projects 30
95 8 Fangle awk source code 37
97 8.2 Catching errors 38
98 9 TeXmacs args 39
99 10 LaTeX and lstlistings 41
100 10.1 Additional lstlistings parameters 41
101 10.2 Parsing chunk arguments 43
102 10.3 Expanding parameters in the text 44
103 11 Language Modes & Quoting 47
104 11.1 Modes explanation 47
105 11.2 Modes affect included chunks 47
106 11.3 Language Mode Definitions 48
109 11.3.3 Parentheses, Braces and Brackets 50
110 11.3.4 Customizing Standard Modes 51
116 11.4 Quoting scenarios 56
117 11.4.1 Direct quoting 56
119 11.6 A non-recursive mode tracker 58
120 11.6.1 Constructor 58
123 11.6.3.1 One happy chunk 62
125 11.7 Escaping and Quoting 63
126 12 Recognizing Chunks 65
128 12.1.1 TeXmacs 65
129 12.1.2 lstlistings 66
131 12.2.1 TeXmacs 67
134 12.3.1 lstlistings 68
136 12.4 Chunk contents 69
137 12.4.1 lstlistings 70
138 13 Processing Options 73
139 14 Generating the Output 75
140 14.1 Assembling the Chunks 76
141 14.1.1 Chunk Parts 76
144 17 Fangle LaTeX source code 87
145 17.1 fangle module 87
146 17.1.1 The Chunk style 87
147 17.1.2 The chunkref style 88
149 17.2.1 The chunk command 89
150 17.2.1.1 Chunk parameters 90
151 17.2.2 The noweb styled caption 90
152 17.2.3 The chunk counter 90
153 17.2.4 Cross references 93
155 18 Extracting fangle 95
156 18.1 Extracting from LyX 95
157 18.2 Extracting documentation 95
158 18.3 Extracting from the command line 96
161 20 Chunk Parameters 101
163 20.2 TeXmacs 101
164 21 Compile-log-lyx 103
166 Chapter 1 Introduction to Literate Programming
167 Todo: Should really follow on from a part-0 explanation of what literate programming is.
168 Chapter 2 Running Fangle
169 Fangle is a replacement for noweb, which consists of notangle, noroots and noweave.
170 Like notangle and noroots, fangle can read multiple named files, or from stdin.
172 The -r option causes fangle to behave like noroots.
173 fangle -r filename.tex
174 will print out the fangle roots of a tex file.
175 Unlike the noroots command, the printed roots are not enclosed in angle brackets, e.g. <<name>>, unless at least one of the roots is defined using the notangle notation <<name>>=.
176 Also, unlike noroots, it prints out all roots --- not just those that are not used elsewhere. I find that a root's not being used elsewhere doesn't make it particularly top level; a so-called top-level root could be included in another root as well.
177 My convention is that top level roots to be extracted begin with ./ and have the form of a filename.
178 Makefile.inc, discussed in 6, can automatically extract all such sources prefixed with ./
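For example, run against this document itself (the exact list depends on the document), fangle -r fangle.txt would print roots including lines like:

./fangle
./Makefile
./Makefile.inc

along with the internal, non-./ roots; Makefile.inc then extracts each ./ root into a file of that name.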
180 notangle's -R and -L options are supported.
181 If you are using LyX or LaTeX, the standard way to extract a file would be:
182 fangle -R./Makefile.inc fangle.tex > ./Makefile.inc
183 If you are using TeXmacs, the standard way to extract a file would similarly be:
184 fangle -R./Makefile.inc fangle.txt > ./Makefile.inc
185 TeXmacs users would obtain the text file with a verbatim export from TeXmacs which can be done on the command line with texmacs -s -c fangle.tm fangle.txt -q
186 Unlike the notangle command, the -L option, which generates C pre-processor #line style line-number directives, does not break the indentation of the generated file.
187 Also, thanks to mode tracking (described in 11), the -L option does not interrupt (and break) multi-line C macros either.
188 This does mean that sometimes the compiler might calculate the source line wrongly when generating error messages in such cases, but there isn't any other way around it if multi-line macros include other chunks.
189 Future releases will include a mapping file so that line/character references from the C compiler can be converted to the correct part of the source document.
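For illustration (the file name and line number here are invented, and the exact format may differ), a chunk line extracted with -L from the TeX document is preceded by a C pre-processor line marker of roughly this form:

#line 1234 "fangle.tex"

so that compiler diagnostics point back into the literate document rather than into the extracted file.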
190 2.3 Formatting the document
191 The noweave replacement is built into the editing and formatting environment for TeXmacs, LyX (which uses LaTeX), and even for raw LaTeX.
192 Use of fangle with TeXmacs, LyX and LaTeX is explained in the next few chapters.
193 Chapter 3 Using Fangle with LaTeX
194 Because the noweave replacement is implemented in LaTeX, there is no processing stage required before running the LaTeX command. Of course, LaTeX may need running two or more times, so that the code chunk references can be fully calculated.
195 The formatting is managed by a set of macros shown in 17, and can be included with:
196 \usepackage{fangle.sty}
197 Norman Ramsey's original noweb.sty package is currently required as it is used for formatting the code chunk captions.
198 The listings.sty package is required, and is used for formatting the code chunks and syntax highlighting.
199 The xargs.sty package is also required, and makes writing LaTeX macros so much more pleasant.
200 To do: Add examples of use of Macros
202 Chapter 4 Using Fangle with LyX
203 LyX uses the same LaTeX macros shown in 17 as part of a LyX module file fangle.module, which automatically includes the macros in the document preamble provided that the fangle LyX module is used in the document.
204 4.1 Installing the LyX module
205 Copy fangle.module to your LyX layouts directory, which for unix users will be ~/.lyx/layouts
206 In order to make the new literate styles available, you will need to reconfigure LyX by clicking Tools->Reconfigure, and then restart LyX.
207 4.2 Obtaining a decent mono font
208 The syntax-highlighting features of lstlistings make use of bold; however, a mono-space tt font is used to typeset the listings. Obtaining a bold tt font can be either impossibly difficult or amazingly easy. I spent many hours at it, following complicated instructions from those who had spent many hours over it, and was finally delivered the simple solution on the LyX mailing list.
210 The simple way was to add this to my preamble:
212 \renewcommand{\ttdefault}{txtt}
215 The next simplest way was to use the AMS poor-man's-bold, by adding this to the preamble:
217 %\renewcommand{\ttdefault}{txtt}
218 %somehow make \pmb be the command for bold, forgot how, sorry, above line not work
219 It works, but looks wretched on the dvi viewer.
221 The lstlistings documentation suggests using Luximono.
222 Luximono was installed according to the instructions in Ubuntu Forums thread 1159181 1. http://ubuntuforums.org/showthread.php?t=1159181 ^1 with tips from miknight 2. http://miknight.blogspot.com/2005/11/how-to-install-luxi-mono-font-in.html ^2 stating that sudo updmap --enable MixedMap ul9.map is required. It looks fine in PDF and PS view but still looks rotten in dvi view.
223 4.3 Formatting your LyX document
224 It is not necessary to base your literate document on any of the original LyX literate classes, so select a regular class for your document type.
225 Add the new module Fangle Literate Listings and also Logical Markup, which is very useful.
226 In the drop-down style listbox you should notice a new style defined, called Chunk.
227 When you wish to insert a literate chunk, you enter its plain name in the Chunk style, instead of the old noweb method that uses <<name>>= type tags. In the line (or paragraph) following the chunk name, you insert a listing with: Insert->Program Listing.
228 Inside the white listing box you can type (or paste using shift+ctrl+V) your listing. There is no need to use ctrl+enter at the end of lines as with some older LyX literate techniques --- just press enter as normal.
229 4.3.1 Customising the listing appearance
230 The code is formatted using the lstlistings package. The chunk style doesn't just define the chunk name, but can also define any other chunk options supported by the lstlistings package's \lstset command. In fact, what you type in the chunk style is raw LaTeX. If you want to set the chunk language without having to right-click the listing, just add ,language=C after the chunk name. (Currently the language will affect all subsequent listings, so you may need to specify ,language= quite a lot).
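For example, this Chunk style content (the chunk name is invented for the example) names the chunk, sets its language and declares two parameters in one go:

mychunk, language=C, params=prefix;suffix

Because the style content is raw LaTeX handed to lstlistings, fangle effectively treats it as name=mychunk, language=C, params=prefix;suffix.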
231 To do: so fix the bug
233 Of course you can do this by editing the listings box advanced properties by right-clicking on the listings box, but that takes longer, and you can't see at-a-glance what the advanced settings are while editing the document; also advanced settings apply only to that box --- the chunk settings apply through the rest of the document3. It ought to apply only to subsequent chunks of the same name. I'll fix that later ^3.
234 To do: So make sure they only apply to chunks of that name
236 4.3.2 Global customisations
237 As lstlistings is used to set the code chunks, its \lstset command can be used in the preamble to set some document-wide settings.
238 If your source has many words with long sequences of capital letters, then columns=fullflexible may be a good idea, or the capital letters will get crowded. (I think lstlistings ought to use a slightly smaller font for capital letters so that they still fit).
239 The font family \ttfamily looks more normal for code, but has no bold (an alternate typewriter font is used).
240 With \ttfamily, I must also specify columns=fullflexible or the wrong letter spacing is used.
241 In my LaTeX pre-amble I usually specialise my code format with:
243 19a <document-preamble[1](
\v), lang=tex> ≡
244 ________________________________________________________________________
246 2 | numbers=left, stepnumber=1, numbersep=5pt,
247 3 | breaklines=false,
248 4 | basicstyle=\footnotesize\ttfamily,
249 5 | numberstyle=\tiny,
251 7 | columns=fullflexible,
252 8 | numberfirstline=true
254 |________________________________________________________________________
258 4.4 Configuring the build script
259 You can invoke code extraction and building from the LyX menu option Document->Build Program.
260 First, make sure you don't have a conversion defined for LyX->Program.
261 From the menu Tools->Preferences, add a conversion from Latex(Plain)->Program as:
262 set -x ; fangle -Rlyx-build $$i |
263 env LYX_b=$$b LYX_i=$$i LYX_o=$$o LYX_p=$$p LYX_r=$$r bash
264 (But don't cut-n-paste it from this document or you may be pasting a multi-line string which will break your LyX preferences file).
265 I hope that one day, LyX will set these into the environment when calling the build script.
266 You may also want to consider adding options to this conversion...
267 parselog=/usr/share/lyx/scripts/listerrors
268 ...but if you do you will lose your stderr4. There is some bash plumbing to get a copy of stderr but this footnote is too small ^4.
269 Now, a shell script chunk called lyx-build will be extracted and run whenever you choose the Document->Build Program menu item.
270 This document was originally managed using LyX, and the lyx-build script for this document is shown here for historical reference.
271 lyx -e latex fangle.lyx && \
272 fangle fangle.lyx > ./autoboot
273 This looks simple enough, but as mentioned, fangle has to be had from somewhere before it can be extracted.
275 When the lyx-build chunk is executed, the current directory will be a temporary directory, and LYX_SOURCE will refer to the tex file in this temporary directory. This is unfortunate as our makefile wants to run from the project directory where the LyX file is kept.
276 We can extract the project directory from $$r, and derive the probable LyX filename from the noweb file that LyX generated.
278 19b <lyx-build-helper[1](
\v), lang=sh> ≡ 95b⊳
279 ________________________________________________________________________
280 1 | PROJECT_DIR="$LYX_r"
281 2 | LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
283 4 | TEX_SRC="$TEX_DIR/$LYX_i"
284 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
285 And then we can define a lyx-build fragment similar to the autoboot fragment.
287 20a <lyx-build[1](
\v), lang=sh> ≡ 95a⊳
288 ________________________________________________________________________
290 2 | «lyx-build-helper 19b»
291 3 | cd $PROJECT_DIR || exit 1
293 5 | #/usr/bin/fangle -filter ./notanglefix-filter \
294 6 | # -R./Makefile.inc "../../noweb-lyx/noweb-lyx3.lyx" \
295 7 | # | sed '/NOWEB_SOURCE=/s/=.*/=samba4-dfs.lyx/' \
296 8 | # > ./Makefile.inc
298 10 | #make -f ./Makefile.inc fangle_sources
299 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
301 Chapter 5 Using Fangle with TeXmacs
302 To do: Write this chapter
304 Chapter 6 Fangle with Makefiles
305 Here we describe a Makefile.inc that you can include in your own Makefiles, or glue as a recursive make to other projects.
306 Makefile.inc will cope with extracting all the other source files from this or any specified literate document and keeping them up to date.
307 It may also be included by a Makefile or Makefile.am defined in a literate document to automatically deal with the extraction of source files and documents during normal builds.
308 Thus, if Makefile.inc is included into a main project makefile it adds rules for the source files, capable of extracting the source files from the literate document.
309 6.1 A word about makefile formats
310 Whitespace formatting is very important in a Makefile. The first character of each action line must be a TAB.
311 target: pre-requisite
↦action
314 This requires that the literate programming environment have the ability to represent a TAB character in a way that fangle will generate an actual TAB character.
315 We also adopt a convention that code chunks whose names begin with ./ should always be automatically extracted from the document. Code chunks whose names do not begin with ./ are for internal reference. Such chunks may be extracted directly, but will not be automatically extracted by this Makefile.
316 6.2 Extracting Sources
317 Our makefile has two parts; variables must be defined before the targets that use them.
318 As we progress through this chapter, explaining concepts, we will be adding lines to <Makefile.inc-vars 23b> and <Makefile.inc-targets 24c> which are included in <./Makefile.inc 23a> below.
320 23a <./Makefile.inc[1](
\v), lang=make> ≡
321 ________________________________________________________________________
322 1 | «Makefile.inc-vars 23b»
323 2 | «Makefile.inc-default-targets 28a»
324 3 | «Makefile.inc-targets 24c»
325 |________________________________________________________________________
328 We first define a placeholder for the tool fangle in case it cannot be found in the path.
330 23b <Makefile.inc-vars[1](
\v), lang=make> ≡ 24a⊳
331 ________________________________________________________________________
334 3 | RUN_FANGLE=$(AWK) -f $(FANGLE)
335 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
336 We also define a placeholder for LITERATE_SOURCE to hold the name of this document. This will normally be passed on the command line or set by the including makefile.
338 24a <Makefile.inc-vars[2](
\v) ⇑23b, lang=> +≡ ⊲23b 24b▿
339 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
340 4 | #LITERATE_SOURCE=
341 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
342 Fangle cannot process LyX or TeXmacs documents directly, so the first stage is to convert these to more suitable text-based formats1. LyX and TeXmacs formats are text-based, but not suitable for fangle ^1.
343 6.2.1 Converting from LyX to LaTeX
344 The first stage will always be to convert the LyX file to a LaTeX file. Fangle must run on a TeX file because the LyX command server-goto-file-line2. The LyX command server-goto-file-line is used to position the LyX cursor at the compiler errors. ^2 requires that the line number provided be a line of the TeX file, and always maps this to the line in the LyX document. We use server-goto-file-line when moving the cursor to error lines during compile failures.
345 The command lyx -e literate fangle.lyx will produce fangle.tex, a TeX file; so we define a make target to be the same as the LyX file but with the .tex extension.
346 The EXTRA_DIST is for automake support so that the TeX files will automatically be distributed with the source, to help those who don't have LyX installed.
348 24b <Makefile.inc-vars[3](
\v) ⇑23b, lang=> +≡ ▵24a 24d▿
349 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
350 5 | LYX_SOURCE=$(LITERATE_SOURCE) # but only the .lyx files
351 6 | TEX_SOURCE=$(LYX_SOURCE:.lyx=.tex)
352 7 | EXTRA_DIST+=$(TEX_SOURCE)
353 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
354 We then specify that the TeX source is to be generated from the LyX source.
356 24c <Makefile.inc-targets[1](
\v), lang=make> ≡ 25a⊳
357 ________________________________________________________________________
358 1 | .SUFFIXES: .tex .lyx
362 5 | ↦rm -f -- $(TEX_SOURCE)
364 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
365 6.2.2 Converting from TeXmacs
366 Fangle cannot process TeXmacs files directly3. but this is planned when TeXmacs uses XML as its native format ^3, so they must first be converted to text files.
367 The command texmacs -c fangle.tm fangle.txt -q will produce fangle.txt, a text file; so we define a make target to be the same as the TeXmacs file but with the .txt extension.
368 The EXTRA_DIST is for automake support so that the text files will automatically be distributed with the source, to help those who don't have TeXmacs installed.
370 24d <Makefile.inc-vars[4](
\v) ⇑23b, lang=> +≡ ▵24b 25b⊳
371 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
372 8 | TEXMACS_SOURCE=$(LITERATE_SOURCE) # but only the .tm files
373 9 | TXT_SOURCE=$(LITERATE_SOURCE:.tm=.txt)
374 10 | EXTRA_DIST+=$(TXT_SOURCE)
375 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
376 To do: Add loop around each $< so multiple targets can be specified
379 25a <Makefile.inc-targets[2](
\v) ⇑24c, lang=> +≡ ⊲24c 25d▿
380 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
381 7 | .SUFFIXES: .txt .tm
383 9 | ↦texmacs -s -c $< $@ -q
384 10 | .PHONY: clean_txt
386 12 | ↦rm -f -- $(TXT_SOURCE)
387 13 | clean: clean_txt
388 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
389 6.3 Extracting Program Source
390 The program source is extracted using fangle, which is designed to operate on text or LaTeX documents4. LaTeX documents are just slightly special text documents ^4.
392 25b <Makefile.inc-vars[5](
\v) ⇑23b, lang=> +≡ ⊲24d 25c▿
393 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
394 11 | FANGLE_SOURCE=$(TXT_SOURCE)
395 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
396 The literate document can result in any number of source files, but not all of these will be changed each time the document is updated. We certainly don't want to update the timestamps of these files and cause the whole source tree to be recompiled just because the literate explanation was revised. We use cpif from the Noweb tools to avoid updating the file if the content has not changed, but should probably write our own.
397 However, if a source file is not updated, then the literate document will still have a newer time-stamp, and the makefile would re-attempt to extract that source file every time, which would be a waste of time.
398 Because of this, we use a stamp file which is always updated each time the sources are fully extracted from the LaTeX document. If the stamp file is newer than the document, then we can avoid an attempt to re-extract any of the sources. Because this stamp file is only updated when extraction is complete, it is safe for the user to interrupt the build-process mid-extraction.
399 We use echo rather than touch to update the stamp file because the touch command does not work very well over an sshfs mount that I was using.
401 25c <Makefile.inc-vars[6](
\v) ⇑23b, lang=> +≡ ▵25b 26a⊳
402 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
403 12 | FANGLE_SOURCE_STAMP=$(FANGLE_SOURCE).stamp
404 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
406 25d <Makefile.inc-targets[3](
\v) ⇑24c, lang=> +≡ ▵25a 26b⊳
407 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
408 14 | $(FANGLE_SOURCE_STAMP): $(FANGLE_SOURCE) \
409 15 | ↦ $(FANGLE_SOURCES) ; \
410 16 | ↦echo -n > $(FANGLE_SOURCE_STAMP)
412 18 | ↦rm -f $(FANGLE_SOURCE_STAMP)
413 19 | clean: clean_stamp
414 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
415 6.4 Extracting Source Files
416 We compute FANGLE_SOURCES to hold the names of all the source files defined in the document. We compute this only once, by means of := in the assignment. The sed deletes any << and >> which may surround the root names (for compatibility with Noweb's noroots command).
417 As we use chunk names beginning with ./ to denote top level fragments that should be extracted, we filter out all fragments that do not begin with ./
418 Note 1. FANGLE_PREFIX is set to ./ by default, but whatever it may be overridden to, the prefix is replaced by a literal ./ before extraction so that files will be extracted in the current directory whatever the prefix. This helps with namespace or sub-project prefixes like documents: for chunks like documents:docbook/intro.xml
419 To do: This doesn't work though, because it loses the full name and doesn't know what to extact!
422 26a <Makefile.inc-vars[7](
\v) ⇑23b, lang=> +≡ ⊲25c 26e▿
423 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
424 13 | FANGLE_PREFIX:=\.\/
425 14 | FANGLE_SOURCES:=$(shell \
426 15 | $(RUN_FANGLE) -r $(FANGLE_SOURCE) |\
427 16 | sed -e 's/^[<][<]//;s/[>][>]$$//;/^$(FANGLE_PREFIX)/!d' \
428 17 | -e 's/^$(FANGLE_PREFIX)/\.\//' )
429 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
430 The target below, echo_fangle_sources, is a helpful debugging target and shows the names of the files that would be extracted.
432 26b <Makefile.inc-targets[4](
\v) ⇑24c, lang=> +≡ ⊲25d 26c▿
433 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
434 20 | .PHONY: echo_fangle_sources
435 21 | echo_fangle_sources: ; @echo $(FANGLE_SOURCES)
436 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
437 We define a convenient target called fangle_sources so that make -f Makefile.inc fangle_sources will re-extract the source if the literate document has been updated.
439 26c <Makefile.inc-targets[5](
\v) ⇑24c, lang=> +≡ ▵26b 26d▿
440 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
441 22 | .PHONY: fangle_sources
442 23 | fangle_sources: $(FANGLE_SOURCE_STAMP)
443 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
444 And also a convenient target to remove extracted sources.
446 26d <Makefile.inc-targets[6](
\v) ⇑24c, lang=> +≡ ▵26c 27e⊳
447 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
448 24 | .PHONY: clean_fangle_sources
449 25 | clean_fangle_sources: ; \
450 26 | rm -f -- $(FANGLE_SOURCE_STAMP) $(FANGLE_SOURCES)
451 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
452 We now look at the extraction of the source files.
453 This makefile macro if_extension takes 4 arguments: the filename $(1), some extensions to match $(2), a shell command to return if the filename does match the extensions $(3), and a shell command to return if it does not match the extensions $(4).
455 26e <Makefile.inc-vars[8](
\v) ⇑23b, lang=> +≡ ▵26a 26f⊳
456 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
457 18 | if_extension=$(if $(findstring $(suffix $(1)),$(2)),$(3),$(4))
458 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
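As a rough illustration of how the macro expands (these calls are not part of Makefile.inc, just an example):

$(call if_extension,foo.c,.c .h,matched,unmatched)    # expands to: matched
$(call if_extension,foo.sh,.c .h,matched,unmatched)   # expands to: unmatched

since $(suffix foo.c) is .c, which is found in the list .c .h, while .sh is not.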
459 For some source files like C files, we want to output the line number and filename of the original LaTeX document from which the source came5. I plan to replace this option with a separate mapping file so as not to pollute the generated source, and also to allow a code pretty-printing reformatter like indent to be able to re-format the file and adjust for changes through comparing the character streams. ^5.
460 To make this easier we define the file extensions for which we want to do this.
462 27a <Makefile.inc-vars[9](
\v) ⇑23b, lang=> +≡ ⊲26e 27a▿
463 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
464 19 | C_EXTENSIONS=.c .h
465 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
466 We can then use the if_extension macro to define a macro which expands out to the -L option if fangle is being invoked on a C source file, so that C compile errors will refer to the line number in the TeX document.
468 27b <Makefile.inc-vars[10](
\v) ⇑23b, lang=> +≡ ▵26f 27b▿
469 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
471 21 | nf_line=-L -T$(TABS)
472 22 | fangle=$(RUN_FANGLE) $(call if_extension,$(2),$(C_EXTENSIONS),$(nf_line)) -R"$(2)" $(1)
473 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
474 We can use a similar trick to define an indent macro which takes just the filename as an argument and can return a pipeline stage calling the indent command. Indent can be turned off with make fangle_sources indent=
476 27c <Makefile.inc-vars[11](
\v) ⇑23b, lang=> +≡ ▵27a 27c▿
477 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
478 23 | indent_options=-npro -kr -i8 -ts8 -sob -l80 -ss -ncs
479 24 | indent=$(call if_extension,$(1),$(C_EXTENSIONS), | indent $(indent_options))
480 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
481 We now define the pattern for extracting a file. The files are written using noweb's cpif so that the file timestamp will not be touched if the contents haven't changed. This avoids the need to rebuild the entire project because of a typographical change in the documentation, or when none or only a few of the C source files have changed.
483 27d <Makefile.inc-vars[12](
\v) ⇑23b, lang=> +≡ ▵27b 27d▿
484 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
485 25 | fangle_extract=@mkdir -p $(dir $(1)) && \
486 26 | $(call fangle,$(2),$(1)) > "$(1).tmp" && \
487 27 | cat "$(1).tmp" $(indent) | cpif "$(1)" \
488 28 | && rm -f -- "$(1).tmp" || \
489 29 | (echo error fangling $(1) from $(2) ; exit 1)
490 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
491 We define a target which will extract or update all sources. To do this we first define a makefile template that can do this for any source file in the LaTeX document.
493 27e <Makefile.inc-vars[13](
\v) ⇑23b, lang=> +≡ ▵27c 28b⊳
494 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
495 30 | define FANGLE_template
497 32 | ↦$$(call fangle_extract,$(1),$(2))
498 33 | FANGLE_TARGETS+=$(1)
500 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
501 We then enumerate the discovered FANGLE_SOURCES to generate a makefile rule for each one using the makefile template we defined above.
503 27f <Makefile.inc-targets[7](
\v) ⇑24c, lang=> +≡ ⊲26d 27f▿
504 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
505 27 | $(foreach source,$(FANGLE_SOURCES),\
506 28 | $(eval $(call FANGLE_template,$(source),$(FANGLE_SOURCE))) \
508 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
509 These will all be built with FANGLE_SOURCE_STAMP.
510 We also remove the generated sources on a make distclean.
512 27g <Makefile.inc-targets[8](
\v) ⇑24c, lang=> +≡ ▵27e 28c⊳
513 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
514 30 | _distclean: clean_fangle_sources
515 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
516 6.5 Extracting Documentation
517 We then identify the intermediate stages of the documentation and their build and clean targets.
519 28a <Makefile.inc-default-targets[1](
\v), lang=> ≡
520 ________________________________________________________________________
521 1 | .PHONY: clean_pdf
522 |________________________________________________________________________
526 6.5.1.1 Running pdflatex
527 We produce a pdf file from the tex file.
529 28b <Makefile.inc-vars[14](
\v) ⇑23b, lang=> +≡ ⊲27d 28d▿
530 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
531 35 | FANGLE_PDF+=$(TEX_SOURCE:.tex=.pdf)
532 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
533 We run pdflatex twice to be sure that the table of contents and aux files are up to date. We are certainly required to run pdflatex at least twice if these files do not exist.
535 28c <Makefile.inc-targets[9](
\v) ⇑24c, lang=> +≡ ⊲27f 28e▿
536 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
537 31 | .SUFFIXES: .tex .pdf
539 33 | ↦pdflatex $< && pdflatex $<
542 36 | ↦rm -f -- $(FANGLE_PDF) $(TEX_SOURCE:.tex=.toc) \
543 37 | ↦ $(TEX_SOURCE:.tex=.log) $(TEX_SOURCE:.tex=.aux)
544 38 | clean_pdf: clean_pdf_tex
545 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
546 6.5.2 Formatting TeXmacs
547 TeXmacs can produce a PDF file directly.
549 28d <Makefile.inc-vars[15](
\v) ⇑23b, lang=> +≡ ▵28b 28f⊳
550 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
551 36 | FANGLE_PDF+=$(LITERATE_SOURCE:.tm=.pdf)
552 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
553 To do: Outputting the PDF may not be enough to update the links and page references. I think
554 we need to update twice, generate a pdf, update twice mode and generate a new PDF.
555 Basically the PDF export of TeXmacs is pretty rotten and doesn't work properly from the CLI
558 28e <Makefile.inc-targets[10](
\v) ⇑24c, lang=> +≡ ▵28c 29a⊳
559 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
560 39 | .SUFFIXES: .tm .pdf
562 41 | ↦texmacs -s -c $< $@ -q
564 43 | clean_pdf_texmacs:
565 44 | ↦rm -f -- $(FANGLE_PDF)
566 45 | clean_pdf: clean_pdf_texmacs
567 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
568 6.5.3 Building the Documentation as a Whole
569 Currently we only build pdf as a final format, but FANGLE_DOCS may later hold other output formats.
571 29a <Makefile.inc-vars[16](
\v) ⇑23b, lang=> +≡ ⊲28d
572 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
573 37 | FANGLE_DOCS=$(FANGLE_PDF)
574 |________________________________________________________________________
577 We also define fangle_docs as a convenient phony target.
579 29b <Makefile.inc-targets[11](
\v) ⇑24c, lang=> +≡ ⊲28e 29b▿
580 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
581 46 | .PHONY: fangle_docs
582 47 | fangle_docs: $(FANGLE_DOCS)
583 48 | docs: fangle_docs
584 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
585 And define a convenient clean_fangle_docs which we add to the regular clean target
587 29c <Makefile.inc-targets[12](
\v) ⇑24c, lang=> +≡ ▵29a
588 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
589 49 | .PHONY: clean_fangle_docs
590 50 | clean_fangle_docs: clean_tex clean_pdf
591 51 | clean: clean_fangle_docs
593 53 | distclean_fangle_docs: clean_tex clean_fangle_docs
594 54 | distclean: clean distclean_fangle_docs
595 |________________________________________________________________________
599 If Makefile.inc is included into Makefile, then extracted files can be updated with this command:
602 make -f Makefile.inc fangle_sources
603 6.7 Boot-strapping the extraction
604 As well as having the makefile extract or update the source files as part of its operation, it also seems convenient to have the makefile re-extract itself from this document.
605 It would also be convenient to have the code that extracts the makefile from this document also be part of this document; however, we have to start somewhere and this unfortunately requires us to type at least a few words by hand to start things off.
606 Therefore we will have a minimal root fragment, which, when extracted, can cope with extracting the rest of the source. This shell script fragment can do that. Its name is *, out of regard for Noweb, but when extracted it might better be called autoupdate.
610 29d <*[1](
\v), lang=sh> ≡
611 ________________________________________________________________________
614 3 | MAKE_SRC="${1:-${NW_LYX:-../../noweb-lyx/noweb-lyx3.lyx}}"
615 4 | MAKE_SRC=‘dirname "$MAKE_SRC"‘/‘basename "$MAKE_SRC" .lyx‘
616 5 | NOWEB_SRC="${2:-${NOWEB_SRC:-$MAKE_SRC.lyx}}"
617 6 | lyx -e latex $MAKE_SRC
619 8 | fangle -R./Makefile.inc ${MAKE_SRC}.tex \
620 9 | | sed "/FANGLE_SOURCE=/s/^/#/;T;aNOWEB_SOURCE=$FANGLE_SRC" \
621 10 | | cpif ./Makefile.inc
623 12 | make -f ./Makefile.inc fangle_sources
624 |________________________________________________________________________
627 The general Makefile can be invoked with ./autoboot and can also be included into any automake file to automatically re-generate the source files.
628 The autoboot can be extracted with this command:
629 lyx -e latex fangle.lyx && \
630 fangle fangle.lyx > ./autoboot
631 This looks simple enough, but as mentioned, fangle has to be had from somewhere before it can be extracted.
632 On a unix system this will extract fangle.module and the fangle awk script, and run some basic tests.
633 To do: cross-ref to test chapter when it is a chapter all on its own
635 6.8 Incorporating Makefile.inc into existing projects
636 If you are writing a literate module of an existing non-literate program you may find it easier to use a slight recursive make instead of directly including Makefile.inc in the project's makefile.
637 This way there is less chance of definitions in Makefile.inc interfering with definitions in the main makefile, or with definitions in other Makefile.inc files from other literate modules of the same project.
638 To do this we add some glue to the project makefile that invokes Makefile.inc in the right way. The glue works by adding a .PHONY target to call the recursive make, and adding this target as an additional pre-requisite to the existing targets.
639 Example Sub-module of existing system
640 In this example, we are building module.so as a literate module of a larger project.
641 We will show the sort of glue that can be inserted into the project's Makefile, or more likely a regular Makefile included in or invoked by the project's Makefile.
643 30a <makefile-glue[1](
\v), lang=> ≡ 30b▿
644 ________________________________________________________________________
645 1 | module_srcdir=modules/module
646 2 | MODULE_SOURCE=module.tm
647 3 | MODULE_STAMP=$(MODULE_SOURCE).stamp
648 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
649 The existing build system may already have a build target for module.o, but we just add another pre-requisite to that. In this case we use module.tm.stamp as a pre-requisite, the stamp file's modified time indicating when all sources were extracted6. If the project's build system does not know how to build the module from the extracted sources, then just add build actions here as normal. ^6.
651 30b <makefile-glue[2](
\v) ⇑30a, lang=make> +≡ ▵30a 30c▿
652 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
653 4 | $(module_srcdir)/module.o: $(module_srcdir)/$(MODULE_STAMP)
654 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
655 The target for this new pre-requisite will be generated by a recursive make using Makefile.inc, which will make sure that the source is up to date before it is built by the main project's makefile.
657 30c <makefile-glue[3](
\v) ⇑30a, lang=> +≡ ▵30b 31a⊳
658 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
659 5 | $(module_srcdir)/$(MODULE_STAMP): $(module_srcdir)/$(MODULE_SOURCE)
660 6 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc fangle_sources LITERATE_SOURCE=$(MODULE_SOURCE)
661 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
662 We can do similar glue for the docs, clean and distclean targets. In this example the main project was using a double colon for these targets, so we must use the same in our glue.
664 31a <makefile-glue[4](
\v) ⇑30a, lang=> +≡ ⊲30c
665 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
666 7 | docs:: docs_module
667 8 | .PHONY: docs_module
669 10 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc docs LITERATE_SOURCE=$(MODULE_SOURCE)
671 12 | clean:: clean_module
672 13 | .PHONY: clean_module
674 15 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc clean LITERATE_SOURCE=$(MODULE_SOURCE)
676 17 | distclean:: distclean_module
677 18 | .PHONY: distclean_module
678 19 | distclean_module:
679 20 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc distclean LITERATE_SOURCE=$(MODULE_SOURCE)
680 |________________________________________________________________________
683 We could do similarly for install targets to install the generated docs.
685 Chapter 7 Fangle Makefile
686 We use the copyright notice from chapter 2, and the Makefile.inc from chapter 6.
688 35a <./Makefile[1](
\v), lang=make> ≡
689 ________________________________________________________________________
690 1 | # «gpl3-copyright 4a»
692 3 | «make-fix-make-shell 55c»
694 5 | LITERATE_SOURCE=fangle.tm
696 7 | all: fangle_sources
697 8 | include Makefile.inc
703 14 | test: fangle.txt
704 15 | ↦$(RUN_FANGLE) -R"test:*" fangle.txt > test.sh
705 16 | ↦bash test.sh ; echo pass $$?
706 |________________________________________________________________________
709 Chapter 8 Fangle awk source code
710 We use the copyright notice from chapter 2.
712 37a <./fangle[1](
\v), lang=awk> ≡ 37b▿
713 ________________________________________________________________________
714 1 | #! /usr/bin/awk -f
715 2 | # «gpl3-copyright 4a»
716 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
717 We also use code from Arnold Robbins' public domain getopt (1993 revision) defined in 85a, and naturally want to attribute this appropriately.
719 37b <./fangle[2](
\v) ⇑37a, lang=> +≡ ▵37a 37c▿
720 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
721 3 | # NOTE: Arnold Robbins public domain getopt for awk is also used:
722 4 | «getopt.awk-header 83a»
723 5 | «getopt.awk-getopt() 83c»
725 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
726 And include the following chunks (which are explained further on) to make up the program:
728 37c <./fangle[3](
\v) ⇑37a, lang=> +≡ ▵37b 42a⊳
729 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
730 7 | «helper-functions 38d»
731 8 | «mode-tracker 62b»
732 9 | «parse_chunk_args 44a»
733 10 | «chunk-storage-functions 81b»
734 11 | «output_chunk_names() 75d»
735 12 | «output_chunks() 75e»
736 13 | «write_chunk() 76a»
737 14 | «expand_chunk_args() 44b»
740 17 | «recognize-chunk 65a»
742 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
744 The portable way to erase an array in awk is to split the empty string into it, so we define a fangle macro that can clear an array, like this:
746 37d <awk-delete-array[1](ARRAY
\v\v), lang=awk> ≡
747 ________________________________________________________________________
748 1 | split("", ${ARRAY});
749 |________________________________________________________________________
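The idiom can be tried on its own; this one-line awk program (not part of fangle) fills an array, clears it with split, and prints the number of elements remaining, which is 0:

awk 'BEGIN { a["x"]=1; a["y"]=2; split("", a); n=0; for (k in a) n++; print n }'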
752 For debugging it is sometimes convenient to be able to dump the contents of an array to stderr, and so this macro is also useful.
754 37e <dump-array[1](ARRAY
\v\v), lang=awk> ≡
755 ________________________________________________________________________
756 1 | print "\nDump: ${ARRAY}\n--------\n" > "/dev/stderr";
757 2 | for (_x in ${ARRAY}) {
758 3 | print _x "=" ${ARRAY}[_x] "\n" > "/dev/stderr";
760 5 | print "========\n" > "/dev/stderr";
761 |________________________________________________________________________
765 Fatal errors are issued with the error function:
767 38a <error()[1](
\v), lang=awk> ≡ 38b▿
768 ________________________________________________________________________
769 1 | function error(message)
771 3 | print "ERROR: " FILENAME ":" FNR " " message > "/dev/stderr";
774 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
775 and likewise for non-fatal warnings:
777 38b <error()[2](
\v) ⇑38a, lang=awk> +≡ ▵38a 38c▿
778 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
779 6 | function warning(message)
781 8 | print "WARNING: " FILENAME ":" FNR " " message > "/dev/stderr";
784 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
785 and debug output too:
787 38c <error()[3](
\v) ⇑38a, lang=awk> +≡ ▵38b
788 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
789 11 | function debug_log(message)
791 13 | print "DEBUG: " FILENAME ":" FNR " " message > "/dev/stderr";
793 |________________________________________________________________________
796 To do: append=helper-functions
799 38d <helper-functions[1](
\v), lang=> ≡
800 ________________________________________________________________________
802 |________________________________________________________________________
805 Chapter 9 TeXmacs args
806 TeXmacs functions with arguments1. or function declarations with parameters ^1 appear like this:
807 blah(I came, I saw, I conquered^K, and then went home asd^K), where "I came, I saw, I conquered" is the first argument, "^K, " is the argument separator, "and then went home asd" is the next argument, and "^K)" is the terminator of the argument list.
808 Arguments commence after the opening parenthesis. The first argument runs up till the next ^K.
809 If the following character is a comma (,) then another argument follows. If the next character after the comma is a space character, then it is also eaten. The fangle stylesheet emits ^K followed by a comma and a space as the separator, but the fangle untangler will forgive a missing space.
810 If the following character is ) then this is the terminator and there are no more arguments.
812 39a <constants[1](
\v), lang=> ≡ 81a⊳
813 ________________________________________________________________________
814 1 | ARG_SEPARATOR=sprintf("%c", 11);
815 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
816 To process the text in this fashion, we split the string on ^K
819 39b <get_chunk_args[1](
\v), lang=> ≡
820 ________________________________________________________________________
821 1 | function get_texmacs_chunk_args(text, args, a, done) {
822 2 | split(text, args, ARG_SEPARATOR);
825 5 | for (a=1; (a in args); a++) if (a>1) {
826 6 | if (args[a] == "" || substr(args[a], 1, 1) == ")") done=1;
832 12 | if (substr(args[a], 1, 2) == ", ") args[a]=substr(args[a], 3);
833 13 | else if (substr(args[a], 1, 1) == ",") args[a]=substr(args[a], 2);
836 |________________________________________________________________________
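As a stand-alone sketch of the splitting step (this fragment is not part of fangle, and the argument text is invented), the effect of splitting on the ^K character can be seen like this:

BEGIN {
  ARG_SEPARATOR = sprintf("%c", 11);   # the ^K character
  text = "(first arg" ARG_SEPARATOR ", second arg" ARG_SEPARATOR ")";
  n = split(text, args, ARG_SEPARATOR);
  for (a = 1; a <= n; a++) print a ": [" args[a] "]";
}

which prints "(first arg", ", second arg" and ")" as the three fields, showing why the code above strips a leading comma and space from each argument and treats a leading ) as the terminator.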
839 Chapter 10 LaTeX and lstlistings
840 To do: Split LyX and TeXmacs parts
842 For LyX and LaTeX, the lstlistings package is used to format the lines of code chunks. You may recall from chapter XXX that arguments to a chunk definition are pure LaTeX code. This means that fangle needs to be able to parse LaTeX a little.
843 LaTeX arguments to lstlistings macros are a comma-separated list of key-value pairs, and values containing commas are enclosed in { braces } (which is to be expected for LaTeX).
844 A sample expression is:
845 name=thomas, params={a, b}, something, something-else
846 but we see that this is just a simpler form of this expression:
847 name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
848 We may consider that we need a function that can parse such LaTeX expressions and assign the values to an AWK associative array, perhaps using a recursive parser into a multi-dimensional hash1. as AWK doesn't have nested-hash support ^1, resulting in:
a[name] freddie
a[foo, bar] baz
a[foo, quux, quirk]
a[foo, quux, a] fleeg
a[etc]
856 Yet, on reflection, it seems that sometimes such nesting is not desirable, as the braces are also used to delimit values that contain commas --- we may consider that
857 name={williamson, freddie}
858 should assign williamson, freddie to name.
859 In fact we are not so interested in the detail so as to be bothered by this, which turns out to be a good thing for two reasons. Firstly, TeX has a malleable parser with no strict syntax, and secondly, whether or not williamson and freddie should count as two items will be context dependent anyway.
860 We need to parse this LaTeX for only one reason, which is that we are extending lstlistings to add some additional arguments which will be used to express chunk parameters and other chunk options.
861 10.1 Additional lstlistings parameters
862 Further on we define a \Chunk LaTeX macro whose arguments will consist of the chunk name, optionally followed by a comma and then a comma-separated list of arguments. In fact we will just need to prefix name= to the arguments in order to create valid lstlistings arguments.
863 There will be other arguments supported too:
864 params. As an extension to many literate-programming styles, fangle permits code chunks to take parameters and thus operate somewhat like C pre-processor macros, or like C++ templates. Chunk parameters are declared with a chunk argument called params, which holds a semi-colon separated list of parameters, like this:
865 achunk,language=C,params=name;address
866 addto. A named chunk that this chunk is to be included into. This saves the effort of having to declare another listing of the named chunk merely to include this one.
867 Function get_chunk_args() will accept two parameters, text being the text to parse, and values being an array to receive the parsed values as described above. The optional parameter path is used during recursion to build up the multi-dimensional array path.
869 42a <./fangle[4](
\v) ⇑37a, lang=> +≡ ⊲37c
870 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
871 19 | «get_chunk_args() 42b»
872 |________________________________________________________________________
876 42b <get_chunk_args()[1](
\v), lang=> ≡ 42c▿
877 ________________________________________________________________________
878 1 | function get_tex_chunk_args(text, values,
879 2 | # optional parameters
880 3 | path, # hierarchical precursors
883 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
884 The strategy is to parse the name, and then look for a value. If the value begins with a brace {, then we recurse and consume as much of the text as necessary, returning the remaining text when we encounter a leading close-brace }. This being the strategy --- and executed in a loop --- we realise that we must first look for the closing brace (perhaps preceded by white space) in order to terminate the recursion and return the remaining text.
886 42c <get_chunk_args()[2](
\v) ⇑42b, lang=> +≡ ▵42b
887 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
889 7 | split("", values);
890 8 | while(length(text)) {
891 9 | if (match(text, "^ *}(.*)", a)) {
894 12 | «parse-chunk-args 42d»
898 |________________________________________________________________________
901 We can see that the text could be inspected with this regex:
903 42d <parse-chunk-args[1](
\v), lang=> ≡ 43a⊳
904 ________________________________________________________________________
905 1 | if (! match(text, " *([^,=]*[^,= ]) *(([,=]) *(([^,}]*) *,* *(.*))|)$", a)) {
908 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
909 and that a will have the following values:
a[1]  name
a[2]  =freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
a[3]  =
a[4]  freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
a[5]  freddie
a[6]  , foo={bar=baz, quux={quirk, a=fleeg}}, etc
918 a[3] will be either = or , and signify whether the option named in a[1] has a value or not (respectively).
919 If the option does have a value, then if the expression substr(a[4],1,1) returns a brace { it will signify that we need to recurse:
921 43a <parse-chunk-args[2](
\v) ⇑42d, lang=> +≡ ⊲42d
922 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
924 5 | if (a[3] == "=") {
925 6 | if (substr(a[4],1,1) == "{") {
926 7 | text = get_tex_chunk_args(substr(a[4],2), values, path name SUBSEP);
928 9 | values[path name]=a[5];
932 13 | values[path name]="";
935 |________________________________________________________________________
938 We can test this function like this:
940 43b <gca-test.awk[1](
\v), lang=> ≡
941 ________________________________________________________________________
942 1 | «get_chunk_args() 42b»
946 5 | print get_tex_chunk_args("name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc", a);
948 7 | print "a[" b "] => " a[b];
951 |________________________________________________________________________
954 which should give this output:
956 43c <gca-test.awk-results[1](
\v), lang=> ≡
957 ________________________________________________________________________
958 1 | a[foo.quux.quirk] =>
959 2 | a[foo.quux.a] => fleeg
960 3 | a[foo.bar] => baz
962 5 | a[name] => freddie
963 |________________________________________________________________________
966 10.2 Parsing chunk arguments
967 Arguments to parameterized chunks are expressed in round brackets as a comma-separated list of optional arguments. For example, a chunk that is defined with:
968 \Chunk{achunk, params=name ; address}
970 \chunkref{achunk}(John Jones, jones@example.com)
971 An argument list may be as simple as in \chunkref{pull}(thing, otherthing) or as complex as:
972 \chunkref{pull}(things[x, y], get_other_things(a, "(all)"))
973 --- which for all its commas and quotes and parentheses represents only two parameters: things[x, y] and get_other_things(a, "(all)").
974 If we simply split the parameter list on commas, then the comma in things[x, y] would split it into two separate arguments: things[x and y] --- neither of which makes sense on its own.
975 One way to prevent this would be by refusing to split text between matching delimiters, such as [, ], (, ), {, } and most likely also ", " and ', '. Of course this also makes it impossible to pass such mis-matched code fragments as parameters, but I think that it would be hard for readers to cope with authors who would pass such unbalanced code fragments as chunk parameters2. I know that I couldn't cope with users doing such things, and although the GPL3 license prevents me from actually forbidding anyone from trying, if they want it to work they'll have to write the code themselves and not expect any support from me. ^2.
976 Unfortunately, the full set of matching delimiters may vary from language to language. In certain C++ template contexts, < and > would count as delimiters, and yet in other contexts they would not.
977 This puts me in the unfortunate position of having to parse (somewhat) all programming languages without knowing what they are!
978 However, if this universal mode-tracking is possible, then parsing the arguments would be trivial. Such a mode tracker is described in chapter 11 and used here with simplicity.
980 44a <parse_chunk_args[1](
\v), lang=> ≡
981 ________________________________________________________________________
982 1 | function parse_chunk_args(language, text, values, mode,
984 3 | c, context, rest)
986 5 | «new-mode-tracker
\v(context
\v, language
\v, mode
\v) 58b»
987 6 | rest = mode_tracker(context, text, values);
989 8 | for(c=1; c <= context[0, "values"]; c++) {
990 9 | values[c] = context[0, "values", c];
994 |________________________________________________________________________
997 10.3 Expanding parameters in the text
998 Within the body of the chunk, the parameters are referred to with: ${name} and ${address}. There is a strong case that a LaTeX style notation should be used, like \param{name} which would be expressed in the listing as =<\param{name}> and be rendered as ${name}. Such notation would make me go blind, but I do intend to adopt it.
999 We therefore need a function expand_chunk_args which will take a block of text, a list of permitted parameters, and the arguments which must substitute for the parameters.
1000 Here we split the text on ${, which means that all parts except the first will begin with a parameter name which will be terminated by }. The split function will consume the literal ${ in each case.
1002 44b <expand_chunk_args()[1](
\v), lang=> ≡
1003 ________________________________________________________________________
1004 1 | function expand_chunk_args(text, params, args,
1005 2 | p, text_array, next_text, v, t, l)
1007 4 | if (split(text, text_array, "\\${")) {
1008 5 | «substitute-chunk-args 45a»
1013 |________________________________________________________________________
1016 First, we produce an associative array of substitution values indexed by parameter names. This will serve as a cache, allowing us to look up the replacement values as we extract each name.
1018 45a <substitute-chunk-args[1](
\v), lang=> ≡ 45b▿
1019 ________________________________________________________________________
1020 1 | for(p in params) {
1021 2 | v[params[p]]=args[p];
1023 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1024 We accumulate substituted text in the variable text. As the first part returned by the split function is the part before the delimiter --- which is ${ in our case --- this part will never contain a parameter reference, so we assign it directly to the result kept in the variable text.
1026 45b <substitute-chunk-args[2](
\v) ⇑45a, lang=> +≡ ▵45a 45c▿
1027 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1028 4 | text=text_array[1];
1029 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1030 We then iterate over the remaining values in the array, and substitute each reference for its argument.
1032 45c <substitute-chunk-args[3](
\v) ⇑45a, lang=> +≡ ▵45b
1033 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1034 5 | for(t=2; t in text_array; t++) {
1035 6 | «substitute-chunk-arg 45d»
1037 |________________________________________________________________________
1040 After the split on ${ a valid parameter reference will consist of a valid parameter name terminated by a close-brace }. A valid parameter name begins with an underscore or a letter, and may contain letters, digits or underscores.
1041 A valid-looking reference that is not actually the name of a parameter will be left alone and not substituted. This is good because there is nothing to substitute anyway, and it avoids clashes when writing code for languages where ${...} is a valid construct --- such constructs will not be interfered with unless the parameter name also matches.
1043 45d <substitute-chunk-arg[1](
\v), lang=> ≡
1044 ________________________________________________________________________
1045 1 | if (match(text_array[t], "^([a-zA-Z_][a-zA-Z0-9_]*)}", l) &&
1048 4 | text = text v[l[1]] substr(text_array[t], length(l[1])+2);
1050 6 | text = text "${" text_array[t];
1052 |________________________________________________________________________
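As an informal illustration of the intended behaviour (this is not a chunk of fangle; it merely assumes the functions above have been loaded into a gawk program), a known parameter is substituted while an unknown-looking reference is left alone:
BEGIN {
  params[1] = "name"; args[1] = "World";
  print expand_chunk_args("hello ${name}, bye ${other}", params, args);
  # expected: hello World, bye ${other}
}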
1055 Chapter 11 Language Modes & Quoting
1056 lstlistings and fangle both recognize source languages, and perform some basic parsing and syntax highlighting in the rendered document1. although lstlisting supports many more languages ^1. lstlistings can detect strings and comments within a language definition and perform suitable rendering, such as italics for comments, and visible-spaces within strings.
1057 Fangle can similarly recognize strings and comments, etc., within a language, so that any chunks included with \chunkref{a-chunk} or <a-chunk ?> can be suitably escaped or quoted.
1058 11.1 Modes explanation
1059 As an example, the C language has a few parse modes, which affect the interpretation of characters.
1060 One parse mode is the string mode. The string mode is commenced by an un-escaped quotation mark " and terminated by the same. Within the string mode, only one additional mode can be commenced: the backslash mode \, which is always terminated by the following character.
1061 Another mode is [ which is terminated by a ] (unless it occurs in a string).
1062 Consider this fragment of C code:
1063 do_something( things[x, y], get_other_things(a, "(all)") )
Here the modes nest as follows: 1. the outer ( mode opened by do_something(; 2. the [ mode around x, y; 3. the ( mode around the arguments of get_other_things; and 4. the " mode around (all), inside part 3.
1065 Mode nesting prevents the close parenthesis in the quoted string (part 4) from terminating the parenthesis mode (part 3).
1066 Each language has a set of modes, the default mode being the null mode. Each mode can lead to other modes.
1067 11.2 Modes affect included chunks
1068 For instance, consider this chunk with language=perl:
1070 47a <test:example-perl[1](
\v), lang=perl> ≡
1071 ________________________________________________________________________
1072 1 | print "hello world $0\n";
1073 |________________________________________________________________________
1076 If it were included in a chunk with language=sh, like this:
1078 47b <test:example-sh[1](
\v), lang=sh> ≡
1079 ________________________________________________________________________
1080 1 | perl -e "«test:example-perl 47a»"
1081 |________________________________________________________________________
1084 we might want fangle to generate output like this:
1086 48a <test:example-sh.result[1](
\v), lang=sh> ≡
1087 ________________________________________________________________________
1088 1 | perl -e "print \"hello world \$0\\n\";"
1089 |________________________________________________________________________
1092 See that the double quote ", back-slash \ and $ have been quoted with a back-slash to protect them from shell interpretation.
1093 If that were then included in a chunk with language=make, like this:
1095 48b <test:example-makefile[1](
\v), lang=make> ≡
1096 ________________________________________________________________________
1098 2 | ↦«test:example-sh 47b»
1099 |________________________________________________________________________
1102 We would need the output to look like this --- note the $$ as the single $ has been makefile-quoted with another $.
1104 48c <test:example-makefile.result[1](
\v), lang=make> ≡
1105 ________________________________________________________________________
1107 2 | ↦perl -e "print \"hello world \$$0\\n\";"
1108 |________________________________________________________________________
1111 11.3 Language Mode Definitions
1112 In order to make this work, we must define a mode tracker for each supported language that can detect the various quoting modes and provide a transformation to be applied to any included text, so that the included text will be interpreted correctly after any interpolation it may be subject to at run-time.
1113 For example, the sed transformation for text to be inserted into shell double-quoted strings would be something like:
1114 s/\\/\\\\/g;s/\$/\\$/g;s/"/\\"/g;
1115 which would protect \ $ "
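For instance (an illustrative shell command, not part of fangle), piping a sample line through that transformation:
printf '%s\n' 'echo "$HOME" \here' | sed -e 's/\\/\\\\/g;s/\$/\\$/g;s/"/\\"/g'
# prints: echo \"\$HOME\" \\here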
1116 All modes definitions are stored in a single multi-dimensional hash called modes:
1117 modes[language, mode, properties]
1118 The first index is the language, and the second index is the mode. The third indexes hold properties such as terminators, possible submodes, transformations, and so forth.
1120 48d <xmode:set-terminators[1](language
\v, mode
\v, terminators
\v\v), lang=> ≡
1121 ________________________________________________________________________
1122 1 | modes["${language}", "${mode}", "terminators"]="${terminators}";
1123 |________________________________________________________________________
1127 48e <xmode:set-submodes[1](language
\v, mode
\v, submodes
\v\v), lang=> ≡
1128 ________________________________________________________________________
1129 1 | modes["${language}", "${mode}", "submodes"]="${submodes}";
1130 |________________________________________________________________________
1133 A useful set of mode definitions for a nameless general C-type language is shown here.
1134 Don't be confused by the double backslash escaping needed in awk. One set of escaping is for the string, and the second set of escaping is for the regex.
1135 To do: TODO: Add =<\mode{}> command which will allow us to signify that a string is
1136 regex and thus fangle will quote it for us.
1138 Sub-modes are identified by a backslash, a double or single quote, various bracket styles or a /* comment; specifically: \ " ' { ( [ /*
1139 For each of these sub-modes modes we must also identify at a mode terminator, and any sub-modes or delimiters that may be entered2. Because we are using the sub-mode characters as the mode identifier it means we can't currently have a mode character dependant on it's context; i.e. { can't behave differently when it is inside [. ^2.
1141 49a <common-mode-definitions[1](language
\v\v), lang=> ≡ 49b▿
1142 ________________________________________________________________________
1143 1 | modes[${language}, "", "submodes"]="\\\\|\"|'|{|\\(|\\[";
1144 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1145 In the default mode, a comma surrounded by un-important white space is a delimiter of language items3. whatever a language item might be ^3. Delimiters are used so that fangle can parse and recognise arguments individually.
1147 49b <common-mode-definitions[2](language
\v\v) ⇑49a, lang=> +≡ ▵49a 49d▿
1148 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1149 2 | modes[${language}, "", "delimiters"]=" *, *";
1150 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1151 and should pass this test:
1152 To do: Why do the tests run in "(" mode and not "" mode
1155 49c <test:mode-definitions[1](
\v), lang=> ≡ 50g⊳
1156 ________________________________________________________________________
1157 1 | parse_chunk_args("c-like", "1,2,3", a, "");
1158 2 | if (a[1] != "1") e++;
1159 3 | if (a[2] != "2") e++;
1160 4 | if (a[3] != "3") e++;
1161 5 | if (length(a) != 3) e++;
1162 6 | «pca-test.awk:summary 62d»
1164 8 | parse_chunk_args("c-like", "joe, red", a, "");
1165 9 | if (a[1] != "joe") e++;
1166 10 | if (a[2] != "red") e++;
1167 11 | if (length(a) != 2) e++;
1168 12 | «pca-test.awk:summary 62d»
1170 14 | parse_chunk_args("c-like", "${colour}", a, "");
1171 15 | if (a[1] != "${colour}") e++;
1172 16 | if (length(a) != 1) e++;
1173 17 | «pca-test.awk:summary 62d»
1174 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1176 The backslash mode has no submodes or delimiters, and is terminated by any character. Note that we are not so much interested in evaluating or interpolating content as we are in delineating content. It does not matter that a double backslash (\\) may represent a single backslash while a backslash-newline may represent white space, but it does matter that the newline in a backslash-newline should not be able to terminate a C pre-processor statement; and so the newline will be consumed by the backslash terminator however it may ultimately be interpreted.
1178 49d <common-mode-definitions[3](language
\v\v) ⇑49a, lang=> +≡ ▵49b 50f⊳
1179 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1180 3 | modes[${language}, "\\", "terminators"]=".";
1181 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1183 Common languages support two kinds of string quoting: double quotes and single quotes.
1184 In a string we have one special mode, which is the backslash. This may escape an embedded quote and prevent us from thinking that it should terminate the string.
1186 50a <mode:common-string[1](language
\v, quote
\v\v), lang=> ≡ 50b▿
1187 ________________________________________________________________________
1188 1 | modes[${language}, ${quote}, "submodes"]="\\\\";
1189 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1190 Otherwise, the string will be terminated by the same character that commenced it.
1192 50b <mode:common-string[2](language
\v, quote
\v\v) ⇑50a, lang=> +≡ ▵50a 50c▿
1193 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1194 2 | modes[${language}, ${quote}, "terminators"]=${quote};
1195 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1196 In C-type languages, certain escape sequences exist in strings. We need to define a mechanism to encode any chunks included in this mode using those escape sequences. These are expressed in two parts: s meaning search, and r meaning replace.
1197 The first substitution is to replace a backslash with a double backslash. We do this first as other substitutions may introduce a backslash which we would not then want to escape again here.
1198 Note: Backslashes need double-escaping in the search pattern but not in the replacement string, hence we are replacing a literal \ with a literal \\.
1200 50c <mode:common-string[3](language
\v, quote
\v\v) ⇑50a, lang=> +≡ ▵50b 50d▿
1201 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1202 3 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\\\\";
1203 4 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\\\";
1204 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1205 If the quote character occurs in the text, it should be preceded by a backslash, otherwise it would terminate the string unexpectedly.
1207 50d <mode:common-string[4](language
\v, quote
\v\v) ⇑50a, lang=> +≡ ▵50c 50e▿
1208 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1209 5 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]=${quote};
1210 6 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\" ${quote};
1211 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1212 Any newlines in the string must be replaced by \n.
1214 50e <mode:common-string[5](language
\v, quote
\v\v) ⇑50a, lang=> +≡ ▵50d
1215 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1216 7 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\n";
1217 8 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\n";
1218 |________________________________________________________________________
1221 For the common modes, we define this string handling for double and single quotes.
1223 50f <common-mode-definitions[4](language
\v\v) ⇑49a, lang=> +≡ ⊲49d 51b⊳
1224 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1225 4 | «mode:common-string
\v(${language}
\v, "\""
\v) 50a»
1226 5 | «mode:common-string
\v(${language}
\v, "'"
\v) 50a»
1227 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1228 Working strings should pass this test:
1230 50g <test:mode-definitions[2](
\v) ⇑49c, lang=> +≡ ⊲49c 57c⊳
1231 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1232 18 | parse_chunk_args("c-like", "say \"I said, \\\"Hello, how are you\\\".\", for me", a, "");
1233 19 | if (a[1] != "say \"I said, \\\"Hello, how are you\\\".\"") e++;
1234 20 | if (a[2] != "for me") e++;
1235 21 | if (length(a) != 2) e++;
1236 22 | «pca-test.awk:summary 62d»
1237 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1238 11.3.3 Parentheses, Braces and Brackets
1239 Whereas quotes are closed by the same character that opened them, parentheses, brackets and braces are closed by a different character.
1241 51a <mode:common-brackets[1](language
\v, open
\v, close
\v\v), lang=> ≡
1242 ________________________________________________________________________
1243 1 | modes[${language}, ${open}, "submodes" ]="\\\\|\"|{|\\(|\\[|'|/\\*";
1244 2 | modes[${language}, ${open}, "delimiters"]=" *, *";
1245 3 | modes[${language}, ${open}, "terminators"]=${close};
1246 |________________________________________________________________________
1249 Note that the open is NOT a regex but the close token IS.
1250 To do: When we can quote regex we won't have to put the slashes in here
1253 51b <common-mode-definitions[5](language
\v\v) ⇑49a, lang=> +≡ ⊲50f
1254 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1255 6 | «mode:common-brackets
\v(${language}
\v, "{"
\v, "}"
\v) 51a»
1256 7 | «mode:common-brackets
\v(${language}
\v, "["
\v, "\\]"
\v) 51a»
1257 8 | «mode:common-brackets
\v(${language}
\v, "("
\v, "\\)"
\v) 51a»
1258 |________________________________________________________________________
1261 11.3.4 Customizing Standard Modes
1263 51c <mode:add-submode[1](language
\v, mode
\v, submode
\v\v), lang=> ≡
1264 ________________________________________________________________________
1265 1 | modes[${language}, ${mode}, "submodes"] = modes[${language}, ${mode}, "submodes"] "|" ${submode};
1266 |________________________________________________________________________
1270 51d <mode:add-escapes[1](language
\v, mode
\v, search
\v, replace
\v\v), lang=> ≡
1271 ________________________________________________________________________
1272 1 | escapes[${language}, ${mode}, ++escapes[${language}, ${mode}], "s"]=${search};
1273 2 | escapes[${language}, ${mode}, escapes[${language}, ${mode}], "r"]=${replace};
1274 |________________________________________________________________________
1279 We can define /* comment */ style comments and //comment style comments to be added to any language:
1281 51e <mode:multi-line-comments[1](language
\v\v), lang=> ≡
1282 ________________________________________________________________________
1283 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "/\\*"
\v) 51c»
1284 2 | modes[${language}, "/*", "terminators"]="\\*/";
1285 |________________________________________________________________________
1289 51f <mode:single-line-slash-comments[1](language
\v\v), lang=> ≡
1290 ________________________________________________________________________
1291 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "//"
\v) 51c»
1292 2 | modes[${language}, "//", "terminators"]="\n";
1293 3 | «mode:add-escapes
\v(${language}
\v, "//"
\v, "\n"
\v, "\n//"
\v) 51d»
1294 |________________________________________________________________________
1297 We can also define # comment style comments (as used in awk and shell scripts) in a similar manner.
1298 To do: I'm having to use # for hash and \textbackslash{} for backslash, and have hacky work-arounds in the parser for now
1301 51g <mode:add-hash-comments[1](language
\v\v), lang=> ≡
1302 ________________________________________________________________________
1303 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "#"
\v) 51c»
1304 2 | modes[${language}, "#", "terminators"]="\n";
1305 3 | «mode:add-escapes
\v(${language}
\v, "#"
\v, "\n"
\v, "\n#"
\v) 51d»
1306 |________________________________________________________________________
1309 In C, the # denotes pre-processor directives, which can be multi-line.
1311 51h <mode:add-hash-defines[1](language
\v\v), lang=> ≡
1312 ________________________________________________________________________
1313 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "#"
\v) 51c»
1314 2 | modes[${language}, "#", "submodes" ]="\\\\";
1315 3 | modes[${language}, "#", "terminators"]="\n";
1316 4 | «mode:add-escapes
\v(${language}
\v, "#"
\v, "\n"
\v, "\\\\\n"
\v) 51d»
1317 |________________________________________________________________________
1321 52a <mode:quote-dollar-escape[1](language
\v, quote
\v\v), lang=> ≡
1322 ________________________________________________________________________
1323 1 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\\$";
1324 2 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\$";
1325 |________________________________________________________________________
1328 We can add these definitions to various languages:
1330 52b <mode-definitions[1](
\v), lang=> ≡ 53a⊳
1331 ________________________________________________________________________
1332 1 | «common-mode-definitions
\v("c-like"
\v) 49a»
1334 3 | «common-mode-definitions
\v("c"
\v) 49a»
1335 4 | «mode:multi-line-comments
\v("c"
\v) 51e»
1336 5 | «mode:single-line-slash-comments
\v("c"
\v) 51f»
1337 6 | «mode:add-hash-defines
\v("c"
\v) 51h»
1339 8 | «common-mode-definitions
\v("awk"
\v) 49a»
1340 9 | «mode:add-hash-comments
\v("awk"
\v) 51g»
1341 10 | «mode:add-naked-regex
\v("awk"
\v) 52g»
1342 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1343 The awk definitions should allow a comment block like this:
1345 52c <test:comment-quote[1](
\v), lang=awk> ≡
1346 ________________________________________________________________________
1347 1 | # Comment: «test:comment-text 52d»
1348 |________________________________________________________________________
1352 52d <test:comment-text[1](
\v), lang=> ≡
1353 ________________________________________________________________________
1354 1 | Now is the time for
1355 2 | the quick brown fox to bring lemonade
1357 |________________________________________________________________________
1360 to come out like this:
1362 52e <test:comment-quote:result[1](
\v), lang=> ≡
1363 ________________________________________________________________________
1364 1 | # Comment: Now is the time for
1365 2 | #the quick brown fox to bring lemonade
1367 |________________________________________________________________________
1370 The C definition for such a block should have it come out like this:
1372 52f <test:comment-quote:C-result[1](
\v), lang=> ≡
1373 ________________________________________________________________________
1374 1 | # Comment: Now is the time for\
1375 2 | the quick brown fox to bring lemonade\
1377 |________________________________________________________________________
1381 This pattern is incomplete; it is meant to detect naked regular expressions in awk and perl, e.g. /.*$/, but the required capabilities are not present.
1382 Currently it only detects regexes anchored with ^, as used in fangle.
1383 For full regex support, modes need to be named not after their starting character, but some other more fully qualified name.
1385 52g <mode:add-naked-regex[1](language
\v\v), lang=> ≡
1386 ________________________________________________________________________
1387 1 | «mode:add-submode
\v(${language}
\v, ""
\v, "/\\^"
\v) 51c»
1388 2 | modes[${language}, "/^", "terminators"]="/";
1389 |________________________________________________________________________
1394 53a <mode-definitions[2](
\v) ⇑52b, lang=> +≡ ⊲52b 53b▿
1395 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1396 11 | «common-mode-definitions
\v("perl"
\v) 49a»
1397 12 | «mode:multi-line-comments
\v("perl"
\v) 51e»
1398 13 | «mode:add-hash-comments
\v("perl"
\v) 51g»
1399 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1400 Still need to add s/, submode /, and terminate both with //. This is likely to be impossible, as perl regexes can contain perl.
1402 Shell single-quote strings are different to other strings and have no escape characters. The only special character is the single quote ' which always closes the string. Therefore we cannot use <common-mode-definitions
\v("sh"
\v) 49a> but we will invoke most of its definition apart from single-quote strings.
1404 53b <mode-definitions[3](
\v) ⇑52b, lang=awk> +≡ ▵53a 54a⊳
1405 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1406 14 | modes["sh", "", "submodes"]="\\\\|\"|'|{|\\(|\\[|\\$\\(";
1407 15 | modes["sh", "\\", "terminators"]=".";
1409 17 | modes["sh", "\"", "submodes"]="\\\\|\\$\\(";
1410 18 | modes["sh", "\"", "terminators"]="\"";
1411 19 | escapes["sh", "\"", ++escapes["sh", "\""], "s"]="\\\\";
1412 20 | escapes["sh", "\"", escapes["sh", "\""], "r"]="\\\\";
1413 21 | escapes["sh", "\"", ++escapes["sh", "\""], "s"]="\"";
1414 22 | escapes["sh", "\"", escapes["sh", "\""], "r"]="\\" "\"";
1415 23 | escapes["sh", "\"", ++escapes["sh", "\""], "s"]="\n";
1416 24 | escapes["sh", "\"", escapes["sh", "\""], "r"]="\\n";
1418 26 | modes["sh", "'", "terminators"]="'";
1419 27 | escapes["sh", "'", ++escapes["sh", "'"], "s"]="'";
1420 28 | escapes["sh", "'", escapes["sh", "'"], "r"]="'\\'" "'";
1421 29 | «mode:common-brackets
\v("sh"
\v, "$("
\v, "\\)"
\v) 51a»
1422 30 | «mode:add-tunnel
\v("sh"
\v, "$("
\v, ""
\v) 53c»
1423 31 | «mode:common-brackets
\v("sh"
\v, "{"
\v, "}"
\v) 51a»
1424 32 | «mode:common-brackets
\v("sh"
\v, "["
\v, "\\]"
\v) 51a»
1425 33 | «mode:common-brackets
\v("sh"
\v, "("
\v, "\\)"
\v) 51a»
1426 34 | «mode:add-hash-comments
\v("sh"
\v) 51g»
1427 35 | «mode:quote-dollar-escape
\v("sh"
\v, "\""
\v) 52a»
1428 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1429 The definition of add-tunnel is:
1431 53c <mode:add-tunnel[1](language
\v, mode
\v, tunnel
\v\v), lang=> ≡
1432 ________________________________________________________________________
1433 1 | escapes[${language}, ${mode}, ++escapes[${language}, ${mode}], "tunnel"]=${tunnel};
1434 |________________________________________________________________________
1438 BUGS: makefile tab mode is terminated by newline, but chunks never end in a newline! So tab mode is never closed unless there is a trailing blank line!
1439 For makefiles, we currently recognize 2 modes: the null mode and the ↦ mode, which is tabbed mode and contains the makefile recipe.
1442 54a <mode-definitions[4](
\v) ⇑52b, lang=awk> +≡ ⊲53b 54b▿
1443 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1444 36 | modes["make", "", "submodes"]="↦";
1445 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1446 In the null mode the only escapes are $, which must be converted to $$, and hash-style comments. POSIX requires that line-continuations extend hash-style comments, so the fangle-style transformation that replicates the hash at the start of each line is not strictly required; however it is harmless, easier to read, and required by some implementations of make which do not implement the POSIX requirements correctly.
1448 54b <mode-definitions[5](
\v) ⇑52b, lang=awk> +≡ ▵54a 56a⊳
1449 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1450 37 | escapes["make", "", ++escapes["make", ""], "s"]="\\$";
1451 38 | escapes["make", "", escapes["make", ""], "r"]="$$";
1452 39 | «mode:add-hash-comments
\v("make"
\v) 51g»
1453 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1454 Tabbed mode is harder to manage, as the GNU Make Manual says in the section on splitting lines4. http://www.gnu.org/s/hello/manual/make/Splitting-Lines.html ^4. There is no obvious way to escape multi-line text that occurs as part of a makefile recipe.
1455 Traditionally, if the newlines in the shell script all occur at points of top-level shell syntax, then we could replace them with ;\n↦ and largely get the right effect.
1457 54c <test:make:1[1](
\v), lang=make> ≡
1458 ________________________________________________________________________
1461 3 | ↦«test:make:1-inc
\v($@) 54d»
1462 |________________________________________________________________________
1468 54d <test:make:1-inc[1](target
\v\v), lang=sh> ≡
1469 ________________________________________________________________________
1470 1 | if test "${target}" = "all"
1471 2 | then echo yes, all
1472 3 | else echo "${target}" | sed -e '/^\//{
1476 |________________________________________________________________________
1479 The two chunks above could reasonably produce something like this:
1481 54e <test:make:1.result.bad[1](
\v), lang=make> ≡
1482 ________________________________________________________________________
1485 3 | ↦if test "$@" = "all" ;\
1486 4 | ↦then echo yes, all ;\
1487 5 | ↦else echo "$@" | sed -e '/^\//{ ;\
1491 |________________________________________________________________________
1494 However ;\ is not a proper continuation inside a multi-line sed script. There is no simple continuation that fangle could use — and in any case it would depend on what type of quote marks were used in the bash that contained the sed.
1495 We would prefer to use a more intuitive single backslash at the end of the line, giving these results.
1497 54f <test:make:1.result[1](
\v), lang=make> ≡
1498 ________________________________________________________________________
1501 3 | ↦if test "$$@" = "all"\
1502 4 | ↦ then echo yes, all\
1503 5 | ↦ else echo "$$@" | sed -e '/^\//{\
1507 |________________________________________________________________________
1510 The difficulty lies in the way that make handles the recipe. Each line of the recipe is invoked as a separate shell command (using $(SHELL) -c) unless the last character of the line was a backslash. In that case, the backslash, the newline and the next line are handed to the shell together (although the tab character that prefixes the next line is stripped).
1511 This behaviour makes it impossible to hand a newline character to the shell unless it is prefixed by a backslash. If an included shell fragment contained strings with literal newline characters then there would be no easy way to escape these and preserve the value of the string.
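This behaviour can be seen with a toy Makefile fragment like the following (an illustration only, not part of fangle; ↦ denotes a tab character as elsewhere in this document):
demo:
↦X=1
↦echo $$X            # prints an empty line: this line runs in a fresh shell
↦X=1; \
↦echo $$X            # prints 1: the backslash hands both lines to one shell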
1512 A different style of makefile construction might be used — the recipe could be stored in a target specific variable5. http://www.gnu.org/s/hello/manual/make/Target_002dspecific.html ^5 which contains the recipe with a more normal escape mechanism.
1513 A better solution is to use a shell helper that strips the back-slash which precedes the newline character and then passes the arguments to the normal shell.
1514 Because this is a simple operation and because bash is so flexible, this can be managed in a single line within the makefile itself.
1515 As a newline will only exist when preceded by the backslash, and as the purpose of the backslash is to protect the newline, all that is needed is to remove any backslash that is followed by a newline.
1516 Bash is capable of doing this with its pattern substitution. If A=123:=456:=789 then ${A//:=/=} will be 123=456=789. We don't want to perform the substitution on just a single variable but in fact on all of "$@"; however, bash will repeat the substitution over all members of an array, so this is done automatically.
1517 In bash, $'\012' represents the newline character (expressed as an octal escape sequence), so this expression will replace backslash-newline with a single newline.
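A small bash sketch (an illustration only, not part of fangle) of the same substitution applied to ordinary variables:
A=123:=456:=789
echo "${A//:=/=}"                  # prints 123=456=789
nl=$'\012'                         # a literal newline character
cmd="echo one \\${nl}two"          # the value contains backslash-newline
printf '%s\n' "${cmd//\\$nl/$nl}"  # the backslash-newline becomes a plain newline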
1519 55a <fix-requote-newline[1](
\v), lang=sh> ≡
1520 ________________________________________________________________________
1521 1 | "${@//\\$'\012'/$'\012'}"
1522 |________________________________________________________________________
1525 We use this as part of a larger statement which will invoke such a transformed command line using any particular shell. The trailing -- prevents any options in the command line from being interpreted as options to our bash command — instead they will be transformed and passed to the inner shell, which is invoked with exec so that our fixup-shell does not hang around longer than is needed.
1527 55b <fix-make-shell[1](shell
\v\v), lang=sh> ≡
1528 ________________________________________________________________________
1529 1 | bash -c 'exec ${shell} «fix-requote-newline 55a»' --
1530 |________________________________________________________________________
1533 We can then include a line like this in our makefiles. We would rather pass $(SHELL) as the chunk argument than bash, but currently fangle will not track which nested-inclusion level the argument comes from and will quote the $ in $(SHELL) in the same way it quotes a $ that may occur in the bash script, so this would come out as $$(SHELL) and have the wrong effect.
1535 55c <make-fix-make-shell[1](
\v), lang=> ≡
1536 ________________________________________________________________________
1537 1 | SHELL:=«fix-make-shell
\v(bash
\v) 55b»
1538 |________________________________________________________________________
1541 The full escaped and quoted text with $(SHELL), suitable for inclusion in a Makefile, is:
1542 SHELL:=bash -c 'exec $(SHELL) "$${@//\\$$'\''\012'\''/$$'\''\012'\''}"' --
1543 Based on this, we just need to escape newlines (in tabbed mode) with a regular backslash:
1544 Note that terminators apply to literal text, not included text, while escapes apply to included text, not literal text; also that the tab character is hard-wired into the pattern, and the make variable .RECIPEPREFIX might change this to something else.
1546 56a <mode-definitions[6](
\v) ⇑52b, lang=awk> +≡ ⊲54b
1547 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1548 40 | modes["make", "↦", "terminators"]="\\n";
1549 41 | escapes["make", "↦", ++escapes["make", "↦"], "s"]="\\n";
1550 42 | escapes["make", "↦", escapes["make", "↦"], "r"]="\\\n↦";
1551 |________________________________________________________________________
1554 With this improved quoting, the test on 54c will actually produce this:
1556 56b <test:make:1.result-actual[1](
\v), lang=make> ≡
1557 ________________________________________________________________________
1560 3 | ↦if test "$$@" = "all"\
1561 4 | ↦ then echo yes, all\
1562 5 | ↦ else echo not all\
1564 |________________________________________________________________________
1567 The chunk argument $@ has been quoted (which would have been fine if we were passing the name of a shell variable), and the other shell lines are (harmlessly) indented by 1 space as part of fangle's indent-matching. The indent-matching should have taken into account the expanded tab size, and should generally take into account the expanded prefix of the line whose indent it is trying to match --- but in this case we want it to have no effect at all!
1568 To do: The $@ was passed from a make fragment. In what cases should it be converted to $$@?
1569 Do we need to track the language of sources of arguments?
1571 An uglier work-around, until this problem can be solved, would be to use this notation:
1573 56c <test:make:2[1](
\v), lang=make> ≡
1574 ________________________________________________________________________
1577 3 | ↦ARG="$@"; «test:make:1-inc
\v($ARG) 54d»
1578 |________________________________________________________________________
1581 which produces this output which is more useful (because it works):
1583 56d <test:make:2.result[1](
\v), lang=make> ≡
1584 ________________________________________________________________________
1587 3 | ↦ARG="$@"; if test "$$ARG" = "all"\
1588 4 | ↦ then echo yes, all\
1589 5 | ↦ else echo "$$ARG" | sed -e '/^\//{\
1593 |________________________________________________________________________
1596 11.4 Quoting scenarios
1597 11.4.1 Direct quoting
1598 Here we give examples of various quoting scenarios and discuss what the expected outcome might be and how this could be obtained.
1600 56e <test:q:1[1](
\v), lang=sh> ≡
1601 ________________________________________________________________________
1602 1 | echo "$(«test:q:1-inc 57a»)"
1603 |________________________________________________________________________
1607 57a <test:q:1-inc[1](
\v), lang=sh> ≡
1608 ________________________________________________________________________
1610 |________________________________________________________________________
1613 Should this example produce echo "$(echo "hello")" or echo "$(echo \"hello\")" ?
1614 This depends on what the author intended, but we must provide a way to express that intent.
1615 We might argue that as both chunks have lang=sh the intent must have been to quote the included chunk — but consider that this might be shell script that writes shell script.
1616 If <test:q:1-inc 57a> had lang=text then it certainly would have been right to quote it, which leads us to ask: in what ways can we reduce quoting if lang of the included chunk is compatible with the lang of the including chunk?
1617 If we take a completely nested approach then even though $( mode might do no quoting of its own, " mode will still do its own quoting. We need a model where the nested $( mode will prevent " from quoting.
1618 This gives rise to the tunneling feature. In bash, the $( gives rise to a new top-level parsing scenario, so we need to enter the null mode, and also ignore any quoting, and then undo this when the $( mode is terminated by the corresponding close ).
1619 We shall say that tunneling is when a mode in a language ignores other modes in the same language and arrives back at an earlier null mode of the same language.
1620 In example <test:q:1 56e> above, the nesting of modes is: null, ", $(
1621 When mode $( is commenced, the stack of nested modes will be traversed. If the null mode can be found in the same language, without the language varying, then a tunnel will be established so that the intervening modes, " in this case, can be skipped when the modes are enumerated to quote the text being emitted.
1622 In such a case, the correct result would be:
1624 57b <test:q:1.result[1](
\v), lang=sh> ≡
1625 ________________________________________________________________________
1626 1 | echo "$(echo "hello")"
1627 |________________________________________________________________________
1631 Also, the parser must return any spare text at the end that has not been processed due to a mode terminator being found.
1633 57c <test:mode-definitions[3](
\v) ⇑49c, lang=> +≡ ⊲50g 57d▿
1634 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1635 23 | rest = parse_chunk_args("c-like", "1, 2, 3) spare", a, "(");
1636 24 | if (a[1] != 1) e++;
1637 25 | if (a[2] != 2) e++;
1638 26 | if (a[3] != 3) e++;
1639 27 | if (length(a) != 3) e++;
1640 28 | if (rest != " spare") e++;
1641 29 | «pca-test.awk:summary 62d»
1642 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1643 We must also be able to parse the example given earlier.
1645 57d <test:mode-definitions[4](
\v) ⇑49c, lang=> +≡ ▵57c
1646 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1647 30 | parse_chunk_args("c-like", "things[x, y], get_other_things(a, \"(all)\"), 99", a, "(");
1648 31 | if (a[1] != "things[x, y]") e++;
1649 32 | if (a[2] != "get_other_things(a, \"(all)\")") e++;
1650 33 | if (a[3] != "99") e++;
1651 34 | if (length(a) != 3) e++;
1652 35 | «pca-test.awk:summary 62d»
1653 |________________________________________________________________________
1656 11.6 A non-recursive mode tracker
1657 As each chunk is output, a new mode tracker for that language is initialized in its normal state. As text is output for that chunk the output mode is tracked. When a new chunk is included, a transformation appropriate to that mode is selected and pushed onto a stack of transformations. Any text to be output is passed through this stack of transformations.
1658 It remains to consider whether the chunk-include function should return its generated text so that the caller can apply any transformations (and formatting), or whether it should apply the stack of transformations itself.
1659 Note that the transformed included text should have the property of not being able to change the mode in the current chunk.
1660 To do: Note chunk parameters should probably also be transformed
1663 The mode tracker holds its state in a stack based on a numerically indexed hash. This function, when passed an empty hash, will initialize it.
1665 58a <new_mode_tracker()[1](
\v), lang=> ≡
1666 ________________________________________________________________________
1667 1 | function new_mode_tracker(context, language, mode) {
1668 2 | context[""] = 0;
1669 3 | context[0, "language"] = language;
1670 4 | context[0, "mode"] = mode;
1672 |________________________________________________________________________
1675 Awk functions cannot return an array, but arrays are passed by reference. Because of this we must create the array first and pass it in, so we have a fangle macro to do this:
1677 58b <new-mode-tracker[1](context
\v, language
\v, mode
\v\v), lang=awk> ≡
1678 ________________________________________________________________________
1679 1 | «awk-delete-array
\v(${context}
\v) 37d»
1680 2 | new_mode_tracker(${context}, ${language}, ${mode});
1681 |________________________________________________________________________
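As an aside (an illustration only, not part of fangle), the pass-by-reference behaviour of awk arrays that this relies on can be seen with a one-liner, which prints 1 because fill() modifies the caller's array:
awk 'function fill(arr) { arr["x"] = 1 } BEGIN { fill(a); print a["x"] }'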
1685 And for tracking modes, we dispatch to a mode-tracker action based on the current language
1687 58c <mode_tracker[1](
\v), lang=awk> ≡ 59a⊳
1688 ________________________________________________________________________
1689 1 | function push_mode_tracker(context, language, mode,
1693 5 | if (! ("" in context)) {
1694 6 | «new-mode-tracker
\v(context
\v, language
\v, mode
\v) 58b»
1697 9 | top = context[""];
1698 10 | # if (context[top, "language"] == language && mode=="") mode = context[top, "mode"];
1699 11 | if (context[top, "language"] == language && context[top, "mode"] == mode) return top - 1;
1702 14 | context[top, "language"] = language;
1703 15 | context[top, "mode"] = mode;
1704 16 | context[""] = top;
1706 18 | return old_top;
1708 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1710 59a <mode_tracker[2](
\v) ⇑58c, lang=> +≡ ⊲58c 59b▿
1711 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1712 20 | function dump_mode_tracker(context,
1715 23 | for(c=0; c <= context[""]; c++) {
1716 24 | printf(" %2d %s:%s\n", c, context[c, "language"], context[c, "mode"]) > "/dev/stderr";
1717 25 | # for(d=1; ( (c, "values", d) in context); d++) {
1718 26 | # printf(" %2d %s\n", d, context[c, "values", d]) > "/dev/stderr";
1722 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1724 59b <mode_tracker[3](
\v) ⇑58c, lang=> +≡ ▵59a 63b⊳
1725 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1726 30 | function pop_mode_tracker(context, context_origin)
1728 32 | if ( (context_origin) && ("" in context) && context[""] != (1+context_origin) && context[""] != context_origin) {
1729 33 | print "Context level: " context[""] ", origin: " context_origin "\n" > "/dev/stderr"
1732 36 | context[""] = context_origin;
1735 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1736 This implies that any chunk must be syntactically whole; for instance, this is fine:
1738 59c <test:whole-chunk[1](
\v), lang=> ≡
1739 ________________________________________________________________________
1741 2 | «test:say-hello 59d»
1743 |________________________________________________________________________
1747 59d <test:say-hello[1](
\v), lang=> ≡
1748 ________________________________________________________________________
1750 |________________________________________________________________________
1753 But this is not fine; the chunk <test:hidden-else 59f> is not properly cromulent.
1755 59e <test:partial-chunk[1](
\v), lang=> ≡
1756 ________________________________________________________________________
1758 2 | «test:hidden-else 59f»
1760 |________________________________________________________________________
1764 59f <test:hidden-else[1](
\v), lang=> ≡
1765 ________________________________________________________________________
1766 1 | print "I'm fine";
1768 3 | print "I'm not";
1769 |________________________________________________________________________
1772 These tests will check for correct behaviour:
1774 59g <test:cromulence[1](
\v), lang=> ≡
1775 ________________________________________________________________________
1776 1 | echo Cromulence test
1777 2 | passtest $FANGLE -Rtest:whole-chunk $TXT_SRC &>/dev/null || ( echo "Whole chunk failed" && exit 1 )
1778 3 | failtest $FANGLE -Rtest:partial-chunk $TXT_SRC &>/dev/null || ( echo "Partial chunk failed" && exit 1 )
1779 |________________________________________________________________________
1783 We must avoid recursion as a language construct because we intend to employ mode-tracking to track the language mode of emitted code, and the code is emitted from a function which is itself recursive; so instead we implement pseudo-recursion using our own stack based on a hash.
1785 60a <mode_tracker()[1](
\v), lang=awk> ≡ 60b▿
1786 ________________________________________________________________________
1787 1 | function mode_tracker(context, text, values,
1788 2 | # optional parameters
1790 4 | mode, submodes, language,
1791 5 | cindex, c, a, part, item, name, result, new_values, new_mode,
1792 6 | delimiters, terminators)
1794 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1795 We could be re-commencing with a valid context, so we need to set up the state according to the last context.
1797 60b <mode_tracker()[2](
\v) ⇑60a, lang=> +≡ ▵60a 60e▿
1798 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1799 8 | cindex = context[""] + 0;
1800 9 | mode = context[cindex, "mode"];
1801 10 | language = context[cindex, "language" ];
1802 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1803 First we construct a single large regex combining the possible sub-modes for the current mode along with the terminators for the current mode.
1805 60c <parse_chunk_args-reset-modes[1](
\v), lang=> ≡ 60d▿
1806 ________________________________________________________________________
1807 1 | submodes=modes[language, mode, "submodes"];
1809 3 | if ((language, mode, "delimiters") in modes) {
1810 4 | delimiters = modes[language, mode, "delimiters"];
1811 5 | if (length(submodes)>0) submodes = submodes "|";
1812 6 | submodes=submodes delimiters;
1813 7 | } else delimiters="";
1814 8 | if ((language, mode, "terminators") in modes) {
1815 9 | terminators = modes[language, mode, "terminators"];
1816 10 | if (length(submodes)>0) submodes = submodes "|";
1817 11 | submodes=submodes terminators;
1818 12 | } else terminators="";
1819 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1820 If we don't find anything to match on --- probably because the language is not supported --- then we return the entire text without matching anything.
1822 60d <parse_chunk_args-reset-modes[2](
\v) ⇑60c, lang=> +≡ ▵60c
1823 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1824 13 | if (! length(submodes)) return text;
1825 |________________________________________________________________________
1829 60e <mode_tracker()[3](
\v) ⇑60a, lang=> +≡ ▵60b 60f▿
1830 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1831 11 | «parse_chunk_args-reset-modes 60c»
1832 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1833 We then iterate the text (until there is none left) looking for sub-modes or terminators in the regex.
1835 60f <mode_tracker()[4](
\v) ⇑60a, lang=> +≡ ▵60e 61a⊳
1836 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1837 12 | while((cindex >= 0) && length(text)) {
1838 13 | if (match(text, "(" submodes ")", a)) {
1839 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1840 A bug that creeps in regularly during development is bad regexes of zero length which result in an infinite loop (as no text is consumed), so I catch that right away with this test.
1842 61a <mode_tracker()[5](
\v) ⇑60a, lang=> +≡ ⊲60f 61b▿
1843 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1844 14 | if (RLENGTH<1) {
1845 15 | error(sprintf("Internal error, matched zero length submode, should be impossible - likely regex computation error\n" \
1846 16 | "Language=%s\nmode=%s\nmatch=%s\n", language, mode, submodes));
1848 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1849 part is defined as the text up to the sub-mode or terminator, and this is appended to item --- which is the current text being gathered. If a mode has a delimiter, then item is reset each time a delimiter is found.
1850 For example, in the text "hello, there", he said. the items gathered at the top level are "hello, there" and he said. --- the comma inside the quoted string does not start a new item.
1852 61b <mode_tracker()[6](
\v) ⇑60a, lang=> +≡ ▵61a 61c▿
1853 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1854 18 | part = substr(text, 1, RSTART -1);
1855 19 | item = item part;
1856 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1857 We must now determine what was matched. If it was a terminator, then we must restore the previous mode.
1859 61c <mode_tracker()[7](
\v) ⇑60a, lang=> +≡ ▵61b 61d▿
1860 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1861 20 | if (match(a[1], "^" terminators "$")) {
1862 21 | #printf("%2d EXIT MODE [%s] by [%s] [%s]\n", cindex, mode, a[1], text) > "/dev/stderr"
1863 22 | context[cindex, "values", ++context[cindex, "values"]] = item;
1864 23 | delete context[cindex];
1865 24 | context[""] = --cindex;
1866 25 | if (cindex>=0) {
1867 26 | mode = context[cindex, "mode"];
1868 27 | language = context[cindex, "language"];
1869 28 | «parse_chunk_args-reset-modes 60c»
1871 30 | item = item a[1];
1872 31 | text = substr(text, 1 + length(part) + length(a[1]));
1874 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1875 If a delimiter was matched, then we must store the current item in the parsed values array, and reset the item.
1877 61d <mode_tracker()[8](
\v) ⇑60a, lang=> +≡ ▵61c 61e▿
1878 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1879 33 | else if (match(a[1], "^" delimiters "$")) {
1880 34 | if (cindex==0) {
1881 35 | context[cindex, "values", ++context[cindex, "values"]] = item;
1884 38 | item = item a[1];
1886 40 | text = substr(text, 1 + length(part) + length(a[1]));
1888 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1889 Otherwise, if a new submode is detected (all submodes have terminators), we must create a nested parse context until we find the terminator for this mode.
1891 61e <mode_tracker()[9](
\v) ⇑60a, lang=> +≡ ▵61d 62a⊳
1892 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1893 42 | else if ((language, a[1], "terminators") in modes) {
1894 43 | #check if new_mode is defined
1895 44 | item = item a[1];
1896 45 | #printf("%2d ENTER MODE [%s] in [%s]\n", cindex, a[1], text) > "/dev/stderr"
1897 46 | text = substr(text, 1 + length(part) + length(a[1]));
1898 47 | context[""] = ++cindex;
1899 48 | context[cindex, "mode"] = a[1];
1900 49 | context[cindex, "language"] = language;
1902 51 | «parse_chunk_args-reset-modes 60c»
1904 53 | error(sprintf("Submode '%s' set unknown mode in text: %s\nLanguage %s Mode %s\n", a[1], text, language, mode));
1905 54 | text = substr(text, 1 + length(part) + length(a[1]));
1908 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1909 In the final case, we parsed to the end of the string. If the string was entire, then we should have no nested mode context, but if the string was just a fragment we may have a mode context which must be preserved for the next fragment. To do: Consideration ought to be given to sub-mode strings that are split over two fragments.
1911 62a <mode_tracker()[10](
\v) ⇑60a, lang=> +≡ ⊲61e
1912 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1914 58 | context[cindex, "values", ++context[cindex, "values"]] = item text;
1920 64 | context["item"] = item;
1922 66 | if (length(item)) context[cindex, "values", ++context[cindex, "values"]] = item;
1925 |________________________________________________________________________
1928 11.6.3.1 One happy chunk
1929 All the mode tracker chunks are referred to here:
1931 62b <mode-tracker[1](
\v), lang=> ≡
1932 ________________________________________________________________________
1933 1 | «new_mode_tracker() 58a»
1934 2 | «mode_tracker() 60a»
1935 |________________________________________________________________________
1939 We can test this function like this:
1941 62c <pca-test.awk[1](
\v), lang=awk> ≡
1942 ________________________________________________________________________
1944 2 | «mode-tracker 62b»
1945 3 | «parse_chunk_args() ?»
1948 6 | «mode-definitions 52b»
1950 8 | «test:mode-definitions 49c»
1952 |________________________________________________________________________
1956 62d <pca-test.awk:summary[1](
\v), lang=awk> ≡
1957 ________________________________________________________________________
1959 2 | printf "Failed " e
1961 4 | print "a[" b "] => " a[b];
1968 |________________________________________________________________________
1971 which should give this output:
1973 63a <pca-test.awk-results[1](
\v), lang=> ≡
1974 ________________________________________________________________________
1975 1 | a[foo.quux.quirk] =>
1976 2 | a[foo.quux.a] => fleeg
1977 3 | a[foo.bar] => baz
1979 5 | a[name] => freddie
1980 |________________________________________________________________________
1983 11.7 Escaping and Quoting
1984 For the time being, and to get around TeXmacs' inability to export a TAB character, the right arrow ↦, whose UTF-8 sequence is 0xE2 0x86 0xA6, is used in place of the TAB character.
1987 Another special character, the left-arrow ↤ with UTF-8 sequence 0xE2 0x86 0xA4, is used to strip any preceding white space as a way of un-tabbing and removing indent that has been applied — this is important for bash here documents, and the like. It's a filthy hack.
1988 To do: remove the hack
1991 63b <mode_tracker[4](
\v) ⇑58c, lang=> +≡ ⊲59b 63c▿
1992 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1994 39 | function untab(text) {
1995 40 | gsub("[[:space:]]*\xE2\x86\xA4","", text);
1998 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1999 Each nested mode can optionally define a set of transforms to be applied to any text that is included from another language.
2000 This code can perform transforms from index c downwards.
2002 63c <mode_tracker[5](
\v) ⇑58c, lang=awk> +≡ ▵63b
2003 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2004 43 | function transform_escape(context, text, top,
2005 44 | c, cp, cpl, s, r)
2007 46 | for(c = top; c >= 0; c--) {
2008 47 | if ( (context[c, "language"], context[c, "mode"]) in escapes) {
2009 48 | cpl = escapes[context[c, "language"], context[c, "mode"]];
2010 49 | for (cp = 1; cp <= cpl; cp ++) {
2011 50 | s = escapes[context[c, "language"], context[c, "mode"], cp, "s"];
2012 51 | r = escapes[context[c, "language"], context[c, "mode"], cp, "r"];
2013 52 | if (length(s)) {
2014 53 | gsub(s, r, text);
2016 55 | if ( (context[c, "language"], context[c, "mode"], cp, "t") in escapes ) {
2017 56 | quotes[src, "t"] = escapes[context[c, "language"], context[c, "mode"], cp, "t"];
2024 63 | function dump_escaper(quotes, r, cc) {
2025 64 | for(cc=1; cc<=c; cc++) {
2026 65 | printf("%2d s[%s] r[%s]\n", cc, quotes[cc, "s"], quotes[cc, "r"]) > "/dev/stderr"
2029 |________________________________________________________________________
2033 64a <test:escapes[1](
\v), lang=sh> ≡
2034 ________________________________________________________________________
2035 1 | echo escapes test
2036 2 | passtest $FANGLE -Rtest:comment-quote $TXT_SRC &>/dev/null || ( echo "Comment-quote failed" && exit 1 )
2037 |________________________________________________________________________
2040 Chapter 12 Recognizing Chunks
2041 Fangle recognizes noweb chunks, but as we also want better LaTeX integration we will recognize any of these:
2042 • notangle chunks matching the pattern ^<<.*?>>=
2043 • chunks beginning with \begin{lstlisting}, possibly with \Chunk{...} on the previous line
2044 • an older form I have used, beginning with \begin{Chunk}[options] --- also more suitable for plain LaTeX users1. Is there such a thing as plain LaTeX? ^1.
2046 The variable chunking is used to signify that we are processing a code chunk and not document text. In such a state, input lines will be assigned to the current chunk; otherwise they are ignored.
2048 We don't handle TeXmacs files natively yet, but instead emit unicode character sequences to mark up the text-export file, which we do process.
2049 These hacks detect the unicode character sequences and retro-fit in the old TeX parsing.
2050 We convert ↦ into a tab character.
2052 65a <recognize-chunk[1](
\v), lang=> ≡ 65b▿
2053 ________________________________________________________________________
2056 2 | # gsub("\n*$","");
2057 3 | # gsub("\n", " ");
2060 6 | /\xE2\x86\xA6/ {
2061 7 | gsub("\\xE2\\x86\\xA6", "\x09");
2063 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2064 TeXmacs back-tick handling is obscure, and a cut-n-paste back-tick from a shell window comes out as a unicode sequence2. that won't export to html, except as a NULL character (literal 0x00) ^2 that is fixed-up here.
2066 65b <recognize-chunk[2](
\v) ⇑65a, lang=> +≡ ▵65a 66a⊳
2067 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2069 10 | /\xE2\x80\x98/ {
2070 11 | gsub("\\xE2\\x80\\x98", "‘");
2072 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2073 In the TeXmacs output, the start of a chunk will appear like this:
2074 5b<example-chunk^K[1](arg1,^K arg2^K^K), lang=C> ≡
2075 We detect the start of a TeXmacs chunk by looking for the ≡ symbol which occurs near the end of the line. We obtain the chunk name, the chunk parameters, and the chunk language.
2077 66a <recognize-chunk[3](
\v) ⇑65a, lang=> +≡ ⊲65b 66b▿
2078 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2080 14 | /\xE2\x89\xA1/ {
2081 15 | if (match($0, "^ *([^[ ]* |)<([^[ ]*)\\[[0-9]*\\][(](.*)[)].*, lang=([^ ]*)>", line)) {
2082 16 | next_chunk_name=line[2];
2083 17 | get_texmacs_chunk_args(line[3], next_chunk_params);
2084 18 | gsub(ARG_SEPARATOR ",? ?", ";", line[3]);
2085 19 | params = "params=" line[3];
2086 20 | if ((line[4])) {
2087 21 | params = params ",language=" line[4]
2089 23 | get_tex_chunk_args(params, next_chunk_opts);
2090 24 | new_chunk(next_chunk_name, next_chunk_opts, next_chunk_params);
2091 25 | texmacs_chunking = 1;
2093 27 | # warning(sprintf("Unexpected chunk match: %s\n", $_))
2097 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2099 Our current scheme is to recognize the new lstlisting chunks, but these may be preceded by a \Chunk command, which in L Y X is a more convenient way to pass the chunk name to the \begin{lstlisting} command, and a more visible way to specify other lstset settings.
2100 The arguments to the \Chunk command are a name, and then a comma-separated list of key-value pairs after the manner of \lstset. (In fact within the LaTeX \Chunk macro (section 17.2.1) the text name= is prefixed to the argument which is then literally passed to \lstset).
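For illustration (the chunk name and language here are invented), a line such as

    \Chunk{hello-world, language=C}

names the chunk hello-world, and the macro passes name=hello-world, language=C on to \lstset.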
2102 66b <recognize-chunk[4](
\v) ⇑65a, lang=awk> +≡ ▵66a 66c▿
2103 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2105 32 | if (match($0, "^\\\\Chunk{ *([^ ,}]*),?(.*)}", line)) {
2106 33 | next_chunk_name = line[1];
2107 34 | get_tex_chunk_args(line[2], next_chunk_opts);
2111 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2112 We also make a basic attempt to parse the name out of the \lstlisting[name=chunk-name] text; otherwise we fall back to the name found in the previous chunk command. This attempt is very basic and doesn't support commas or spaces or square brackets as part of the chunkname. We also recognize \begin{Chunk} which is convenient for some users3. but not yet supported in the LaTeX macros ^3.
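For example (again with an invented name), a listing opened as

    \begin{lstlisting}[name=print-usage]

yields the chunk name print-usage, whereas a bare \begin{lstlisting} falls back on the name given by any preceding \Chunk command.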
2114 66c <recognize-chunk[5](
\v) ⇑65a, lang=> +≡ ▵66b 67a⊳
2115 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2116 38 | /^\\begin{lstlisting}|^\\begin{Chunk}/ {
2117 39 | if (match($0, "}.*[[,] *name= *{? *([^], }]*)", line)) {
2118 40 | new_chunk(line[1]);
2120 42 | new_chunk(next_chunk_name, next_chunk_opts);
2125 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2128 A chunk body in TeXmacs ends with |________... if it is the final chunklet of a chunk, or if there are further chunklets it ends with |\/\/\/... which is a depiction of a jagged line of torn paper.
2130 67a <recognize-chunk[6](
\v) ⇑65a, lang=> +≡ ⊲66c 67b▿
2131 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2132 47 | /^ *\|____________*/ && texmacs_chunking {
2133 48 | active_chunk="";
2134 49 | texmacs_chunking=0;
2137 52 | /^ *\|\/\\/ && texmacs_chunking {
2138 53 | texmacs_chunking=0;
2140 55 | active_chunk="";
2142 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2143 It has been observed that not every line of output while a TeXmacs chunk is active is a line of chunk content. This may no longer be true, but we set a variable texmacs_chunk if the current line is a chunk line.
2144 Initially we set this to zero...
2146 67b <recognize-chunk[7](
\v) ⇑65a, lang=> +≡ ▵67a 67c▿
2147 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2148 57 | texmacs_chunk=0;
2149 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2150 ...and then we look to see if the current line is a chunk line.
2151 TeXmacs lines look like this: 3 | main() { so we detect the lines by leading white space, digits, more white space and a vertical bar followed by at least one space.
2152 If we find such a line, we remove this line-header and set texmacs_chunk=1 as well as chunking=1.
2154 67c <recognize-chunk[8](
\v) ⇑65a, lang=> +≡ ▵67b 67d▿
2155 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2156 58 | /^ *[1-9][0-9]* *\| / {
2157 59 | if (texmacs_chunking) {
2159 61 | texmacs_chunk=1;
2160 62 | gsub("^ *[1-9][0-9]* *\\| ", "")
2163 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2164 When TeXmacs chunking, lines that commence with \/ or __ are not chunk content but visual framing, and are skipped.
2166 67d <recognize-chunk[9](
\v) ⇑65a, lang=> +≡ ▵67c 68a⊳
2167 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2168 65 | /^ *\.\/\\/ && texmacs_chunking {
2171 68 | /^ *__*$/ && texmacs_chunking {
2174 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2175 Any other line encountered while TeXmacs chunking is considered to be a line-wrapped continuation line.
2177 68a <recognize-chunk[10](
\v) ⇑65a, lang=> +≡ ⊲67d 68b▿
2178 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2179 71 | texmacs_chunking {
2180 72 | if (! texmacs_chunk) {
2181 73 | # must be a texmacs continued line
2183 75 | texmacs_chunk=1;
2186 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2187 This final chunklet seems bogus and probably stops L Y X working.
2189 68b <recognize-chunk[11](
\v) ⇑65a, lang=> +≡ ▵68a 68c▿
2190 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2191 78 | ! texmacs_chunk {
2192 79 | # texmacs_chunking=0;
2195 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2197 We recognize notangle style chunks too:
2199 68c <recognize-chunk[12](
\v) ⇑65a, lang=awk> +≡ ▵68b 68d▿
2200 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2201 82 | /^[<]<.*[>]>=/ {
2202 83 | if (match($0, "^[<]<(.*)[>]>= *$", line)) {
2204 85 | notangle_mode=1;
2205 86 | new_chunk(line[1]);
2209 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2211 Likewise, we need to recognize when a chunk ends.
2213 The e in [e]nd{lstlisting} is surrounded by square brackets so that when this document is processed, this chunk doesn't terminate early when the lstlistings package recognizes its own end-string!4. This doesn't make sense as the regex is anchored with ^, which this line does not begin with! ^4
2215 68d <recognize-chunk[13](
\v) ⇑65a, lang=> +≡ ▵68c 69a⊳
2216 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2217 90 | /^\\[e]nd{lstlisting}|^\\[e]nd{Chunk}/ {
2219 92 | active_chunk="";
2222 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2225 69a <recognize-chunk[14](
\v) ⇑65a, lang=> +≡ ⊲68d 69b▿
2226 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2229 97 | active_chunk="";
2231 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2232 All other recognizers are only in effect if we are chunking; there's no point in looking at lines if they aren't part of a chunk, so we just ignore them as efficiently as we can.
2234 69b <recognize-chunk[15](
\v) ⇑65a, lang=> +≡ ▵69a 69c▿
2235 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2236 99 | ! chunking { next; }
2237 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2239 Chunk contents are any lines read while chunking is true. Some chunk contents are special in that they refer to other chunks, and will be replaced by the contents of these chunks when the file is generated.
2240 We add the output record separator ORS to the line now, because we will set ORS to the empty string when we generate the output5. So that we can print partial lines using print instead of printf. ^5
2241 To do: This doesn't make sense
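The intent can be sketched with plain awk (not fangle code): once ORS is empty, print appends nothing, so a line can be assembled from several partial prints and the newline is the one that was attached to the input line:

    awk 'BEGIN { ORS=""; print "partial "; print "line"; print "\n" }'
    # emits: partial line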
2244 69c <recognize-chunk[16](
\v) ⇑65a, lang=> +≡ ▵69b
2245 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2246 100 | length(active_chunk) {
2247 101 | «process-chunk-tabs 69e»
2248 102 | «process-chunk 70b»
2250 |________________________________________________________________________
2253 If a chunk just consisted of plain text, we could handle the chunk like this:
2255 69d <process-chunk-simple[1](
\v), lang=> ≡
2256 ________________________________________________________________________
2257 1 | chunk_line(active_chunk, $0 ORS);
2258 |________________________________________________________________________
2261 but in fact a chunk can include references to other chunks. Chunk includes are traditionally written as <<chunk-name>> but we support other variations, some of which are more suitable for particular editing systems.
2262 However, we also process tabs at this point. A tab at input can be replaced by a number of spaces defined by the tabs variable, set by the -T option. Of course this is poor tab behaviour; we should probably have the option to use proper counted tab-stops and process this on output.
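For example (the file and chunk names are only illustrative), to expand each tab to four spaces while extracting a root:

    fangle -T 4 -R./hello.c doc.tex > hello.c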
2264 69e <process-chunk-tabs[1](
\v), lang=> ≡
2265 ________________________________________________________________________
2266 1 | if (length(tabs)) {
2267 2 | gsub("\t", tabs);
2269 |________________________________________________________________________
2273 If \lstset{escapeinside={=<}{>}} is set, then we can use <chunk-name ?> in listings. The sequence =< was chosen because:
2274 1. it is a better mnemonic than <<chunk-name>> in that the = sign signifies equivalence or substitutability.
2275 2. and because =< is not valid in C or any language I can think of.
2276 3. and also because lstlistings doesn't like >> as an end delimiter for the texcl escape, so we must make do with a single > which is better complemented by =< than by <<.
2277 Unfortunately the =<...> that we use re-enters a LaTeX parsing mode in which some characters are special, e.g. # \ and so these cause trouble if used in arguments to \chunkref. At some point I must fix the LaTeX command \chunkref so that it can accept these literally, but until then, when writing chunkref arguments that need these characters, I must use the forms \textbackslash{} and \#; so I also define a hacky chunk delatex to be used further on whose purpose is to remove these from any arguments parsed by fangle.
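For illustration (the chunk and argument are invented), a reference whose argument needs a literal # would be written:

    =<\chunkref{emit-cpp-line}(\#include)>

and the delatex chunk turns the captured argument \#include back into #include before it is substituted into the included chunk.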
2279 70a <delatex[1](text
\v\v), lang=> ≡
2280 ________________________________________________________________________
2282 2 | gsub("\\\\#", "#", ${text});
2283 3 | gsub("\\\\textbackslash{}", "\\", ${text});
2284 4 | gsub("\\\\\\^", "^", ${text});
2285 |________________________________________________________________________
2288 As each chunk line may contain more than one chunk include, we will split out chunk includes in an iterative fashion6. Contrary to our use of split when substituting parameters in chapter ? ^6.
2289 First, as long as the chunk contains a \chunkref command we take as much as we can up to the first \chunkref command.
2290 TeXmacs text output uses «...», which comes out as the unicode byte sequences 0xC2 0xAB ... 0xC2 0xBB. Modern awk will interpret [^\xC2\xBB] as a single unicode character if LANG is set correctly to the sub-type UTF-8, e.g. LANG=en_GB.UTF-8; otherwise [^\xC2\xBB] will be treated as a two-character negated match — but this should not interfere with the function.
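As a rough check of this (assuming gawk, whose match() accepts a third array argument, and a UTF-8 locale), something like the following prints the captured name either way:

    echo '«some-chunk» trailing text' | LANG=en_GB.UTF-8 awk '{ if (match($0, "\xC2\xAB([^\xC2\xBB]*)\xC2\xBB", m)) print m[1]; }'
    # prints: some-chunk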
2292 70b <process-chunk[1](
\v), lang=> ≡ 70c▿
2293 ________________________________________________________________________
2296 3 | while(match(chunk,"(\xC2\xAB)([^\xC2\xBB]*) [^\xC2\xBB]*\xC2\xBB", line) ||
2298 5 | "([=]<\\\\chunkref{([^}>]*)}(\\(.*\\)|)>|<<([a-zA-Z_][-a-zA-Z0-9_]*)>>)",
2301 8 | chunklet = substr(chunk, 1, RSTART - 1);
2302 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2303 We keep track of the indent count, by counting the number of literal characters found. We can then preserve this indent on each output line when multi-line chunks are expanded.
2304 We then process this first part literal text, and set the chunk which is still to be processed to be the text after the \chunkref command, which we will process next as we continue around the loop.
2306 70c <process-chunk[2](
\v) ⇑70b, lang=> +≡ ▵70b 71a⊳
2307 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2308 9 | indent += length(chunklet);
2309 10 | chunk_line(active_chunk, chunklet);
2310 11 | chunk = substr(chunk, RSTART + RLENGTH);
2311 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2312 We then consider the type of chunk command we have found, whether it is the fangle style command beginning with =< or the older notangle style beginning with <<.
2313 Fangle chunks may have parameters contained within square brackets. These will be matched in line[3] and are considered at this stage of processing to be part of the name of the chunk to be included.
2315 71a <process-chunk[3](
\v) ⇑70b, lang=> +≡ ⊲70c 71b▿
2316 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2317 12 | if (substr(line[1], 1, 1) == "=") {
2318 13 | # chunk name up to }
2319 14 | «delatex
\v(line[3]
\v) 70a»
2320 15 | chunk_include(active_chunk, line[2] line[3], indent);
2321 16 | } else if (substr(line[1], 1, 1) == "<") {
2322 17 | chunk_include(active_chunk, line[4], indent);
2323 18 | } else if (line[1] == "\xC2\xAB") {
2324 19 | chunk_include(active_chunk, line[2], indent);
2326 21 | error("Unknown chunk fragment: " line[1]);
2328 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2329 The loop will continue until there are no more chunkref statements in the text, at which point we process the final part of the chunk.
2331 71b <process-chunk[4](
\v) ⇑70b, lang=> +≡ ▵71a 71c▿
2332 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2334 24 | chunk_line(active_chunk, chunk);
2335 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2336 We add the newline character as a chunklet on its own, to make it easier to detect new lines and thus manage indentation when processing the output.
2338 71c <process-chunk[5](
\v) ⇑70b, lang=> +≡ ▵71b
2339 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2340 25 | chunk_line(active_chunk, "\n");
2341 |________________________________________________________________________
2344 We will also permit a chunk-part number to follow in square brackets, so that <chunk-name[1] ?> will refer to the first part only. This can make it easy to include a C function prototype in a header file, if the first part of the chunk is just the function prototype without the trailing semi-colon. The header file would include the prototype with the trailing semi-colon, like this:
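    =<\chunkref{my-function[1]}>;

Here my-function is just a placeholder chunk name to show the form; part 1 supplies the bare prototype and the header file provides the trailing semi-colon itself.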
2346 This is handled in section 14.1.1
2347 We should perhaps introduce a notion of language specific chunk options; so that perhaps we could specify:
2348 =<\chunkref{chunk-name[function-declaration]}>
2349 which applies a transform function-declaration to the chunk --- which in this case would extract a function prototype from a function.
2352 Chapter 13 Processing Options
2353 At the start, first we set the default options.
2355 73a <default-options[1](
\v), lang=> ≡
2356 ________________________________________________________________________
2359 3 | notangle_mode=0;
2362 |________________________________________________________________________
2365 Then we use getopt in the standard way, and null out ARGV afterwards in the normal AWK fashion.
2367 73b <read-options[1](
\v), lang=> ≡
2368 ________________________________________________________________________
2369 1 | Optind = 1 # skip ARGV[0]
2370 2 | while(getopt(ARGC, ARGV, "R:LdT:hr")!=-1) {
2371 3 | «handle-options 73c»
2373 5 | for (i=1; i<Optind; i++) { ARGV[i]=""; }
2374 |________________________________________________________________________
2377 This is how we handle our options:
2379 73c <handle-options[1](
\v), lang=> ≡
2380 ________________________________________________________________________
2381 1 | if (Optopt == "R") root = Optarg;
2382 2 | else if (Optopt == "r") root="";
2383 3 | else if (Optopt == "L") linenos = 1;
2384 4 | else if (Optopt == "d") debug = 1;
2385 5 | else if (Optopt == "T") tabs = indent_string(Optarg+0);
2386 6 | else if (Optopt == "h") help();
2387 7 | else if (Optopt == "?") help();
2388 |________________________________________________________________________
2391 We do all of this at the beginning of the program:
2393 73d <begin[1](
\v), lang=> ≡
2394 ________________________________________________________________________
2397 3 | «mode-definitions 52b»
2398 4 | «default-options 73a»
2400 6 | «read-options 73b»
2402 |________________________________________________________________________
2405 And we have a simple help function:
2407 73e <help()[1](
\v), lang=> ≡
2408 ________________________________________________________________________
2409 1 | function help() {
2411 3 | print " fangle [-L] -R<rootname> [source.tex ...]"
2412 4 | print " fangle -r [source.tex ...]"
2413 5 | print " If the filename, source.tex is not specified then stdin is used"
2415 7 | print "-L causes the C statement: #line <lineno> \"filename\" to be issued"
2416 8 | print "-R causes the named root to be written to stdout"
2417 9 | print "-r lists all roots in the file (even those used elsewhere)"
2420 |________________________________________________________________________
2423 Chapter 14 Generating the Output
2424 We generate output by calling output_chunk, or listing the chunk names.
2426 75a <generate-output[1](
\v), lang=> ≡
2427 ________________________________________________________________________
2428 1 | if (length(root)) output_chunk(root);
2429 2 | else output_chunk_names();
2430 |________________________________________________________________________
2433 We also have some other output debugging:
2435 75b <debug-output[1](
\v), lang=> ≡
2436 ________________________________________________________________________
2438 2 | print "------ chunk names "
2439 3 | output_chunk_names();
2440 4 | print "====== chunks"
2441 5 | output_chunks();
2442 6 | print "++++++ debug"
2443 7 | for (a in chunks) {
2444 8 | print a "=" chunks[a];
2447 |________________________________________________________________________
2450 We do both of these at the end. We also set ORS="" because each chunklet is not necessarily a complete line, and we already added ORS to each input line in section 12.4.
2452 75c <end[1](
\v), lang=> ≡
2453 ________________________________________________________________________
2455 2 | «debug-output 75b»
2457 4 | «generate-output 75a»
2459 |________________________________________________________________________
2462 We write chunk names like this. If we seem to be running in notangle compatibility mode, then we enclose the name like this <<name>> the same way notangle does:
2464 75d <output_chunk_names()[1](
\v), lang=> ≡
2465 ________________________________________________________________________
2466 1 | function output_chunk_names( c, prefix, suffix)
2468 3 | if (notangle_mode) {
2472 7 | for (c in chunk_names) {
2473 8 | print prefix c suffix "\n";
2476 |________________________________________________________________________
2479 This function would write out all chunks:
2481 75e <output_chunks()[1](
\v), lang=> ≡
2482 ________________________________________________________________________
2483 1 | function output_chunks( a)
2485 3 | for (a in chunk_names) {
2486 4 | output_chunk(a);
2490 8 | function output_chunk(chunk) {
2492 10 | lineno_needed = linenos;
2494 12 | write_chunk(chunk);
2497 |________________________________________________________________________
2500 14.1 Assembling the Chunks
2501 chunk_path holds a string consisting of the names of all the chunks that resulted in this chunk being output. It should probably also contain the source line numbers at which each inclusion occurred.
2502 We first initialize the mode tracker for this chunk.
2504 76a <write_chunk()[1](
\v), lang=awk> ≡ 76b▿
2505 ________________________________________________________________________
2506 1 | function write_chunk(chunk_name) {
2507 2 | «awk-delete-array
\v(context
\v) 37d»
2508 3 | return write_chunk_r(chunk_name, context);
2511 6 | function write_chunk_r(chunk_name, context, indent, tail,
2513 8 | chunk_path, chunk_args,
2515 10 | context_origin,
2516 11 | chunk_params, part, max_part, part_line, frag, max_frag, text,
2517 12 | chunklet, only_part, call_chunk_args, new_context)
2519 14 | if (debug) debug_log("write_chunk_r(" chunk_name ")");
2520 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2522 As mentioned in section ?, a chunk name may contain a part specifier in square brackets, limiting the parts that should be emitted.
2524 76b <write_chunk()[2](
\v) ⇑76a, lang=> +≡ ▵76a 76c▿
2525 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2526 15 | if (match(chunk_name, "^(.*)\\[([0-9]*)\\]$", chunk_name_parts)) {
2527 16 | chunk_name = chunk_name_parts[1];
2528 17 | only_part = chunk_name_parts[2];
2530 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2531 We then create a mode tracker
2533 76c <write_chunk()[3](
\v) ⇑76a, lang=> +≡ ▵76b 77a⊳
2534 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2535 19 | context_origin = context[""];
2536 20 | new_context = push_mode_tracker(context, chunks[chunk_name, "language"], "");
2537 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2538 We extract into chunk_params the names of the parameters that this chunk accepts, whose values were (optionally) passed in chunk_args.
2540 77a <write_chunk()[4](
\v) ⇑76a, lang=> +≡ ⊲76c 77b▿
2541 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2542 21 | split(chunks[chunk_name, "params"], chunk_params, " *; *");
2543 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2544 To assemble a chunk, we write out each part.
2546 77b <write_chunk()[5](
\v) ⇑76a, lang=> +≡ ▵77a
2547 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2548 22 | if (! (chunk_name in chunk_names)) {
2549 23 | error(sprintf(_"The root module <<%s>> was not defined.\nUsed by: %s",\
2550 24 | chunk_name, chunk_path));
2553 27 | max_part = chunks[chunk_name, "part"];
2554 28 | for(part = 1; part <= max_part; part++) {
2555 29 | if (! only_part || part == only_part) {
2556 30 | «write-part 77c»
2559 33 | if (! pop_mode_tracker(context, context_origin)) {
2560 34 | dump_mode_tracker(context);
2561 35 | error(sprintf(_"Module %s did not close context properly.\nUsed by: %s\n", chunk_name, chunk_path));
2564 |________________________________________________________________________
2567 A part can either be a chunklet of lines, or an include of another chunk.
2568 Chunks may also have parameters, specified in LaTeX style with braces after the chunk name --- looking like this in the document: chunkname{param1, param2}. Arguments are passed in square brackets: \chunkref{chunkname}[arg1, arg2].
2569 Before we process each part, we check that the source position hasn't changed unexpectedly, so that we can know if we need to output a new file-line directive.
2571 77c <write-part[1](
\v), lang=> ≡
2572 ________________________________________________________________________
2573 1 | «check-source-jump 79d»
2575 3 | chunklet = chunks[chunk_name, "part", part];
2576 4 | if (chunks[chunk_name, "part", part, "type"] == part_type_chunk) {
2577 5 | «write-included-chunk 77d»
2578 6 | } else if (chunklet SUBSEP "line" in chunks) {
2579 7 | «write-chunklets 78a»
2581 9 | # empty last chunklet
2583 |________________________________________________________________________
2586 To write an included chunk, we must detect any optional chunk arguments in parentheses. Then we recurse by calling write_chunk_r().
2588 77d <write-included-chunk[1](
\v), lang=> ≡
2589 ________________________________________________________________________
2590 1 | if (match(chunklet, "^([^\\[\\(]*)\\((.*)\\)$", chunklet_parts)) {
2591 2 | chunklet = chunklet_parts[1];
2593 4 | gsub(sprintf("%c",11), "", chunklet);
2594 5 | gsub(sprintf("%c",11), "", chunklet_parts[2]);
2595 6 | parse_chunk_args("c-like", chunklet_parts[2], call_chunk_args, "(");
2596 7 | for (c in call_chunk_args) {
2597 8 | call_chunk_args[c] = expand_chunk_args(call_chunk_args[c], chunk_params, chunk_args);
2600 11 | split("", call_chunk_args);
2603 14 | write_chunk_r(chunklet, context,
2604 15 | chunks[chunk_name, "part", part, "indent"] indent,
2605 16 | chunks[chunk_name, "part", part, "tail"],
2606 17 | chunk_path "\n " chunk_name,
2607 18 | call_chunk_args);
2608 |________________________________________________________________________
2611 Before we output a chunklet of lines, we first emit the file and line number if we have one, and if it is safe to do so.
2612 Chunklets are generally broken up by includes, so the start of a chunklet is a good place to do this. Then we output each line of the chunklet.
2613 When it is not safe, such as in the middle of a multi-line macro definition, lineno_suppressed is set to true, and in such a case we note that we want to emit the line statement when it is next safe.
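For illustration only (not actual fangle output), emitting the directive in the middle of a continued C macro would split it like this:

    #define MAX(a,b) \
    #line 1234 "an-example.tex"
            ((a) > (b) ? (a) : (b))

so while the previous line ended with a backslash we hold the directive back and note that it is still wanted.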
2615 78a <write-chunklets[1](
\v), lang=> ≡ 78b▿
2616 ________________________________________________________________________
2617 1 | max_frag = chunks[chunklet, "line"];
2618 2 | for(frag = 1; frag <= max_frag; frag++) {
2619 3 | «write-file-line 79c»
2620 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2621 We then extract the chunklet text and expand any arguments.
2623 78b <write-chunklets[2](
\v) ⇑78a, lang=> +≡ ▵78a 78c▿
2624 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2626 5 | text = chunks[chunklet, frag];
2628 7 | /* check params */
2629 8 | text = expand_chunk_args(text, chunk_params, chunk_args);
2630 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2631 If the text is a single newline (which we keep separate - see 6) then we increment the line number. In the case where this is the last line of a chunk and it is not a top-level chunk we replace the newline with an empty string --- because the chunk that included this chunk will have the newline at the end of the line that included this chunk.
2632 We also note by newline = 1 that we have started a new line, so that indentation can be managed with the following piece of text.
2634 78c <write-chunklets[3](
\v) ⇑78a, lang=> +≡ ▵78b 78d▿
2635 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2637 10 | if (text == "\n") {
2639 12 | if (part == max_part && frag == max_frag && length(chunk_path)) {
2645 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2646 If this text does not represent a newline, but we see that we are the first piece of text on a newline, then we prefix our text with the current indent.
2647 Note 1. newline is a global output-state variable, but the indent is not.
2649 78d <write-chunklets[4](
\v) ⇑78a, lang=> +≡ ▵78c 79a⊳
2650 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2651 18 | } else if (length(text) || length(tail)) {
2652 19 | if (newline) text = indent text;
2656 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2657 Tail will soon no longer be relevant once mode-detection is in place.
2659 79a <write-chunklets[5](
\v) ⇑78a, lang=> +≡ ⊲78d 79b▿
2660 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2661 23 | text = text tail;
2662 24 | mode_tracker(context, text);
2663 25 | print untab(transform_escape(context, text, new_context));
2664 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2665 If a line ends in a backslash --- suggesting continuation --- then we suppress outputting file-line as it would probably break the continued lines.
2667 79b <write-chunklets[6](
\v) ⇑78a, lang=> +≡ ▵79a
2668 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2670 27 | lineno_suppressed = substr(lastline, length(lastline)) == "\\";
2673 |________________________________________________________________________
2676 Of course there is no point in actually outputting the source filename and line number (file-line) if they don't say anything new! We only need to emit them if they aren't what is expected, or if we were not able to emit one when they had changed.
2678 79c <write-file-line[1](
\v), lang=> ≡
2679 ________________________________________________________________________
2680 1 | if (newline && lineno_needed && ! lineno_suppressed) {
2681 2 | filename = a_filename;
2682 3 | lineno = a_lineno;
2683 4 | print "#line " lineno " \"" filename "\"\n"
2684 5 | lineno_needed = 0;
2686 |________________________________________________________________________
2689 We check if a new file-line is needed by checking if the source line matches what we (or a compiler) would expect.
2691 79d <check-source-jump[1](
\v), lang=> ≡
2692 ________________________________________________________________________
2693 1 | if (linenos && (chunk_name SUBSEP "part" SUBSEP part SUBSEP "FILENAME" in chunks)) {
2694 2 | a_filename = chunks[chunk_name, "part", part, "FILENAME"];
2695 3 | a_lineno = chunks[chunk_name, "part", part, "LINENO"];
2696 4 | if (a_filename != filename || a_lineno != lineno) {
2697 5 | lineno_needed++;
2700 |________________________________________________________________________
2703 Chapter 15 Storing Chunks
2704 Awk has pretty limited data structures, so we will use two main hashes. Uninterrupted sequences of a chunk will be stored in chunklets and the chunklets used in a chunk will be stored in chunks.
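As a simplified sketch (the chunk name hello and its content are invented, and the FILENAME/LINENO bookkeeping is omitted), a chunk with one two-line chunklet followed by one include ends up stored roughly like this:

    BEGIN {
      part_type_chunk = 1;
      chunk_names["hello"];
      chunks["hello", "part"] = 3;
      chunks["hello", "part", 1] = "hello" SUBSEP "chunklet" SUBSEP 1;
      chunks["hello", "chunklet", 1, "line"] = 2;             # two lines in chunklet 1
      chunks["hello", "chunklet", 1, 1] = "printf(\"hi\");";
      chunks["hello", "chunklet", 1, 2] = "\n";
      chunks["hello", "part", 2] = "other-chunk";              # the include
      chunks["hello", "part", 2, "type"] = part_type_chunk;
      chunks["hello", "part", 3] = "hello" SUBSEP "chunklet" SUBSEP 2;  # fresh chunklet primed after the include
    }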
2706 81a <constants[2](
\v) ⇑39a, lang=> +≡ ⊲39a
2707 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2708 2 | part_type_chunk=1;
2710 |________________________________________________________________________
2713 The params mentioned are not chunk parameters for parameterized chunks, as mentioned in 10.2, but the lstlistings style parameters used in the \Chunk command1. The params parameter is used to hold the parameters for parameterized chunks ^1.
2715 81b <chunk-storage-functions[1](
\v), lang=> ≡ 81c▿
2716 ________________________________________________________________________
2717 1 | function new_chunk(chunk_name, opts, args,
2721 5 | # HACK WHILE WE CHANGE TO ( ) for PARAM CHUNKS
2722 6 | gsub("\\(\\)$", "", chunk_name);
2723 7 | if (! (chunk_name in chunk_names)) {
2724 8 | if (debug) print "New chunk " chunk_name;
2725 9 | chunk_names[chunk_name];
2726 10 | for (p in opts) {
2727 11 | chunks[chunk_name, p] = opts[p];
2728 12 | if (debug) print "chunks[" chunk_name "," p "] = " opts[p];
2730 14 | for (p in args) {
2731 15 | chunks[chunk_name, "params", p] = args[p];
2733 17 | if ("append" in opts) {
2734 18 | append=opts["append"];
2735 19 | if (! (append in chunk_names)) {
2736 20 | warning("Chunk " chunk_name " is appended to chunk " append " which is not defined yet");
2737 21 | new_chunk(append);
2739 23 | chunk_include(append, chunk_name);
2740 24 | chunk_line(append, ORS);
2743 27 | active_chunk = chunk_name;
2744 28 | prime_chunk(chunk_name);
2746 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2748 81c <chunk-storage-functions[2](
\v) ⇑81b, lang=> +≡ ▵81b 82a⊳
2749 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2751 31 | function prime_chunk(chunk_name)
2753 33 | chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] = \
2754 34 | chunk_name SUBSEP "chunklet" SUBSEP "" ++chunks[chunk_name, "chunklet"];
2755 35 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "FILENAME"] = FILENAME;
2756 36 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "LINENO"] = FNR + 1;
2759 39 | function chunk_line(chunk_name, line){
2760 40 | chunks[chunk_name, "chunklet", chunks[chunk_name, "chunklet"],
2761 41 | ++chunks[chunk_name, "chunklet", chunks[chunk_name, "chunklet"], "line"] ] = line;
2764 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2765 Chunk include represents a chunkref statement, and stores the requirement to include another chunk. The parameter indent represents the quantity of literal text characters that preceded this chunkref statement and therefore by how much additional lines of the included chunk should be indented.
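For example (loop-body is an invented chunk name), given the chunk line

            =<\chunkref{loop-body}>

the eight leading spaces are the literal text preceding the reference, so indent is 8 and every line of loop-body after the first is output behind eight spaces of padding; had code preceded the reference on the same line, those characters would count towards the indent too.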
2767 82a <chunk-storage-functions[3](
\v) ⇑81b, lang=> +≡ ⊲81c 82b▿
2768 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2769 44 | function chunk_include(chunk_name, chunk_ref, indent, tail)
2771 46 | chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] = chunk_ref;
2772 47 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "type" ] = part_type_chunk;
2773 48 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "indent" ] = indent_string(indent);
2774 49 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "tail" ] = tail;
2775 50 | prime_chunk(chunk_name);
2778 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2779 The indent is calculated by indent_string, which may in future convert some spaces into tab characters. This function works by generating a printf padded format string, like %22s for an indent of 22, and then printing an empty string using that format.
2781 82b <chunk-storage-functions[4](
\v) ⇑81b, lang=> +≡ ▵82a
2782 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2783 53 | function indent_string(indent) {
2784 54 | return sprintf("%" indent "s", "");
2786 |________________________________________________________________________
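As a quick check of the format-string trick (plain awk, not part of fangle):

    awk 'BEGIN { printf("[%s]\n", sprintf("%" 4 "s", "")) }'
    # prints [    ] — four spaces between the brackets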
2790 I use Arnold Robbins' public domain getopt (1993 revision). This is probably the same one that is covered in chapter 12 of "Edition 3 of GAWK: Effective AWK Programming: A User's Guide for GNU Awk", but as that is licensed under the GNU Free Documentation License, Version 1.3, which conflicts with the GPL3, I can't use it from there (or its accompanying explanations), so I do my best to explain how it works here.
2791 The getopt.awk header is:
2793 83a <getopt.awk-header[1](
\v), lang=> ≡
2794 ________________________________________________________________________
2795 1 | # getopt.awk --- do C library getopt(3) function in awk
2797 3 | # Arnold Robbins, arnold@skeeve.com, Public Domain
2799 5 | # Initial version: March, 1991
2800 6 | # Revised: May, 1993
2802 |________________________________________________________________________
2805 The provided explanation is:
2807 83b <getopt.awk-notes[1](
\v), lang=> ≡
2808 ________________________________________________________________________
2809 1 | # External variables:
2810 2 | # Optind -- index in ARGV of first nonoption argument
2811 3 | # Optarg -- string value of argument to current option
2812 4 | # Opterr -- if nonzero, print our own diagnostic
2813 5 | # Optopt -- current option letter
2816 8 | # -1 at end of options
2817 9 | # ? for unrecognized option
2818 10 | # <c> a character representing the current option
2820 12 | # Private Data:
2821 13 | # _opti -- index in multi-flag option, e.g., -abc
2823 |________________________________________________________________________
2826 The function follows. The final two parameters, thisopt and i, are local variables and not parameters --- as indicated by the multiple spaces preceding them. Awk doesn't care; the multiple spaces are a convention to help us humans.
2828 83c <getopt.awk-getopt()[1](
\v), lang=> ≡ 84a⊳
2829 ________________________________________________________________________
2830 1 | function getopt(argc, argv, options, thisopt, i)
2832 3 | if (length(options) == 0) # no options given
2834 5 | if (argv[Optind] == "--") { # all done
2838 9 | } else if (argv[Optind] !~ /^-[^: \t\n\f\r\v\b]/) {
2842 13 | if (_opti == 0)
2844 15 | thisopt = substr(argv[Optind], _opti, 1)
2845 16 | Optopt = thisopt
2846 17 | i = index(options, thisopt)
2849 20 | printf("%c -- invalid option\n",
2850 21 | thisopt) > "/dev/stderr"
2851 22 | if (_opti >= length(argv[Optind])) {
2858 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2859 At this point, the option has been found and we need to know if it takes any arguments.
2861 84a <getopt.awk-getopt()[2](
\v) ⇑83c, lang=> +≡ ⊲83c
2862 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2863 29 | if (substr(options, i + 1, 1) == ":") {
2864 30 | # get option argument
2865 31 | if (length(substr(argv[Optind], _opti + 1)) > 0)
2866 32 | Optarg = substr(argv[Optind], _opti + 1)
2868 34 | Optarg = argv[++Optind]
2872 38 | if (_opti == 0 || _opti >= length(argv[Optind])) {
2879 |________________________________________________________________________
2882 A test program is built in, too:
2884 84b <getopt.awk-begin[1](
\v), lang=> ≡
2885 ________________________________________________________________________
2887 2 | Opterr = 1 # default is to diagnose
2888 3 | Optind = 1 # skip ARGV[0]
2890 5 | if (_getopt_test) {
2891 6 | while ((_go_c = getopt(ARGC, ARGV, "ab:cd")) != -1)
2892 7 | printf("c = <%c>, optarg = <%s>\n",
2894 9 | printf("non-option arguments:\n")
2895 10 | for (; Optind < ARGC; Optind++)
2896 11 | printf("\tARGV[%d] = <%s>\n",
2897 12 | Optind, ARGV[Optind])
2900 |________________________________________________________________________
2903 The entire getopt.awk is made out of these chunks, in order:
2905 84c <getopt.awk[1](
\v), lang=> ≡
2906 ________________________________________________________________________
2907 1 | «getopt.awk-header 83a»
2909 3 | «getopt.awk-notes 83b»
2910 4 | «getopt.awk-getopt() 83c»
2911 5 | «getopt.awk-begin 84b»
2912 |________________________________________________________________________
2915 Although we only want the header and function:
2917 85a <getopt[1](
\v), lang=> ≡
2918 ________________________________________________________________________
2919 1 | # try: locate getopt.awk for the full original file
2920 2 | # as part of your standard awk installation
2921 3 | «getopt.awk-header 83a»
2923 5 | «getopt.awk-getopt() 83c»
2924 |________________________________________________________________________
2927 Chapter 17 Fangle LaTeX source code
2929 Here we define a L Y X .module file that makes it convenient to use L Y X for writing such literate programs.
2930 This file ./fangle.module can be installed in your personal .lyx/layouts folder. You will need to run Tools > Reconfigure so that L Y X notices it. It adds a new format, Chunk, which should precede every listing and contain the chunk name.
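For example (assuming the personal L Y X directory is ~/.lyx):

    cp fangle.module ~/.lyx/layouts/

then run Tools > Reconfigure and restart L Y X so that the new Chunk style appears.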
2932 87a <./fangle.module[1](
\v), lang=lyx-module> ≡
2933 ________________________________________________________________________
2934 1 | #\DeclareLyXModule{Fangle Literate Listings}
2935 2 | #DescriptionBegin
2936 3 | # Fangle literate listings allow one to write
2937 4 | # literate programs after the fashion of noweb, but without having
2938 5 | # to use noweave to generate the documentation. Instead the listings
2939 6 | # package is extended in conjunction with the noweb package to implement
2940 7 | # the code formatting directly as latex.
2941 8 | # The fangle awk script
2944 11 | «gpl3-copyright.hashed 87b»
2949 16 | «./fangle.sty 88d»
2952 19 | «chunkstyle 88a»
2955 |________________________________________________________________________
2958 Because L Y X modules are not yet a language supported by fangle or lstlistings, we resort to this fake awk chunk below in order to have each line of the GPL3 license commence with a #.
2960 87b <gpl3-copyright.hashed[1](
\v), lang=awk> ≡
2961 ________________________________________________________________________
2962 1 | #«gpl3-copyright 4a»
2964 |________________________________________________________________________
2967 17.1.1 The Chunk style
2968 The purpose of the chunk style is to make it easier for L Y X users to provide the name to lstlistings. Normally this requires right-clicking on the listing, choosing settings, advanced, and then typing name=chunk-name. This has the further disadvantage that the name (and other options) are not generally visible during document editing.
2969 The chunk style is defined as a LaTeX command, so that all text on the same line is passed to the LaTeX command Chunk. This makes it easy to parse using fangle, and easy to pass these options on to the listings package. The first word in a chunk section should be the chunk name, and will have name= prepended to it. Any other words are accepted arguments to lstset.
2970 We set PassThru to 1 because the user is actually entering raw latex.
2972 88a <chunkstyle[1](
\v), lang=> ≡ 88b▿
2973 ________________________________________________________________________
2975 2 | LatexType Command
2977 4 | Margin First_Dynamic
2978 5 | LeftMargin Chunk:xxx
2980 7 | LabelType Static
2981 8 | LabelString "Chunk:"
2985 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2986 To make the label very visible we choose a larger font coloured red.
2988 88b <chunkstyle[2](
\v) ⇑88a, lang=> +≡ ▵88a
2989 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2998 |________________________________________________________________________
3001 17.1.2 The chunkref style
3002 We also define the Chunkref style which can be used to express cross references to chunks.
3004 88c <chunkref[1](
\v), lang=> ≡
3005 ________________________________________________________________________
3006 1 | InsetLayout Chunkref
3007 2 | LyxType charstyle
3008 3 | LatexType Command
3009 4 | LatexName chunkref
3016 |________________________________________________________________________
3020 We require the listings, noweb and xargs packages. As noweb defines its own \code environment, we re-define the one that the L Y X logical markup module expects here.
3022 88d <./fangle.sty[1](
\v), lang=tex> ≡ 89a⊳
3023 ________________________________________________________________________
3024 1 | \usepackage{listings}%
3025 2 | \usepackage{noweb}%
3026 3 | \usepackage{xargs}%
3027 4 | \renewcommand{\code}[1]{\texttt{#1}}%
3028 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3029 We also define a CChunk macro, for use as: \begin{CChunk} which will need renaming to \begin{Chunk} when I can do this without clashing with \Chunk.
3031 89a <./fangle.sty[2](
\v) ⇑88d, lang=> +≡ ⊲88d 89b▿
3032 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3033 5 | \lstnewenvironment{Chunk}{\relax}{\relax}%
3034 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3035 We also define a suitable set of \lstset parameters that suit the literate programming style, after the fashion of noweave.
3037 89b <./fangle.sty[3](
\v) ⇑88d, lang=> +≡ ▵89a 89c▿
3038 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3039 6 | \lstset{numbers=left, stepnumber=5, numbersep=5pt,
3040 7 | breaklines=false,basicstyle=\ttfamily,
3041 8 | numberstyle=\tiny, language=C}%
3042 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3043 We also define a notangle-like mechanism for escaping to LaTeX from the listing, and by which we can refer to other listings. We declare the =<...> sequence to contain LaTeX code, and include another like this chunk: <chunkname ?>. However, because =<...> is already defined to contain LaTeX code for this document --- this is a fangle document after all --- the code fragment below effectively contains the LaTeX code: }{. To avoid problems with document generation, I had to declare an lstlistings property: escapeinside={} for this listing only; which in L Y X was done by right-clicking the listings inset, choosing settings->advanced. Therefore =< isn't interpreted literally here, in a listing when the escape sequence is already defined as shown... we need to somehow escape this representation...
3045 89c <./fangle.sty[4](
\v) ⇑88d, lang=> +≡ ▵89b 89d▿
3046 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3047 9 | \lstset{escapeinside={=<}{>}}%
3048 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3049 Although our macros will contain the @ symbol, they will be included in a \makeatletter section by L Y X; however we keep the commented out \makeatletter as a reminder. The listings package likes to centre the titles, but noweb titles are specially formatted and must be left aligned. The simplest way to do this turned out to be by removing the definition of \lst@maketitle. This may interact badly if other listings want a regular title or caption. We remember the old maketitle in case we need it.
3051 89d <./fangle.sty[5](
\v) ⇑88d, lang=> +≡ ▵89c 89e▿
3052 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3054 11 | %somehow re-defining maketitle gives us a left-aligned title
3055 12 | %which is exactly what our specially formatted title needs!
3056 13 | \global\let\fangle@lst@maketitle\lst@maketitle%
3057 14 | \global\def\lst@maketitle{}%
3058 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3059 17.2.1 The chunk command
3060 Our chunk command accepts one argument, and calls \lstset. Although \lstset will note the name, this is erased when the next \lstlisting starts, so we make a note of this in \lst@chunkname and restore it in the lstlistings Init hook.
3062 89e <./fangle.sty[6](
\v) ⇑88d, lang=> +≡ ▵89d 90a⊳
3063 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3065 16 | \lstset{title={\fanglecaption},name=#1}%
3066 17 | \global\edef\lst@chunkname{\lst@intname}%
3068 19 | \def\lst@chunkname{\empty}%
3069 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3070 17.2.1.1 Chunk parameters
3071 Fangle permits parameterized chunks, and requires the parameters to be specified as listings options. The fangle script uses this, and although we don't do anything with these in the LaTeX code right now, we need to stop the listings package from complaining.
3073 90a <./fangle.sty[7](
\v) ⇑88d, lang=> +≡ ⊲89e 90b▿
3074 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3075 20 | \lst@Key{params}\relax{\def\fangle@chunk@params{#1}}%
3076 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3077 As it is common to define a chunk which then needs appending to another chunk, and annoying to have to declare a single line chunk to manage the include, we support an append= option.
3079 90b <./fangle.sty[8](
\v) ⇑88d, lang=> +≡ ▵90a 90c▿
3080 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3081 21 | \lst@Key{append}\relax{\def\fangle@chunk@append{#1}}%
3082 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3083 17.2.2 The noweb styled caption
3084 We define a public macro \fanglecaption which can be set as a regular title. By means of \protect, it expands to \fangle@caption at the appropriate time, when the caption is emitted.
3086 90c <./fangle.sty[9](
\v) ⇑88d, lang=> +≡ ▵90b 90d▿
3087 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3088 \def\fanglecaption{\protect\fangle@caption}%
3089 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3090 22c ⟨some-chunk 19b⟩+≡ ⊲22b 24d⊳
3092 In this example, the current chunk is 22c, and therefore the third chunk on page 22.
3093 Its name is some-chunk.
3094 The first chunk with this name (19b) occurs as the second chunk on page 19.
3095 The previous chunk (22b) with the same name is the second chunk on page 22.
3096 The next chunk (24d) is the fourth chunk on page 24.
3098 Figure 1. Noweb Heading
3100 The general noweb output format compactly identifies the current chunk, and references to the first chunk, and the previous and next chunks that have the same name.
3101 This means that we need to keep a counter for each chunk-name, that we use to count chunks of the same name.
3102 17.2.3 The chunk counter
3103 It would be natural to have a counter for each chunk name, but TeX would soon run out of counters1. ...soon did run out of counters and so I had to re-write the LaTeX macros to share a counter as described here. ^1, so we have one counter which we save at the end of a chunk and restore at the beginning of a chunk.
3105 90d <./fangle.sty[10](
\v) ⇑88d, lang=> +≡ ▵90c 91c⊳
3106 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3107 22 | \newcounter{fangle@chunkcounter}%
3108 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3109 We construct the name of the variable that will store the counter as the text lst-chunk- prefixed onto the chunk's own name, and keep that name in \chunkcount.
3110 We save the counter like this:
3112 91a <save-counter[1](
\v), lang=> ≡
3113 ________________________________________________________________________
3114 \global\expandafter\edef\csname \chunkcount\endcsname{\arabic{fangle@chunkcounter}}%
3115 |________________________________________________________________________
3118 and restore the counter like this:
3120 91b <restore-counter[1](
\v), lang=> ≡
3121 ________________________________________________________________________
3122 \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
3123 |________________________________________________________________________
3126 If there does not already exist a variable whose name is stored in \chunkcount, then we know we are the first chunk with this name, and so we define a counter.
3127 Although chunks of the same name share a common counter, they must still be distinguished. We use the internal name of the listing, suffixed by the counter value. So the first chunk might be something-1 and the second chunk something-2, etc.
3128 We also calculate the name of the previous chunk if we can (before we increment the chunk counter). If this is the first chunk of that name, then \prevchunkname is set to \relax which the noweb package will interpret as not existing.
3130 91c <./fangle.sty[11](
\v) ⇑88d, lang=> +≡ ⊲90d 91d▿
3131 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3132 23 | \def\fangle@caption{%
3133 24 | \edef\chunkcount{lst-chunk-\lst@intname}%
3134 25 | \@ifundefined{\chunkcount}{%
3135 26 | \expandafter\gdef\csname \chunkcount\endcsname{0}%
3136 27 | \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
3137 28 | \let\prevchunkname\relax%
3139 30 | \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
3140 31 | \edef\prevchunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
3142 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3143 After incrementing the chunk counter, we then define the name of this chunk, as well as the name of the first chunk.
3145 91d <./fangle.sty[12](
\v) ⇑88d, lang=> +≡ ▵91c 91e▿
3146 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3147 33 | \addtocounter{fangle@chunkcounter}{1}%
3148 34 | \global\expandafter\edef\csname \chunkcount\endcsname{\arabic{fangle@chunkcounter}}%
3149 35 | \edef\chunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
3150 36 | \edef\firstchunkname{\lst@intname-1}%
3151 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3152 We now need to calculate the name of the next chunk. We do this by temporarily skipping the counter on by one; however there may not actually be another chunk with this name! We detect this by also defining a label for each chunk based on the chunkname. If there is a next chunkname then it will define a label with that name. As labels are persistent, we can at least tell the second time LaTeX is run. If we don't find such a defined label then we define \nextchunkname to \relax.
3154 91e <./fangle.sty[13](
\v) ⇑88d, lang=> +≡ ▵91d 92a⊳
3155 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3156 37 | \addtocounter{fangle@chunkcounter}{1}%
3157 38 | \edef\nextchunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
3158 39 | \@ifundefined{r@label-\nextchunkname}{\let\nextchunkname\relax}{}%
3159 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3160 The noweb package requires that we define a \sublabel for every chunk, with a unique name, which is then used to print out its navigation hints.
3161 We also define a regular label for this chunk, as was mentioned above when we calculated \nextchunkname. This requires LaTeX to be run at least twice after new chunk sections are added --- but noweb required that anyway.
3163 92a <./fangle.sty[14](
\v) ⇑88d, lang=> +≡ ⊲91e 92b▿
3164 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3165 40 | \sublabel{\chunkname}%
3166 41 | % define this label for every chunk instance, so we
3167 42 | % can tell when we are the last chunk of this name
3168 43 | \label{label-\chunkname}%
3169 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3170 We also try to add the chunk to the list of listings, but I'm afraid we don't do very well. We want each chunk name listed once, with all of its references.
3172 92b <./fangle.sty[15](
\v) ⇑88d, lang=> +≡ ▵92a 92c▿
3173 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3174 44 | \addcontentsline{lol}{lstlisting}{\lst@name~[\protect\subpageref{\chunkname}]}%
3175 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3176 We then call the noweb output macros in the same way that noweave generates them, except that we don't need to call \nwstartdeflinemarkup or \nwenddeflinemarkup — and if we do, it messes up the output somewhat.
3178 92c <./fangle.sty[16](
\v) ⇑88d, lang=> +≡ ▵92b 92d▿
3179 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3183 48 | \subpageref{\chunkname}%
3190 55 | \nwtagstyle{}\/%
3191 56 | \@ifundefined{fangle@chunk@params}{}{%
3192 57 | (\fangle@chunk@params)%
3194 59 | [\csname \chunkcount\endcsname]~%
3195 60 | \subpageref{\firstchunkname}%
3197 62 | \@ifundefined{fangle@chunk@append}{}{%
3198 63 | \ifx{}\fangle@chunk@append{x}\else%
3199 64 | ,~add~to~\fangle@chunk@append%
3202 67 | \global\def\fangle@chunk@append{}%
3203 68 | \lstset{append=x}%
3206 71 | \ifx\relax\prevchunkname\endmoddef\else\plusendmoddef\fi%
3207 72 | % \nwstartdeflinemarkup%
3208 73 | \nwprevnextdefs{\prevchunkname}{\nextchunkname}%
3209 74 | % \nwenddeflinemarkup%
3211 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3212 Originally this was developed as a listings aspect, in the Init hook, but it was found easier to affect the title without using a hook — \lst@AddToHookExe{PreSet} is still required to set the listings name to the name passed to the \Chunk command, though.
3214 92d <./fangle.sty[17](
\v) ⇑88d, lang=> +≡ ▵92c 93a⊳
3215 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3216 76 | %\lst@BeginAspect{fangle}
3217 77 | %\lst@Key{fangle}{true}[t]{\lstKV@SetIf{#1}{true}}
3218 78 | \lst@AddToHookExe{PreSet}{\global\let\lst@intname\lst@chunkname}
3219 79 | \lst@AddToHook{Init}{}%\fangle@caption}
3220 80 | %\lst@EndAspect
3221 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3222 17.2.4 Cross references
3223 We define the \chunkref command which makes it easy to generate visual references to different code chunks, e.g.
3226 \chunkref[3]{preamble}
3227 \chunkref{preamble}[arg1, arg2]
3229 Chunkref can also be used within a code chunk to include another code chunk. The third optional parameter to chunkref is a comma-separated list of arguments, which will replace defined parameters in the chunkref.
3230 Note 1. Darn it, if I have: =<\chunkref{new-mode-tracker}[{chunks[chunk_name, "language"]},{mode}]> the inner braces (inside [ ]) cause _ to signify subscript even though we have lst@ReplaceIn
3232 93a <./fangle.sty[18](
\v) ⇑88d, lang=> +≡ ⊲92d 94a⊳
3233 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3234 81 | \def\chunkref@args#1,{%
3236 83 | \lst@ReplaceIn\arg\lst@filenamerpl%
3238 85 | \@ifnextchar){\relax}{, \chunkref@args}%
3240 87 | \newcommand\chunkref[2][0]{%
3241 88 | \@ifnextchar({\chunkref@i{#1}{#2}}{\chunkref@i{#1}{#2}()}%
3243 90 | \def\chunkref@i#1#2(#3){%
3245 92 | \def\chunk{#2}%
3246 93 | \def\chunkno{#1}%
3247 94 | \def\chunkargs{#3}%
3248 95 | \ifx\chunkno\zero%
3249 96 | \def\chunkname{#2-1}%
3251 98 | \def\chunkname{#2-\chunkno}%
3253 100 | \let\lst@arg\chunk%
3254 101 | \lst@ReplaceIn\chunk\lst@filenamerpl%
3255 102 | \LA{%\moddef{%
3258 105 | \nwtagstyle{}\/%
3259 106 | \ifx\chunkno\zero%
3263 110 | \ifx\chunkargs\empty%
3265 112 | (\chunkref@args #3,)%
3267 114 | ~\subpageref{\chunkname}%
3270 117 | \RA%\endmoddef%
3272 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3275 94a <./fangle.sty[19](
\v) ⇑88d, lang=> +≡ ⊲93a
3276 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3279 |________________________________________________________________________
3282 Chapter 18 Extracting fangle
3283 18.1 Extracting from LyX
3284 To extract from LyX, you will need to configure LyX as explained in section ?.
3285 And this lyx-build scrap will extract fangle for me.
3287 95a <lyx-build[2](
\v) ⇑20a, lang=sh> +≡ ⊲20a
3288 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3292 14 | «lyx-build-helper 19b»
3293 15 | cd $PROJECT_DIR || exit 1
3295 17 | /usr/local/bin/fangle -R./fangle $TEX_SRC > ./fangle
3296 18 | /usr/local/bin/fangle -R./fangle.module $TEX_SRC > ./fangle.module
3298 20 | export FANGLE=./fangle
3299 21 | export TMP=${TMP:-/tmp}
3301 |________________________________________________________________________
3304 With a lyx-build-helper:
3306 95b <lyx-build-helper[2](
\v) ⇑19b, lang=sh> +≡ ⊲19b
3307 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3308 5 | PROJECT_DIR="$LYX_r"
3309 6 | LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
3310 7 | TEX_DIR="$LYX_p"
3311 8 | TEX_SRC="$TEX_DIR/$LYX_i"
3312 9 | TXT_SRC="$TEX_SRC"
3313 |________________________________________________________________________
3316 18.2 Extracting documentation
3318 95c <./gen-www[1](
\v), lang=> ≡
3319 ________________________________________________________________________
3320 1 | #python -m elyxer --css lyx.css $LYX_SRC | \
3321 2 | # iconv -c -f utf-8 -t ISO-8859-1//TRANSLIT | \
3322 3 | # sed 's/UTF-8"\(.\)>/ISO-8859-1"\1>/' > www/docs/fangle.html
3324 5 | python -m elyxer --css lyx.css --iso885915 --html --destdirectory www/docs/fangle.e \
3325 6 | fangle.lyx > www/docs/fangle.e/fangle.html
3327 8 | ( mkdir -p www/docs/fangle && cd www/docs/fangle && \
3328 9 | lyx -e latex ../../../fangle.lyx && \
3329 10 | htlatex ../../../fangle.tex "xhtml,fn-in" && \
3330 11 | sed -i -e 's/<!--l\. [0-9][0-9]* *-->//g' fangle.html
3333 14 | ( mkdir -p www/docs/literate && cd www/docs/literate && \
3334 15 | lyx -e latex ../../../literate.lyx && \
3335 16 | htlatex ../../../literate.tex "xhtml,fn-in" && \
3336 17 | sed -i -e 's/<!--l\. [0-9][0-9]* *-->$//g' literate.html
3338 |________________________________________________________________________
3341 18.3 Extracting from the command line
3342 First you will need the TeX output; then you can extract:
3344 96a <lyx-build-manual[1](
\v), lang=sh> ≡
3345 ________________________________________________________________________
3346 1 | lyx -e latex fangle.lyx
3347 2 | fangle -R./fangle fangle.tex > ./fangle
3348 3 | fangle -R./fangle.module fangle.tex > ./fangle.module
3349 |________________________________________________________________________
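Since fangle is itself an awk script, the extraction can also be run through awk directly if fangle has not been installed anywhere on the PATH. The lines below are only a sketch: they mirror the ${AWK} -f ${FANGLE} wrapper used by the test harness below, and the .new output names are simply chosen so that the script being run is not overwritten mid-extraction (depending on the awk in use, a -- may be needed before fangle's own options).

  # produce the TeX output, then tangle each root chunk using the awk source of fangle
  lyx -e latex fangle.lyx
  awk -f ./fangle -R./fangle fangle.tex > ./fangle.new
  awk -f ./fangle -R./fangle.module fangle.tex > ./fangle.module.new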
3356 99a <test:*[1](
\v), lang=> ≡
3357 ________________________________________________________________________
3360 3 | export SRC="${SRC:-./fangle.tm}"
3361 4 | export FANGLE="${FANGLE:-./fangle}"
3362 5 | export TMP="${TMP:-/tmp}"
3363 6 | export TESTDIR="$TMP/$USER/fangle.tests"
3364 7 | export TXT_SRC="${TXT_SRC:-$TESTDIR/fangle.txt}"
3365 8 | export AWK="${AWK:-awk}"
3366 9 | export RUN_FANGLE="${RUN_FANGLE:-$AWK -f}"
3369 12 | ${AWK} -f ${FANGLE} "$@"
3372 15 | mkdir -p "$TESTDIR"
3374 17 | tm -s -c "$SRC" "$TXT_SRC" -q
3376 19 | «test:helpers 100a»
3378 21 | «test:run-tests 99b»
3381 24 | # test current fangle
3382 25 | echo Testing current fangle
3385 28 | # extract new fangle
3386 29 | echo testing new fangle
3387 30 | fangle -R./fangle "$TXT_SRC" > "$TESTDIR/fangle"
3388 31 | export FANGLE="$TESTDIR/fangle"
3391 34 | # Now check that it can extract a fangle that also passes the tests!
3392 35 | echo testing if new fangle can generate itself
3393 36 | fangle -R./fangle "$TXT_SRC" > "$TESTDIR/fangle.new"
3394 37 | passtest diff -bwu "$FANGLE" "$TESTDIR/fangle.new"
3395 38 | export FANGLE="$TESTDIR/fangle.new"
3397 |________________________________________________________________________
3401 99b <test:run-tests[1](
\v), lang=sh> ≡
3402 ________________________________________________________________________
3404 2 | fangle -Rpca-test.awk $TXT_SRC | awk -f - || exit 1
3405 3 | «test:cromulence 59g»
3406 4 | «test:escapes 64a»
3407 5 | «test:test-chunk
\v(test:example-sh
\v) 100b»
3408 6 | «test:test-chunk
\v(test:example-makefile
\v) 100b»
3409 7 | «test:test-chunk
\v(test:q:1
\v) 100b»
3410 8 | «test:test-chunk
\v(test:make:1
\v) 100b»
3411 9 | «test:test-chunk
\v(test:make:2
\v) 100b»
3412 10 | «test:chunk-params 101e»
3413 |________________________________________________________________________
3417 100a <test:helpers[1](
\v), lang=> ≡
3418 ________________________________________________________________________
3421 3 | then echo "Passed $TEST"
3422 4 | else echo "Failed $TEST"
3429 11 | then echo "Passed $TEST"
3430 12 | else echo "Failed $TEST"
3434 |________________________________________________________________________
3437 This chunk will render a named chunk and compare it to another rendered named chunk (an expanded example of the resulting command follows the two chunks below).
3439 100b <test:test-chunk[1](chunk
\v\v), lang=sh> ≡
3440 ________________________________________________________________________
3441 1 | «test:test-chunk-result
\v(${chunk}
\v, ${chunk}.result
\v) 100c»
3442 |________________________________________________________________________
3446 100c <test:test-chunk-result[1](chunk
\v, result
\v\v), lang=sh> ≡
3447 ________________________________________________________________________
3448 1 | TEST="${result}" passtest diff -u --label "EXPECTED: ${result}" <( fangle -R${result} $TXT_SRC ) \
3449 2 | --label "ACTUAL: ${chunk}" <( fangle -R${chunk} $TXT_SRC )
3450 |________________________________________________________________________
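For illustration only, the «test:test-chunk(test:example-sh)» reference in the test runner above expands, through the two chunks just given, into roughly the following command. This is the template with its parameters substituted by hand, not verbatim fangle output:

  # compare the tangled chunk against the companion chunk named with .result appended
  TEST="test:example-sh.result" passtest diff -u --label "EXPECTED: test:example-sh.result" <( fangle -Rtest:example-sh.result $TXT_SRC ) \
      --label "ACTUAL: test:example-sh" <( fangle -Rtest:example-sh $TXT_SRC )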
3453 Chapter 20 Chunk Parameters
3456 101a <test:lyx:chunk-params:sub[1](THING
\v, colour
\v\v), lang=> ≡
3457 ________________________________________________________________________
3458 1 | I see a ${THING},
3459 2 | a ${THING} of colour ${colour},
3460 3 | and looking closer =<\chunkref{test:lyx:chunk-params:sub:sub}(${colour})>
3461 |________________________________________________________________________
3465 101b <test:lyx:chunk-params:sub:sub[1](colour
\v\v), lang=> ≡
3466 ________________________________________________________________________
3467 1 | a funny shade of ${colour}
3468 |________________________________________________________________________
3472 101c <test:lyx:chunk-params:text[1](
\v), lang=> ≡
3473 ________________________________________________________________________
3474 1 | What do you see? "=<\chunkref{test:lyx:chunk-params:sub}(joe, red)>"
3476 |________________________________________________________________________
3479 Should generate output:
3481 101d <test:lyx:chunk-params:result[1](
\v), lang=> ≡
3482 ________________________________________________________________________
3483 1 | What do you see? "I see a joe,
3484 2 | a joe of colour red,
3485 3 | and looking closer a funny shade of red"
3487 |________________________________________________________________________
3490 And this chunk will perform the test:
3492 101e <test:chunk-params[1](
\v), lang=> ≡ 102b⊳
3493 ________________________________________________________________________
3494 1 | «test:test-chunk-result
\v(test:lyx:chunk-params:text
\v, test:lyx:chunk-params:result
\v) 100c» || exit 1
3495 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3498 101f <test:chunk-params:sub[1](THING
\v, colour
\v\v), lang=> ≡
3499 ________________________________________________________________________
3500 1 | I see a ${THING},
3501 2 | a ${THING} of colour ${colour},
3502 3 | and looking closer «test:chunk-params:sub:sub
\v(${colour}
\v) 101g»
3503 |________________________________________________________________________
3507 101g <test:chunk-params:sub:sub[1](colour
\v\v), lang=> ≡
3508 ________________________________________________________________________
3509 1 | a funny shade of ${colour}
3510 |________________________________________________________________________
3514 101h <test:chunk-params:text[1](
\v), lang=> ≡
3515 ________________________________________________________________________
3516 1 | What do you see? "«test:chunk-params:sub
\v(joe
\v, red
\v) 101f»"
3518 |________________________________________________________________________
3521 Should generate output:
3523 102a <test:chunk-params:result[1](
\v), lang=> ≡
3524 ________________________________________________________________________
3525 1 | What do you see? "I see a joe,
3526 2 | a joe of colour red,
3527 3 | and looking closer a funny shade of red"
3529 |________________________________________________________________________
3532 And this chunk will perform the test:
3534 102b <test:chunk-params[2](
\v) ⇑101e, lang=> +≡ ⊲101e
3535 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
3536 2 | «test:test-chunk-result
\v(test:chunk-params:text
\v, test:chunk-params:result
\v) 100c» || exit 1
3537 |________________________________________________________________________
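The same comparison can also be made by hand when a parameter substitution needs debugging. This is just a sketch, using the fangle -R root selection as elsewhere in this chapter; $TXT_SRC is the converted source prepared by the test:* chunk:

  # tangle the parameterised text chunk and the expected result, then compare them
  diff -u <( fangle -Rtest:chunk-params:result "$TXT_SRC" ) \
          <( fangle -Rtest:chunk-params:text "$TXT_SRC" )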
3540 Chapter 21 Compile-log-lyx
3542 103a <Chunk:./compile-log-lyx[1](
\v), lang=sh> ≡
3543 ________________________________________________________________________
3545 2 | # can't use gtkdialog -i, cos it uses the "source" command which ubuntu sh doesn't have
3548 5 | errors="/tmp/compile.log.$$"
3549 6 | # if grep '^[^ ]*:\( In \|[0-9][0-9]*: [^ ]*:\)' > $errors
3550 7 | if grep '^[^ ]*(\([0-9][0-9]*\)) *: *\(error\|warning\)' > $errors
3552 9 | sed -i -e 's/^[^ ]*[/\\]\([^/\\]*\)(\([ 0-9][ 0-9]*\)) *: */\1:\2|\2|/' $errors
3553 10 | COMPILE_DIALOG='
3556 13 | <label>Compiler errors:</label>
3558 15 | <tree exported_column="0">
3559 16 | <variable>LINE</variable>
3560 17 | <height>400</height><width>800</width>
3561 18 | <label>File | Line | Message</label>
3562 19 | <action>'". $SELF ; "'lyxgoto $LINE</action>
3563 20 | <input>'"cat $errors"'</input>
3566 23 | <button><label>Build</label>
3567 24 | <action>lyxclient -c "LYXCMD:build-program" &</action>
3569 26 | <button ok></button>
3573 30 | export COMPILE_DIALOG
3574 31 | ( gtkdialog --program=COMPILE_DIALOG ; rm $errors ) &
3581 38 | file="${LINE%:*}"
3582 39 | line="${LINE##*:}"
3583 40 | extraline=`cat $file | head -n $line | tac | sed '/^\\\\begin{lstlisting}/q' | wc -l`
3584 41 | extraline=`expr $extraline - 1`
3585 42 | lyxclient -c "LYXCMD:command-sequence server-goto-file-row $file $line ; char-forward ; repeat $extraline paragraph-down ; paragraph-up-select"
3589 46 | if test -z "$COMPILE_DIALOG"
3592 |________________________________________________________________________