YAML is a human-readable data serialization language. The full YAML language
spec can be read at `yaml.org
<http://www.yaml.org/spec/1.2/spec.html#Introduction>`_. The simplest forms of
YAML are just "scalars", "mappings", and "sequences". A scalar is any number
or string. The pound/hash symbol (#) begins a comment line. A mapping is
a set of key-value pairs where the key ends with a colon. For example:
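A minimal illustration, with values of our own choosing:

```yaml
# a mapping of two key-value pairs
name: Tom
hat-size: 7
```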
A sequence is a list of items where each item starts with a leading dash ('-').
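For instance, a sequence of three scalars (illustrative values):

```yaml
# a sequence of three CPU names
- x86
- x86_64
- PowerPC
```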
You can combine mappings and sequences by indenting. For example, a sequence
of mappings in which one of the mapping values is itself a sequence:
# a sequence of mappings with one key's value being a sequence
Sometimes sequences are known to be short and one entry per line is too
verbose, so YAML offers an alternate syntax for sequences called a "Flow
Sequence", in which you put comma-separated sequence elements into square
brackets. The above example could then be simplified to:
# a sequence of mappings with one key's value being a flow sequence
cpus: [ PowerPC, x86 ]
Introduction to YAML I/O
========================
The use of indenting makes the YAML easy for a human to read and understand,
but having a program read and write YAML involves a lot of tedious details.
The YAML I/O library structures and simplifies reading and writing YAML
documents.
YAML I/O assumes you have some "native" data structures which you want to be
able to dump as YAML and recreate from YAML. The first step is to try
writing example YAML for your data structures. You may find after looking at
possible YAML representations that a direct mapping of your data structures
to YAML is not very readable. Often the fields are not in the order that
a human would find readable. Or the same information is replicated in multiple
locations, making it hard for a human to write such YAML correctly.
In relational database theory there is a design step called normalization in
which you reorganize fields and tables. The same considerations need to
go into the design of your YAML encoding. But you may not want to change
your existing native data structures. Therefore, when writing out YAML
there may be a normalization step, and when reading YAML there would be a
corresponding denormalization step.
YAML I/O uses a non-invasive, traits-based design. YAML I/O defines some
abstract base templates. You specialize those templates on your data types.
For instance, if you have an enumerated type FooBar you could specialize
ScalarEnumerationTraits on that type and define the enumeration() method:
using llvm::yaml::ScalarEnumerationTraits;
using llvm::yaml::IO;

template <>
struct ScalarEnumerationTraits<FooBar> {
  static void enumeration(IO &io, FooBar &value) {
As with all YAML I/O template specializations, ScalarEnumerationTraits is used for
both reading and writing YAML. That is, the mapping between in-memory enum
values and the YAML string representation is defined in only one place.
This assures that the code for writing and parsing of YAML stays in sync.
To specify a YAML mapping, you define a specialization of
llvm::yaml::MappingTraits.
If your native data structure happens to be a struct that is already normalized,
then the specialization is simple. For example:
using llvm::yaml::MappingTraits;
using llvm::yaml::IO;

template <>
struct MappingTraits<Person> {
  static void mapping(IO &io, Person &info) {
    io.mapRequired("name", info.name);
    io.mapOptional("hat-size", info.hatSize);
A YAML sequence is automatically inferred if your data type has begin()/end()
iterators and a push_back() method. Therefore any of the STL containers
(such as std::vector<>) will automatically translate to YAML sequences.
Once you have defined specializations for your data types, you can
programmatically use YAML I/O to write a YAML document:
using llvm::yaml::Output;

std::vector<Person> persons;
persons.push_back(tom);
persons.push_back(dan);

Output yout(llvm::outs());
yout << persons;
This would write the following:
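Assuming tom and dan were initialized with the names "Tom" and "Dan" and hat
sizes 8 and 7 (values we are supplying for illustration), the output has this
shape:

```yaml
---
- name: Tom
  hat-size: 8
- name: Dan
  hat-size: 7
...
```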
And you can also read such YAML documents with the following code:
using llvm::yaml::Input;

typedef std::vector<Person> PersonList;
std::vector<PersonList> docs;

Input yin(document.getBuffer());
yin >> docs;

// Process read document
for ( PersonList &pl : docs ) {
  for ( Person &person : pl ) {
    cout << "name=" << person.name;
One other feature of YAML is the ability to define multiple documents in a
single file. That is why reading YAML produces a vector of your document type.
When parsing a YAML document, if the input does not match your schema (as
expressed in your XxxTraits<> specializations), YAML I/O
will print out an error message and your Input object's error() method will
return true. For instance, the following document:
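Such a document might look like this (our reconstruction; the schema here is
assumed to define only the ``name`` and ``hat-size`` keys):

```yaml
---
name:      Tom
shoe-size: 12
hat-size:  8
...
```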
Has a key (shoe-size) that is not defined in the schema. YAML I/O will
automatically generate this error:
YAML:2:2: error: unknown key 'shoe-size'
Similar errors are produced for other input not conforming to the schema.
YAML scalars are just strings (i.e. not a sequence or mapping). The YAML I/O
library provides support for translating between YAML scalars and specific
C++ types.
The following types have built-in support in YAML I/O:
That is, you can use those types in fields of MappingTraits or as the element
type of a sequence. When reading, YAML I/O will validate that the string found
is convertible to that type and error out if not.
Given that YAML I/O is trait based, the selection of how to convert your data
to YAML is based on the type of your data. But in C++ type matching, typedefs
do not generate unique type names. That means if you have two typedefs of
unsigned int, to YAML I/O both types look exactly like unsigned int. To
facilitate unique type names, YAML I/O provides a macro which is used
like a typedef on built-in types, but expands to create a class with conversion
operators to and from the base type. For example:
LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyFooFlags)
LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyBarFlags)
This generates two classes, MyFooFlags and MyBarFlags, which you can use in your
native data structures instead of uint32_t. They are implicitly
converted to and from uint32_t. The point of creating these unique types
is that you can now specify traits on them to get different YAML conversions.
An example use of a unique type is that YAML I/O provides fixed-sized unsigned
integers that are written as hexadecimal instead of the decimal
format used by the built-in integer types:
You can use llvm::yaml::Hex32 instead of uint32_t, and the only difference will
be that when YAML I/O writes out that type it will be formatted in hexadecimal.
ScalarEnumerationTraits
-----------------------
YAML I/O supports translating between in-memory enumerations and a set of string
values in YAML documents. This is done by specializing ScalarEnumerationTraits<>
on your enumeration type and defining an enumeration() method.
For instance, suppose you had an enumeration of CPUs and a struct with it as
a field:
To support reading and writing of this enumeration, you can define a
ScalarEnumerationTraits specialization on CPUs, which can then be used
as a field type:
using llvm::yaml::ScalarEnumerationTraits;
using llvm::yaml::MappingTraits;
using llvm::yaml::IO;

template <>
struct ScalarEnumerationTraits<CPUs> {
  static void enumeration(IO &io, CPUs &value) {
    io.enumCase(value, "x86_64", cpu_x86_64);
    io.enumCase(value, "x86", cpu_x86);
    io.enumCase(value, "PowerPC", cpu_PowerPC);
  }
};

template <>
struct MappingTraits<Info> {
  static void mapping(IO &io, Info &info) {
    io.mapRequired("cpu", info.cpu);
    io.mapOptional("flags", info.flags, 0);
When reading YAML, if the string found does not match any of the strings
specified by enumCase() methods, an error is automatically generated.
When writing YAML, if the value being written does not match any of the values
specified by the enumCase() methods, a runtime assertion is triggered.
Another common data structure in C++ is a field where each bit has a unique
meaning. This is often used in a "flags" field. YAML I/O has support for
converting such fields to a flow sequence. For instance, suppose you
had the following bit flags defined:
LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyFlags)
To support reading and writing of MyFlags, you specialize ScalarBitSetTraits<>
on MyFlags and provide the bit values and their names.
using llvm::yaml::ScalarBitSetTraits;
using llvm::yaml::MappingTraits;
using llvm::yaml::IO;

template <>
struct ScalarBitSetTraits<MyFlags> {
  static void bitset(IO &io, MyFlags &value) {
    io.bitSetCase(value, "hollow", flagHollow);
    io.bitSetCase(value, "flat", flagFlat);
    io.bitSetCase(value, "round", flagRound);
    io.bitSetCase(value, "pointy", flagPointy);
template <>
struct MappingTraits<Info> {
  static void mapping(IO &io, Info &info) {
    io.mapRequired("name", info.name);
    io.mapRequired("flags", info.flags);
With the above, when writing, YAML I/O will test each value in the
bitset trait against the flags field, and each one that matches will
cause the corresponding string to be added to the flow sequence. The opposite
is done when reading, and any unknown string values will result in an error. With
the above schema, a sample valid YAML document is:
flags: [ pointy, flat ]
Sometimes a "flags" field might contain an enumeration part
defined by a bit-mask.
To support reading and writing such fields, you need to use the maskedBitSet()
method and provide the bit values, their names, and the enumeration mask.
template <>
struct ScalarBitSetTraits<MyFlags> {
  static void bitset(IO &io, MyFlags &value) {
    io.bitSetCase(value, "featureA", flagsFeatureA);
    io.bitSetCase(value, "featureB", flagsFeatureB);
    io.bitSetCase(value, "featureC", flagsFeatureC);
    io.maskedBitSetCase(value, "CPU1", flagsCPU1, flagsCPUMask);
    io.maskedBitSetCase(value, "CPU2", flagsCPU2, flagsCPUMask);
When writing, YAML I/O will apply the enumeration mask to the flags field
and compare the result against the values from the bitset. As in the case of
a regular bitset, each one that matches will cause the corresponding string
to be added to the flow sequence.
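For instance, a flags value combining flagsFeatureA and flagsCPU2 (using the
flag names from the example above) could serialize as:

```yaml
flags: [ featureA, CPU2 ]
```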
Sometimes, for readability, a scalar needs to be formatted in a custom way. For
instance, your internal data structure may use an integer for time (seconds since
some epoch), but in YAML it would be much nicer to express that integer in
some time format (e.g. 4-May-2012 10:30pm). YAML I/O has a way to support
custom formatting and parsing of scalar types by specializing ScalarTraits<> on
your data type. When writing, YAML I/O will provide the native type and
your specialization must create a temporary llvm::StringRef. When reading,
YAML I/O will provide an llvm::StringRef of the scalar and your specialization
must convert that to your native data type. An outline of a custom scalar type
looks like:
using llvm::yaml::ScalarTraits;
using llvm::yaml::IO;

template <>
struct ScalarTraits<MyCustomType> {
  static void output(const MyCustomType &value, void*,
                     llvm::raw_ostream &out) {
    out << value; // do custom formatting here
  }
  static StringRef input(StringRef scalar, void*, MyCustomType &value) {
    // do custom parsing here. Return the empty string on success,
    // or an error message on failure.
  }
  // Determine if this scalar needs quotes.
  static QuotingType mustQuote(StringRef) { return QuotingType::Single; }
YAML block scalars are string literals that are represented in YAML using the
literal block notation, just like the example shown below:
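An illustrative block scalar (content of our choosing):

```yaml
text: |
  First line
  Second line
```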
The YAML I/O library provides support for translating between YAML block scalars
and specific C++ types by allowing you to specialize BlockScalarTraits<> on
your data type. The library doesn't provide any built-in support for block
scalar I/O for types like std::string and llvm::StringRef as they are already
supported by YAML I/O and use the ordinary scalar notation by default.
BlockScalarTraits specializations are very similar to
ScalarTraits specializations: when writing, YAML I/O will provide the native
type and your specialization must create a temporary llvm::StringRef; when
reading, it will provide an llvm::StringRef that has the value of that block
scalar and your specialization must convert that to your native data type.
An example of a custom type with an appropriate specialization of
BlockScalarTraits is shown below:
using llvm::yaml::BlockScalarTraits;
using llvm::yaml::IO;

struct MyStringType {
  std::string Str;
};

template <>
struct BlockScalarTraits<MyStringType> {
  static void output(const MyStringType &Value, void *Ctxt,
                     llvm::raw_ostream &OS) {
    OS << Value.Str;
  }
  static StringRef input(StringRef Scalar, void *Ctxt,
                         MyStringType &Value) {
    Value.Str = Scalar.str();
To be translated to or from a YAML mapping for your type T, you must specialize
llvm::yaml::MappingTraits on T and implement the "void mapping(IO &io, T&)"
method. If your native data structures use pointers to a class everywhere,
you can specialize on the class pointer. Examples:
using llvm::yaml::MappingTraits;
using llvm::yaml::IO;

// Example of struct Foo which is used by value
template <>
struct MappingTraits<Foo> {
  static void mapping(IO &io, Foo &foo) {
    io.mapOptional("size", foo.size);
  }
};

// Example of struct Bar which is natively always a pointer
template <>
struct MappingTraits<Bar*> {
  static void mapping(IO &io, Bar *&bar) {
    io.mapOptional("size", bar->size);
The mapping() method is responsible, if needed, for normalizing and
denormalizing. In a simple case where the native data structure requires no
normalization, the mapping method just uses mapOptional() or mapRequired() to
bind the struct's fields to YAML key names. For example:
using llvm::yaml::MappingTraits;
using llvm::yaml::IO;

template <>
struct MappingTraits<Person> {
  static void mapping(IO &io, Person &info) {
    io.mapRequired("name", info.name);
    io.mapOptional("hat-size", info.hatSize);
When [de]normalization is required, the mapping() method needs a way to access
normalized values as fields. To help with this, there is
a template MappingNormalization<> which you can use to automatically
do the normalization and denormalization. The template is used to create
a local variable in your mapping() method which contains the normalized keys.
Suppose you have a native data type
Polar which specifies a position in polar coordinates (distance, angle):
but you've decided the normalized YAML form should be in x,y coordinates. That
is, you want the YAML to look like:
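For example, with coordinate values of our own choosing:

```yaml
---
x: 10.3
y: -4.7
...
```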
You can support this by defining a MappingTraits that normalizes the polar
coordinates to x,y coordinates when writing YAML and denormalizes x,y
coordinates into polar when reading YAML.
using llvm::yaml::MappingTraits;
using llvm::yaml::IO;

template <>
struct MappingTraits<Polar> {

  class NormalizedPolar {
  public:
    NormalizedPolar(IO &io)
      : x(0.0), y(0.0) {
    }
    NormalizedPolar(IO &, Polar &polar)
      : x(polar.distance * cos(polar.angle)),
        y(polar.distance * sin(polar.angle)) {
    }
    Polar denormalize(IO &) {
      return Polar(sqrt(x*x + y*y), atan2(y, x));
    }

    float x;
    float y;
  };

  static void mapping(IO &io, Polar &polar) {
    MappingNormalization<NormalizedPolar, Polar> keys(io, polar);

    io.mapRequired("x", keys->x);
    io.mapRequired("y", keys->y);
When writing YAML, the local variable "keys" will be a stack allocated
instance of NormalizedPolar, constructed from the supplied polar object, which
initializes its x and y fields. The mapRequired() methods then write out the x
and y values as key/value pairs.
When reading YAML, the local variable "keys" will be a stack allocated instance
of NormalizedPolar, constructed by the empty constructor. The mapRequired
methods will find the matching key in the YAML document and fill in the x and y
fields of the NormalizedPolar object keys. At the end of the mapping() method,
when the local keys variable goes out of scope, the denormalize() method will
automatically be called to convert the read values back to polar coordinates,
and then assigned back to the second parameter to mapping().
In some cases, the normalized class may be a subclass of the native type and
could be returned by the denormalize() method, except that the temporary
normalized instance is stack allocated. In these cases, the utility template
MappingNormalizationHeap<> can be used instead. It is just like
MappingNormalization<> except that it heap allocates the normalized object
when reading YAML. It never destroys the normalized object. The denormalize()
method can then return "this".
Within a mapping() method, calls to io.mapRequired() mean that that key is
required to exist when parsing YAML documents, otherwise YAML I/O will issue
an error.
On the other hand, keys registered with io.mapOptional() are allowed to not
exist in the YAML document being read. So what value is put in the field
for those optional keys?
There are two steps to how those optional fields are filled in. First, the
second parameter to the mapping() method is a reference to a native class. That
native class must have a default constructor. Whatever value the default
constructor initially sets for an optional field will be that field's value.
Second, the mapOptional() method has an optional third parameter. If provided,
it is the value that mapOptional() should set that field to if the YAML document
does not have that key.
There is one important difference between those two ways (default constructor
and third parameter to mapOptional). When YAML I/O generates a YAML document,
if the third parameter to mapOptional() is used and the actual value being
written is the same as the default value (using ==), then that key/value pair
is not written.
When writing out a YAML document, the keys are written in the order that the
calls to mapRequired()/mapOptional() are made in the mapping() method. This
gives you a chance to write the fields in an order that a human reader of
the YAML document would find natural. This may be different from the order
of the fields in the native class.
When reading in a YAML document, the keys in the document can be in any order,
but they are processed in the order that the calls to mapRequired()/mapOptional()
are made in the mapping() method. That enables some interesting
functionality. For instance, if the first field bound is the cpu and the second
field bound is flags, and the flags are cpu-specific, you can programmatically
switch how the flags are converted to and from YAML based on the cpu.
This works for both reading and writing. For example:
using llvm::yaml::MappingTraits;
using llvm::yaml::IO;

template <>
struct MappingTraits<Info> {
  static void mapping(IO &io, Info &info) {
    io.mapRequired("cpu", info.cpu);
    // flags must come after cpu for this to work when reading yaml
    if ( info.cpu == cpu_x86_64 )
      io.mapRequired("flags", *(My86_64Flags*)info.flags);
    else
      io.mapRequired("flags", *(My86Flags*)info.flags);
The YAML syntax supports tags as a way to specify the type of a node before
it is parsed. This allows dynamic types of nodes. But the YAML I/O model uses
static typing, so there are limits to how you can use tags with the YAML I/O
model. Recently, we added support to YAML I/O for checking/setting the optional
tag on a map. Using this functionality it is even possible to support different
mappings, as long as they are convertible.
To check a tag, inside your mapping() method you can use io.mapTag() to specify
what the tag should be. This will also add that tag when writing YAML.
Sometimes in a YAML map, each key/value pair is valid, but the combination is
not. This is similar to something having no syntax errors, but still having
semantic errors. To support semantic-level checking, YAML I/O allows
an optional ``validate()`` method in a MappingTraits template specialization.

When parsing YAML, the ``validate()`` method is called *after* all key/values in
the map have been processed. Any error message returned by the ``validate()``
method during input will be printed just like a syntax error would be printed.
When writing YAML, the ``validate()`` method is called *before* the YAML
key/values are written. Any error during output will trigger an ``assert()``
because it is a programming error to have invalid struct values.
using llvm::yaml::MappingTraits;
using llvm::yaml::IO;

template <>
struct MappingTraits<Stuff> {
  static void mapping(IO &io, Stuff &stuff) {
    ...
  }
  static StringRef validate(IO &io, Stuff &stuff) {
    // Look at all fields in 'stuff' and if there
    // are any bad values return a string describing
    // the error. Otherwise return an empty string.
A YAML "flow mapping" is a mapping that uses the inline notation
(e.g. { x: 1, y: 0 }) when written to YAML. To specify that a type should be
written in YAML using flow mapping, your MappingTraits specialization should
add "static const bool flow = true;". For instance:
using llvm::yaml::MappingTraits;
using llvm::yaml::IO;

template <>
struct MappingTraits<Stuff> {
  static void mapping(IO &io, Stuff &stuff) {
    ...
  }

  static const bool flow = true;
Flow mappings are subject to line wrapping according to the Output object
configuration.
To be translated to or from a YAML sequence for your type T you must specialize
llvm::yaml::SequenceTraits on T and implement two methods:
``size_t size(IO &io, T&)`` and
``T::value_type& element(IO &io, T&, size_t indx)``. For example:
template <>
struct SequenceTraits<MySeq> {
  static size_t size(IO &io, MySeq &list) { ... }
  static MySeqEl &element(IO &io, MySeq &list, size_t index) { ... }
The size() method returns how many elements are currently in your sequence.
The element() method returns a reference to the i'th element in the sequence.
When parsing YAML, the element() method may be called with an index one bigger
than the current size. Your element() method should allocate space for one
more element (using the default constructor if the element is a C++ object) and
return a reference to that newly allocated space.
A YAML "flow sequence" is a sequence that uses the inline notation
(e.g. [ foo, bar ]) when written to YAML. To specify that a sequence type should
be written in YAML as a flow sequence, your SequenceTraits specialization should
add "static const bool flow = true;". For instance:
template <>
struct SequenceTraits<MyList> {
  static size_t size(IO &io, MyList &list) { ... }
  static MyListEl &element(IO &io, MyList &list, size_t index) { ... }

  // The existence of this member causes YAML I/O to use a flow sequence
  static const bool flow = true;
With the above, if you used MyList as the data type in your native data
structures, then when converted to YAML, a flow sequence of integers
will be used (e.g. [ 10, -3, 4 ]).
Flow sequences are subject to line wrapping according to the Output object
configuration.
Since a common source of sequences is std::vector<>, YAML I/O provides macros:
LLVM_YAML_IS_SEQUENCE_VECTOR() and LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR(), which
can be used to easily specify SequenceTraits<> on a std::vector type. YAML
I/O does not partially specialize SequenceTraits on std::vector<> because that
would force all vectors to be sequences. An example use of the macros:
std::vector<MyType1>;
std::vector<MyType2>;
LLVM_YAML_IS_SEQUENCE_VECTOR(MyType1)
LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR(MyType2)
YAML allows you to define multiple "documents" in a single YAML file. Each
new document starts with a left-aligned "---" token. The end of all documents
is denoted with a left-aligned "..." token. Many users of YAML will never
have need for multiple documents. The top level node in their YAML schema
will be a mapping or sequence. For those cases, the following is not needed.
But for cases where you do want multiple documents, you can specify a
trait for your document list type. The trait has the same methods as
SequenceTraits but is named DocumentListTraits. For example:
template <>
struct DocumentListTraits<MyDocList> {
  static size_t size(IO &io, MyDocList &list) { ... }
  static MyDocType &element(IO &io, MyDocList &list, size_t index) { ... }
When an llvm::yaml::Input or llvm::yaml::Output object is created, its
constructor takes an optional "context" parameter. This is a pointer to
whatever state information you might need.
For instance, in a previous example we showed how the conversion type for a
flags field could be determined at runtime based on the value of another field
in the mapping. But what if an inner mapping needs to know some field value
of an outer mapping? That is where the "context" parameter comes in. You
can set values in the context in the outer map's mapping() method and
retrieve those values in the inner map's mapping() method.
The context value is just a void*. All your traits which use the context
and operate on your native data types need to agree on what the context value
actually is. It could be a pointer to an object or struct which your various
traits use to share context-sensitive information.
The llvm::yaml::Output class is used to generate a YAML document from your
in-memory data structures, using traits defined on your data types.
To instantiate an Output object you need an llvm::raw_ostream, an optional
context pointer, and an optional wrapping column:
class Output : public IO {
public:
  Output(llvm::raw_ostream &, void *context = NULL, int WrapColumn = 70);
Once you have an Output object, you can use the C++ stream operator on it
to write your native data as YAML. One thing to recall is that a YAML file
can contain multiple "documents". If the top level data structure you are
streaming as YAML is a mapping, scalar, or sequence, then Output assumes you
are generating one document and wraps the mapping output
with "``---``" and trailing "``...``".
The WrapColumn parameter will cause the flow mappings and sequences to
line-wrap when they go over the supplied column. Pass 0 to completely
suppress the wrapping.
using llvm::yaml::Output;

void dumpMyMapDoc(const MyMapType &info) {
  Output yout(llvm::outs());
  yout << info;
}
The above could produce output like:
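Assuming MyMapType carries a name and a hat size (illustrative fields of our
choosing), the output might look like:

```yaml
---
name: Tom
hat-size: 7
...
```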
On the other hand, if the top level data structure you are streaming as YAML
has a DocumentListTraits specialization, then Output walks through each element
of your DocumentList and generates a "---" before the start of each element
and ends with a "...".
using llvm::yaml::Output;

void dumpMyMapDoc(const MyDocListType &docList) {
  Output yout(llvm::outs());
  yout << docList;
}
The above could produce output like:
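Assuming a two-element document list whose documents each carry a name field
(illustrative values of our choosing), the output might look like:

```yaml
---
name: Tom
...
---
name: Dan
...
```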
The llvm::yaml::Input class is used to parse YAML document(s) into your native
data structures. To instantiate an Input
object you need a StringRef to the entire YAML file, and optionally a context
pointer:
class Input : public IO {
public:
  Input(StringRef inputContent, void *context = NULL);
Once you have an Input object, you can use the C++ stream operator to read
the document(s). If you expect there might be multiple YAML documents in
one file, you'll need to specialize DocumentListTraits on a list of your
document type and stream in that document list type. Otherwise, you can
just stream in the document type. Also, you can check if there were
any syntax errors in the YAML by calling the error() method on the Input
object. For example:
// Reading a single document
using llvm::yaml::Input;

Input yin(mb.getBuffer());

// Parse the YAML file
MyDocType theDoc;
yin >> theDoc;
// Reading multiple documents in one file
using llvm::yaml::Input;

LLVM_YAML_IS_DOCUMENT_LIST_VECTOR(MyDocType)

Input yin(mb.getBuffer());

// Parse the YAML file
std::vector<MyDocType> theDocList;
yin >> theDocList;