1 =============================================================================
2 Python Unicode Integration Proposal Version: 1.7
3 -----------------------------------------------------------------------------
The idea of this proposal is to add native Unicode 3.0 support to
Python in a way that makes the use of Unicode strings as simple as
possible without introducing too many pitfalls along the way.
13 Since this goal is not easy to achieve -- strings being one of the
14 most fundamental objects in Python --, we expect this proposal to
15 undergo some significant refinements.
17 Note that the current version of this proposal is still a bit unsorted
18 due to the many different aspects of the Unicode-Python integration.
20 The latest version of this document is always available at:
22 http://starship.python.net/~lemburg/unicode-proposal.txt
24 Older versions are available as:
26 http://starship.python.net/~lemburg/unicode-proposal-X.X.txt
32 · In examples we use u = Unicode object and s = Python string
34 · 'XXX' markings indicate points of discussion (PODs)
40 · Unicode encoding names should be lower case on output and
41 case-insensitive on input (they will be converted to lower case
42 by all APIs taking an encoding name as input).
· Encoding names should follow the name conventions as used by the
Unicode Consortium: spaces are converted to hyphens, e.g. 'utf 16'
is converted to 'utf-16'.
48 · Codec modules should use the same names, but with hyphens converted
49 to underscores, e.g. utf_8, utf_16, iso_8859_1.
52 Unicode Default Encoding:
53 -------------------------
The Unicode implementation has to make some assumptions about the
encoding of 8-bit strings passed to it for coercion and about the
encoding to use as default for the conversion of Unicode to strings
when no specific encoding is given. This encoding is called the
<default encoding> throughout this text.
For this, the implementation maintains a global which can be set in
the site.py Python startup script. Subsequent changes are not
possible. The <default encoding> can be set and queried using the
following sys module APIs:
66 sys.setdefaultencoding(encoding)
67 --> Sets the <default encoding> used by the Unicode implementation.
encoding has to be an encoding which is supported by the Python
installation, otherwise a LookupError is raised.

Note: This API is only available in site.py! It is removed
from the sys module by site.py after usage.
74 sys.getdefaultencoding()
75 --> Returns the current <default encoding>.
77 If not otherwise defined or set, the <default encoding> defaults to
78 'ascii'. This encoding is also the startup default of Python (and in
79 effect before site.py is executed).
Note that the default site.py startup module contains disabled
optional code which can set the <default encoding> according to the
encoding defined by the current locale. The locale module is used to
extract the encoding from the locale default settings defined by the
OS environment (see locale.py). If the encoding cannot be determined,
is unknown or unsupported, the code defaults to setting the <default
encoding> to 'ascii'. To enable this code, edit the site.py file or
place the appropriate code into the sitecustomize.py module of your
Python installation.
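As an illustration, here is a minimal sketch (not the actual site.py
code) of what such locale-based setup code could look like:

    import locale, sys

    # Try to derive the <default encoding> from the locale settings;
    # fall back to 'ascii' if no usable encoding can be determined.
    encoding = locale.getdefaultlocale()[1]
    if encoding:
        try:
            sys.setdefaultencoding(encoding)
        except LookupError:
            # encoding is unknown or unsupported by this installation
            sys.setdefaultencoding('ascii')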
95 Python should provide a built-in constructor for Unicode strings which
96 is available through __builtins__:
98 u = unicode(encoded_string[,encoding=<default encoding>][,errors="strict"])
100 u = u'<unicode-escape encoded Python string>'
102 u = ur'<raw-unicode-escape encoded Python string>'
104 With the 'unicode-escape' encoding being defined as:
106 · all non-escape characters represent themselves as Unicode ordinal
107 (e.g. 'a' -> U+0061).
109 · all existing defined Python escape sequences are interpreted as
110 Unicode ordinals; note that \xXXXX can represent all Unicode
111 ordinals, and \OOO (octal) can represent Unicode ordinals up to U+01FF.
113 · a new escape sequence, \uXXXX, represents U+XXXX; it is a syntax
114 error to have fewer than 4 digits after \u.
For an explanation of possible values for errors see the Codec section
below.

Examples:
121 u'abc' -> U+0061 U+0062 U+0063
u'abc\u1234\n' -> U+0061 U+0062 U+0063 U+1234 U+000A
125 The 'raw-unicode-escape' encoding is defined as follows:
· \uXXXX sequences represent the U+XXXX Unicode character if and
only if the number of leading backslashes is odd
130 · all other characters represent themselves as Unicode ordinal
Note that you should provide some hint about the encoding you used to
write your programs as a pragma line in one of the first few comment
lines of the source file (e.g. '# source file encoding: latin-1'). If
you only use 7-bit ASCII then everything is fine and no such notice is
needed, but if you include Latin-1 characters not defined in ASCII, it
may well be worthwhile including a hint since people in other
countries will want to be able to read your source strings too.
146 Unicode objects should have the type UnicodeType with type name
147 'unicode', made available through the standard types module.
Unicode Output:
---------------

Unicode objects have a method .encode([encoding=<default encoding>])
which returns a Python string encoding the Unicode string using the
given scheme (see Codecs).
157 print u := print u.encode() # using the <default encoding>
159 str(u) := u.encode() # using the <default encoding>
161 repr(u) := "u%s" % repr(u.encode('unicode-escape'))
163 Also see Internal Argument Parsing and Buffer Interface for details on
164 how other APIs written in C will treat Unicode objects.
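A few illustrative examples (a sketch of the intended behaviour):

    u = u'abc\u1234'
    s = u.encode('utf-8')       # explicit encoding -> Python string
    t = str(u'abc')             # uses the <default encoding>
    print u'abc'                # also uses the <default encoding>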
170 Since Unicode 3.0 has a 32-bit ordinal character set, the implementation
171 should provide 32-bit aware ordinal conversion APIs:
ord(u[:1])  (this is the standard ord() extended to work with Unicode
            objects)
    --> Unicode ordinal number (32-bit)

unichr(i)
    --> Unicode object for character i (provided it is 32-bit);
        ValueError otherwise
181 Both APIs should go into __builtins__ just like their string
182 counterparts ord() and chr().
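For example:

    assert ord(u'a') == 0x61
    assert unichr(0x1234) == u'\u1234'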
Note that Unicode provides space for private encodings. Usage of these
can cause different output representations on different machines. This
problem is not a Python or Unicode problem, but a machine setup and
maintenance one.
190 Comparison & Hash Value:
191 ------------------------
Unicode objects should compare equal to other objects after these
other objects have been coerced to Unicode. For strings this means
that they are interpreted as Unicode string using the <default
encoding>.
198 Unicode objects should return the same hash value as their ASCII
199 equivalent strings. Unicode strings holding non-ASCII values are not
200 guaranteed to return the same hash values as the default encoded
201 equivalent string representation.
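A short sketch of the intended semantics:

    u = u'abc'
    s = 'abc'
    assert u == s                # s is coerced using the <default encoding>
    assert hash(u) == hash(s)    # ASCII contents hash identically
    # For non-ASCII contents, hash(u'...') need not match the hash of
    # any particular encoded string version.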
When compared using cmp() (or PyObject_Compare()) the implementation
should mask TypeErrors raised during the conversion to remain in synch
with the string behavior. All other errors such as ValueErrors raised
during coercion of strings to Unicode should not be masked and should
be passed through to the user.
209 In containment tests ('a' in u'abc' and u'a' in 'abc') both sides
210 should be coerced to Unicode before applying the test. Errors occurring
211 during coercion (e.g. None in u'abc') should not be masked.
217 Using Python strings and Unicode objects to form new objects should
218 always coerce to the more precise format, i.e. Unicode objects.
220 u + s := u + unicode(s)
222 s + u := unicode(s) + u
All string methods should delegate the call to an equivalent Unicode
object method call by converting all involved strings to Unicode and
then applying the arguments to the Unicode method of the same name, e.g.:
229 string.join((s,u),sep) := (s + sep) + u
231 sep.join((s,u)) := (s + sep) + u
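For example:

    r = u'abc' + 'def'             # 'def' is coerced to Unicode first
    assert r == u'abcdef'
    assert ', '.join(('abc', u'def')) == u'abc, def'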
For a discussion of %-formatting w/r to Unicode objects, see
Formatting Markers below.
240 UnicodeError is defined in the exceptions module as a subclass of
241 ValueError. It is available at the C level via PyExc_UnicodeError.
242 All exceptions related to Unicode encoding/decoding should be
243 subclasses of UnicodeError.
246 Codecs (Coder/Decoders) Lookup:
247 -------------------------------
249 A Codec (see Codec Interface Definition) search registry should be
250 implemented by a module "codecs":
252 codecs.register(search_function)
254 Search functions are expected to take one argument, the encoding name
255 in all lower case letters and with hyphens and spaces converted to
256 underscores, and return a tuple of functions (encoder, decoder,
257 stream_reader, stream_writer) taking the following arguments:
encoder and decoder:
    These must be functions or methods which have the same
    interface as the .encode/.decode methods of Codec instances
    (see Codec Interface). The functions/methods are expected to
    work in a stateless mode.
stream_reader and stream_writer:
    These need to be factory functions with the following
    interface:

        factory(stream,errors='strict')
271 The factory functions must return objects providing
272 the interfaces defined by StreamWriter/StreamReader resp.
273 (see Codec Interface). Stream codecs can maintain state.
Possible values for errors are defined in the Codec section below.
In case a search function cannot find a given encoding, it should
return None.
Aliasing support for encodings is left to the search functions
to implement.
284 The codecs module will maintain an encoding cache for performance
285 reasons. Encodings are first looked up in the cache. If not found, the
286 list of registered search functions is scanned. If no codecs tuple is
287 found, a LookupError is raised. Otherwise, the codecs tuple is stored
288 in the cache and returned to the caller.
290 To query the Codec instance the following API should be used:
292 codecs.lookup(encoding)
294 This will either return the found codecs tuple or raise a LookupError.
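To illustrate the registry protocol, here is a hedged sketch of a
search function for a hypothetical codec module mycodec (both the
module name and its contents are made up for the example):

    import codecs

    def mycodec_search(encoding):
        # encoding arrives lower-cased, with hyphens and spaces mapped
        # to underscores by the lookup machinery
        if encoding != 'mycodec':
            return None                    # let other search functions try
        import mycodec                     # hypothetical codec module
        return (mycodec.encode, mycodec.decode,
                mycodec.StreamReader, mycodec.StreamWriter)

    codecs.register(mycodec_search)
    encoder, decoder, reader, writer = codecs.lookup('mycodec')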
Standard codecs should live inside an encodings/ package directory in the
Standard Python Code Library. The __init__.py file of that directory should
include a Codec Lookup compatible search function implementing a lazy module
based codec lookup.

Python should provide a few standard codecs for the most relevant
encodings, e.g.:
308 'utf-8': 8-bit variable length encoding
309 'utf-16': 16-bit variable length encoding (little/big endian)
310 'utf-16-le': utf-16 but explicitly little endian
311 'utf-16-be': utf-16 but explicitly big endian
312 'ascii': 7-bit ASCII codepage
313 'iso-8859-1': ISO 8859-1 (Latin 1) codepage
314 'unicode-escape': See Unicode Constructors for a definition
315 'raw-unicode-escape': See Unicode Constructors for a definition
316 'native': Dump of the Internal Format used by Python
Common aliases should also be provided per default, e.g. 'latin-1'
for 'iso-8859-1'.
321 Note: 'utf-16' should be implemented by using and requiring byte order
322 marks (BOM) for file input/output.
324 All other encodings such as the CJK ones to support Asian scripts
325 should be implemented in separate packages which do not get included
326 in the core Python distribution and are not a part of this proposal.
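Typical use of these standard codecs would then look like this (a
sketch; the exact byte values depend on the codec):

    u = unicode('abc', 'ascii')            # decode a string to Unicode
    s = u'abc\u1234'.encode('utf-8')       # encode Unicode as UTF-8
    v = unicode(s, 'utf-8')                # and back again
    t = u'\u00e9'.encode('latin-1')        # using the 'latin-1' alias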
329 Codecs Interface Definition:
330 ----------------------------
The following base classes should be defined in the module
"codecs". They provide not only templates for use by encoding module
implementors, but also define the interface which is expected by the
Unicode implementation.
Note that the Codec Interface defined here is well suited for a
larger range of applications. The Unicode implementation expects
Unicode objects on input for .encode() and .write() and character
buffer compatible objects on input for .decode(). Output of .encode()
and .read() should be a Python string and .decode() must return an
Unicode object.
344 First, we have the stateless encoders/decoders. These do not work in
345 chunks as the stream codecs (see below) do, because all components are
346 expected to be available in memory.
350 """ Defines the interface for stateless encoders/decoders.
352 The .encode()/.decode() methods may implement different error
353 handling schemes by providing the errors argument. These
354 string values are defined:
356 'strict' - raise an error (or a subclass)
357 'ignore' - ignore the character and continue with the next
358 'replace' - replace with a suitable replacement character;
359 Python will use the official U+FFFD REPLACEMENT
360 CHARACTER for the builtin Unicode codecs.
    def encode(self,input,errors='strict'):

        """ Encodes the object input and returns a tuple (output
            object, length consumed).

            errors defines the error handling to apply. It defaults to
            'strict' handling.

            The method may not store state in the Codec instance. Use
            StreamCodec for codecs which have to keep state in order to
            make encoding/decoding efficient.

        """
        ...

    def decode(self,input,errors='strict'):

        """ Decodes the object input and returns a tuple (output
            object, length consumed).

            input must be an object which provides the bf_getreadbuf
            buffer slot. Python strings, buffer objects and memory
            mapped files are examples of objects providing this slot.

            errors defines the error handling to apply. It defaults to
            'strict' handling.

            The method may not store state in the Codec instance. Use
            StreamCodec for codecs which have to keep state in order to
            make encoding/decoding efficient.

        """
        ...
397 StreamWriter and StreamReader define the interface for stateful
398 encoders/decoders which work on streams. These allow processing of the
399 data in chunks to efficiently use memory. If you have large strings in
400 memory, you may want to wrap them with cStringIO objects and then use
401 these codecs on them to be able to do chunk processing as well,
402 e.g. to provide progress information to the user.
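A sketch of this wrapping technique, using the codec registry to
obtain the stream factories (names as defined in the Codecs Lookup
section above):

    import codecs, cStringIO

    # Wrap an in-memory buffer with a UTF-8 StreamWriter
    encode, decode, stream_reader, stream_writer = codecs.lookup('utf-8')
    buffer = cStringIO.StringIO()
    writer = stream_writer(buffer)
    writer.write(u'abc\u1234')             # encoded output goes to buffer
    data = buffer.getvalue()               # UTF-8 encoded string

    # Read it back in via a StreamReader
    reader = stream_reader(cStringIO.StringIO(data))
    u = reader.read()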
404 class StreamWriter(Codec):
    def __init__(self,stream,errors='strict'):

        """ Creates a StreamWriter instance.

            stream must be a file-like object open for writing
            (binary) data.

            The StreamWriter may implement different error handling
            schemes by providing the errors keyword argument. These
            parameters are defined:

             'strict' - raise a ValueError (or a subclass)
             'ignore' - ignore the character and continue with the next
             'replace' - replace with a suitable replacement character

        """
        self.stream = stream
        self.errors = errors

    def write(self,object):

        """ Writes the object's contents encoded to self.stream.
        """
        data, consumed = self.encode(object,self.errors)
        self.stream.write(data)

    def writelines(self, list):

        """ Writes the concatenated list of strings to the stream
            using the .write() method.
        """
        self.write(''.join(list))

    def reset(self):

        """ Flushes and resets the codec buffers used for keeping state.

            Calling this method should ensure that the data on the
            output is put into a clean state, that allows appending
            of new fresh data without having to rescan the whole
            stream to recover state.

        """
        pass

    def __getattr__(self,name,
                    getattr=getattr):

        """ Inherit all other methods from the underlying stream.
        """
        return getattr(self.stream,name)
459 class StreamReader(Codec):
    def __init__(self,stream,errors='strict'):

        """ Creates a StreamReader instance.

            stream must be a file-like object open for reading
            (binary) data.

            The StreamReader may implement different error handling
            schemes by providing the errors keyword argument. These
            parameters are defined:

             'strict' - raise a ValueError (or a subclass)
             'ignore' - ignore the character and continue with the next
             'replace' - replace with a suitable replacement character

        """
        self.stream = stream
        self.errors = errors
    def read(self,size=-1):

        """ Decodes data from the stream self.stream and returns the
            resulting object.

            size indicates the approximate maximum number of bytes to
            read from the stream for decoding purposes. The decoder
            can modify this setting as appropriate. The default value
            -1 indicates to read and decode as much as possible. size
            is intended to prevent having to decode huge files in one
            step.

            The method should use a greedy read strategy meaning that
            it should read as much data as is allowed within the
            definition of the encoding and the given size, e.g. if
            optional encoding endings or state markers are available
            on the stream, these should be read too.

        """
        # Unsliced reading:
        if size < 0:
            return self.decode(self.stream.read())[0]

        # Sliced reading:
        read = self.stream.read
        decode = self.decode
        data = read(size)
        i = 0
        while 1:
            try:
                object, decodedbytes = decode(data)
            except ValueError,why:
                # This method is slow but should work under pretty much
                # all conditions; at most 10 tries are made
                i = i + 1
                newdata = read(1)
                if not newdata or i > 10:
                    raise
                data = data + newdata
            else:
                return object
    def readline(self, size=None):

        """ Read one line from the input stream and return the
            decoded data.

            Note: Unlike the .readlines() method, this method inherits
            the line breaking knowledge from the underlying stream's
            .readline() method -- there is currently no support for
            line breaking using the codec decoder due to lack of line
            buffering. Subclasses should however, if possible, try to
            implement this method using their own knowledge of line
            breaking.

            size, if given, is passed as size argument to the stream's
            .readline() method.

        """
        if size is None:
            line = self.stream.readline()
        else:
            line = self.stream.readline(size)
        return self.decode(line)[0]
    def readlines(self, sizehint=0):

        """ Read all lines available on the input stream
            and return them as list of lines.

            Line breaks are implemented using the codec's decoder
            method and are included in the list entries.

            sizehint, if given, is passed as size argument to the
            stream's .read() method.

        """
        if not sizehint:
            data = self.stream.read()
        else:
            data = self.stream.read(sizehint)
        return self.decode(data)[0].splitlines(1)
565 """ Resets the codec buffers used for keeping state.
567 Note that no stream repositioning should take place.
568 This method is primarily intended to be able to recover
569 from decoding errors.
574 def __getattr__(self,name,
578 """ Inherit all other methods from the underlying stream.
580 return getattr(self.stream,name)
583 Stream codec implementors are free to combine the StreamWriter and
584 StreamReader interfaces into one class. Even combining all these with
585 the Codec class should be possible.
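A hedged sketch of such a combined codec class (the Latin-1 logic and
class names are only illustrative; real codec modules would typically
use shared C mapping tables and proper errors handling):

    import codecs

    class Latin1Codec(codecs.Codec):

        # Stateless encode/decode; errors handling omitted in this sketch
        def encode(self, input, errors='strict'):
            return input.encode('latin-1'), len(input)

        def decode(self, input, errors='strict'):
            return unicode(input, 'latin-1'), len(input)

    # Combine the stateless Codec with the stream interfaces
    class Latin1StreamWriter(Latin1Codec, codecs.StreamWriter):
        pass

    class Latin1StreamReader(Latin1Codec, codecs.StreamReader):
        pass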
Implementors are free to add additional methods to enhance the codec
functionality or provide extra state information needed for them to
work. The internal codec implementation will only use the above
interfaces, though.
It is not required by the Unicode implementation to use these base
classes, only the interfaces must match; this allows writing Codecs as
extension types.
As a guideline, large mapping tables should be implemented using static
C data in separate (shared) extension modules. That way multiple
processes can share the same data.
600 A tool to auto-convert Unicode mapping files to mapping modules should be
601 provided to simplify support for additional mappings (see References).
607 The .split() method will have to know about what is considered
608 whitespace in Unicode.
614 Case conversion is rather complicated with Unicode data, since there
615 are many different conditions to respect. See
617 http://www.unicode.org/unicode/reports/tr13/
619 for some guidelines on implementing case conversion.
621 For Python, we should only implement the 1-1 conversions included in
622 Unicode. Locale dependent and other special case conversions (see the
623 Unicode standard file SpecialCasing.txt) should be left to user land
624 routines and not go into the core interpreter.
The methods .capitalize() and .iscapitalized() should follow the case
mapping algorithm defined in the above technical report as closely as
possible.
Line Breaks:
------------

Line breaking should be done for all Unicode characters having the B
property as well as the combinations CRLF, CR, LF (interpreted in that
order) and other special line separators defined by the standard.

The Unicode type should provide a .splitlines() method which returns a
list of lines according to the above specification. See Unicode
Methods & Attributes.
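For example:

    u = u'first\nsecond\r\nthird'
    assert u.splitlines() == [u'first', u'second', u'third']
    assert u.splitlines(1) == [u'first\n', u'second\r\n', u'third']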
643 Unicode Character Properties:
644 -----------------------------
646 A separate module "unicodedata" should provide a compact interface to
647 all Unicode character properties defined in the standard's
648 UnicodeData.txt file.
650 Among other things, these properties provide ways to recognize
651 numbers, digits, spaces, whitespace, etc.
653 Since this module will have to provide access to all Unicode
654 characters, it will eventually have to contain the data from
655 UnicodeData.txt which takes up around 600kB. For this reason, the data
should be stored in static C data. This enables compilation as a shared
module which the underlying OS can share between processes (unlike
normal Python code modules).
660 There should be a standard Python interface for accessing this information
661 so that other implementors can plug in their own possibly enhanced versions,
662 e.g. ones that do decompressing of the data on-the-fly.
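Usage could then look roughly like this (a sketch; the accessor names
shown are assumptions about that module's eventual interface):

    import unicodedata

    assert unicodedata.category(u'A') == 'Lu'        # uppercase letter
    assert unicodedata.decimal(u'3') == 3
    assert unicodedata.bidirectional(u' ') == 'WS'   # whitespace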
665 Private Code Point Areas:
666 -------------------------
Support for these is left to user land Codecs and not explicitly
integrated into the core. Note that due to the Internal Format being
implemented, only the area between \uE000 and \uF8FF is usable for
private encodings.
The internal format for Unicode objects should use a Python specific
fixed format <PythonUnicode> implemented as 'unsigned short' (or
another unsigned numeric type having 16 bits). Byte order is platform
dependent.
682 This format will hold UTF-16 encodings of the corresponding Unicode
683 ordinals. The Python Unicode implementation will address these values
684 as if they were UCS-2 values. UCS-2 and UTF-16 are the same for all
685 currently defined Unicode character points. UTF-16 without surrogates
686 provides access to about 64k characters and covers all characters in
687 the Basic Multilingual Plane (BMP) of Unicode.
It is the Codec's responsibility to ensure that the data they pass to
the Unicode object constructor respects this assumption. The
constructor does not check the data for Unicode compliance or use of
surrogates.
694 Future implementations can extend the 32 bit restriction to the full
695 set of all UTF-16 addressable characters (around 1M characters).
697 The Unicode API should provide interface routines from <PythonUnicode>
698 to the compiler's wchar_t which can be 16 or 32 bit depending on the
699 compiler/libc/platform being used.
701 Unicode objects should have a pointer to a cached Python string object
702 <defenc> holding the object's value using the <default encoding>.
703 This is needed for performance and internal parsing (see Internal
704 Argument Parsing) reasons. The buffer is filled when the first
705 conversion request to the <default encoding> is issued on the object.
707 Interning is not needed (for now), since Python identifiers are
708 defined as being ASCII only.
710 codecs.BOM should return the byte order mark (BOM) for the format
711 used internally. The codecs module should provide the following
712 additional constants for convenience and reference (codecs.BOM will
713 either be BOM_BE or BOM_LE depending on the platform):
BOM_BE: '\376\377'
    (corresponds to Unicode U+0000FEFF in UTF-16 on big endian
    platforms == ZERO WIDTH NO-BREAK SPACE)

BOM_LE: '\377\376'
    (corresponds to Unicode U+0000FFFE in UTF-16 on little endian
    platforms == defined as being an illegal Unicode character)
723 BOM4_BE: '\000\000\376\377'
724 (corresponds to Unicode U+0000FEFF in UCS-4)
726 BOM4_LE: '\377\376\000\000'
727 (corresponds to Unicode U+0000FFFE in UCS-4)
729 Note that Unicode sees big endian byte order as being "correct". The
730 swapped order is taken to be an indicator for a "wrong" format, hence
731 the illegal character definition.
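A sketch of how these constants could be used to detect the byte
order of UTF-16 data read from a hypothetical data.txt file:

    import codecs

    data = open('data.txt', 'rb').read()
    if data[:2] == codecs.BOM_BE:
        u = unicode(data[2:], 'utf-16-be')
    elif data[:2] == codecs.BOM_LE:
        u = unicode(data[2:], 'utf-16-le')
    else:
        # let the 'utf-16' codec handle (and require) the BOM itself
        u = unicode(data, 'utf-16')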
The configure script should provide aid in deciding whether Python can
use the native wchar_t type or not (it has to be a 16-bit unsigned
type).
Buffer Interface:
-----------------

Implement the buffer interface using the <defenc> Python string object
as basis for bf_getcharbuf and the internal buffer for
bf_getreadbuf. If bf_getcharbuf is requested and the <defenc> object
does not yet exist, it is created first.
Note that as a special case, the parser marker "s#" will not return raw
Unicode UTF-16 data (which the bf_getreadbuf returns), but instead
tries to encode the Unicode object using the default encoding and then
returns a pointer to the resulting string object (or raises an
exception in case the conversion fails). This was done in order to
prevent accidentally writing binary data to an output stream which the
other end might not recognize.
This has the advantage of being able to write to output streams (which
typically use this interface) without additional specification of the
encoding to use.
759 use the PyObject_AsReadBuffer() interface.
761 The internal format can also be accessed using the 'unicode-internal'
762 codec, e.g. via u.encode('unicode-internal').
Pickle/Marshalling:
-------------------

Should have native Unicode object support. The objects should be
encoded using platform independent encodings.
771 Marshal should use UTF-8 and Pickle should either choose
772 Raw-Unicode-Escape (in text mode) or UTF-8 (in binary mode) as
773 encoding. Using UTF-8 instead of UTF-16 has the advantage of
774 eliminating the need to store a BOM mark.
780 Secret Labs AB is working on a Unicode-aware regular expression
781 machinery. It works on plain 8-bit, UCS-2, and (optionally) UCS-4
782 internal character buffers.
786 http://www.unicode.org/unicode/reports/tr18/
788 for some remarks on how to treat Unicode REs.
Formatting Markers:
-------------------

Format markers are used in Python format strings. If Python strings
are used as format strings, the following interpretations should be in
effect:

'%s': For Unicode objects this will cause coercion of the
      whole format string to Unicode. Note that
      you should use a Unicode format string to start
      with for performance reasons.
803 In case the format string is an Unicode object, all parameters are coerced
804 to Unicode first and then put together and formatted according to the format
805 string. Numbers are first converted to strings and then to Unicode.
807 '%s': Python strings are interpreted as Unicode
808 string using the <default encoding>. Unicode
809 objects are taken as is.
811 All other string formatters should work accordingly.
815 u"%s %s" % (u"abc", "abc") == u"abc abc"
818 Internal Argument Parsing:
819 --------------------------
821 These markers are used by the PyArg_ParseTuple() APIs:
823 "U": Check for Unicode object and return a pointer to it
825 "s": For Unicode objects: return a pointer to the object's
826 <defenc> buffer (which uses the <default encoding>).
828 "s#": Access to the default encoded version of the Unicode object
829 (see Buffer Interface); note that the length relates to the length
830 of the default encoded string rather than the Unicode object length.
"es": Takes two parameters: encoding (const char *) and
      buffer (char **).

      The input object is first coerced to Unicode in the usual way
      and then encoded into a string using the given encoding.

      On output, a buffer of the needed size is allocated and
      returned through *buffer as NULL-terminated string.
      The encoded string may not contain embedded NULL characters.
      The caller is responsible for calling PyMem_Free()
      to free the allocated *buffer after usage.
"es#": Takes three parameters: encoding (const char *),
       buffer (char **) and buffer_len (int *).
851 The input object is first coerced to Unicode in the usual way
852 and then encoded into a string using the given encoding.
854 If *buffer is non-NULL, *buffer_len must be set to sizeof(buffer)
855 on input. Output is then copied to *buffer.
857 If *buffer is NULL, a buffer of the needed size is
858 allocated and output copied into it. *buffer is then
859 updated to point to the allocated memory area.
860 The caller is responsible for calling PyMem_Free()
861 to free the allocated *buffer after usage.
863 In both cases *buffer_len is updated to the number of
864 characters written (excluding the trailing NULL-byte).
865 The output buffer is assured to be NULL-terminated.
869 Using "es#" with auto-allocation:
    static PyObject *
    test_parser(PyObject *self,
                PyObject *args)
    {
        PyObject *str;
        const char *encoding = "latin-1";
        char *buffer = NULL;
        int buffer_len = 0;

        if (!PyArg_ParseTuple(args, "es#:test_parser",
                              encoding, &buffer, &buffer_len))
            return NULL;
        if (!buffer) {
            PyErr_SetString(PyExc_SystemError,
                            "buffer is NULL");
            return NULL;
        }
        str = PyString_FromStringAndSize(buffer, buffer_len);
        PyMem_Free(buffer);
        return str;
    }
893 Using "es" with auto-allocation returning a NULL-terminated string:
    static PyObject *
    test_parser(PyObject *self,
                PyObject *args)
    {
        PyObject *str;
        const char *encoding = "latin-1";
        char *buffer = NULL;

        if (!PyArg_ParseTuple(args, "es:test_parser",
                              encoding, &buffer))
            return NULL;
        if (!buffer) {
            PyErr_SetString(PyExc_SystemError,
                            "buffer is NULL");
            return NULL;
        }
        str = PyString_FromString(buffer);
        PyMem_Free(buffer);
        return str;
    }
916 Using "es#" with a pre-allocated buffer:
    static PyObject *
    test_parser(PyObject *self,
                PyObject *args)
    {
        PyObject *str;
        const char *encoding = "latin-1";
        char _buffer[10];
        char *buffer = _buffer;
        int buffer_len = sizeof(_buffer);

        if (!PyArg_ParseTuple(args, "es#:test_parser",
                              encoding, &buffer, &buffer_len))
            return NULL;
        if (!buffer) {
            PyErr_SetString(PyExc_SystemError,
                            "buffer is NULL");
            return NULL;
        }
        str = PyString_FromStringAndSize(buffer, buffer_len);
        return str;
    }
File/Stream Output:
-------------------

Since file.write(object) and most other stream writers use the "s#" or
"t#" argument parsing marker for querying the data to write, the
default encoded string version of the Unicode object will be written
to the streams (see Buffer Interface).
949 For explicit handling of files using Unicode, the standard stream
950 codecs as available through the codecs module should be used.
The codecs module should provide a short-cut open(filename,mode,encoding)
available which also assures that mode contains the 'b' character when
needed.
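Usage could then look like this (a sketch based on the proposed
short-cut; unicode.txt is a hypothetical file name):

    import codecs

    # write a Unicode string as UTF-8 encoded data
    f = codecs.open('unicode.txt', 'wb', encoding='utf-8')
    f.write(u'abc\u1234\n')
    f.close()

    # read it back as Unicode
    f = codecs.open('unicode.txt', 'rb', encoding='utf-8')
    u = f.read()
    f.close()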
960 Only the user knows what encoding the input data uses, so no special
961 magic is applied. The user will have to explicitly convert the string
962 data to Unicode objects as needed or use the file wrappers defined in
963 the codecs module (see File/Stream Output).
966 Unicode Methods & Attributes:
967 -----------------------------
969 All Python string methods, plus:
971 .encode([encoding=<default encoding>][,errors="strict"])
972 --> see Unicode Output
.splitlines([include_breaks=0])
   --> breaks the Unicode string into a list of (Unicode) lines;
       returns the lines with line breaks included, if include_breaks
       is true. See Line Breaks for a specification of how line breaking
       is done.
984 We should use Fredrik Lundh's Unicode object implementation as basis.
985 It already implements most of the string methods needed and provides a
986 well written code base which we can build upon.
988 The object sharing implemented in Fredrik's implementation should
995 Test cases should follow those in Lib/test/test_string.py and include
996 additional checks for the Codec Registry and the Standard Codecs.
References:
-----------

Unicode Consortium:
    http://www.unicode.org/

Unicode FAQ:
    http://www.unicode.org/unicode/faq/

Unicode 3.0:
    http://www.unicode.org/unicode/standard/versions/Unicode3.0.html
1011 Unicode-TechReports:
1012 http://www.unicode.org/unicode/reports/techreports.html
Unicode Mappings:
    ftp://ftp.unicode.org/Public/MAPPINGS/
Introduction to Unicode (a little outdated but still nice to read):
1018 http://www.nada.kth.se/i18n/ucs/unicode-iso10646-oview.html
1021 Introducing Unicode to ECMAScript (aka JavaScript) --
1022 http://www-4.ibm.com/software/developer/library/internationalization-support.html
1024 IANA Character Set Names:
1025 ftp://ftp.isi.edu/in-notes/iana/assignments/character-sets
1027 Discussion of UTF-8 and Unicode support for POSIX and Linux:
1028 http://www.cl.cam.ac.uk/~mgk25/unicode.html
Overview of the UTF encodings:
    http://czyborra.com/utf/

UCS-2:
    http://www.uazone.com/multiling/unicode/ucs2.html

UTF-7:
    Defined in RFC2152, e.g.
    http://www.uazone.com/multiling/ml-docs/rfc2152.txt

UTF-8:
    Defined in RFC2279, e.g.
    http://info.internet.isi.edu/in-notes/rfc/files/rfc2279.txt

UTF-16:
    http://www.uazone.com/multiling/unicode/wg2n1035.html
1050 History of this Proposal:
1051 -------------------------
1052 1.7: Added note about the changed behaviour of "s#".
1053 1.6: Changed <defencstr> to <defenc> since this is the name used in the
1054 implementation. Added notes about the usage of <defenc> in the
1055 buffer protocol implementation.
1056 1.5: Added notes about setting the <default encoding>. Fixed some
1057 typos (thanks to Andrew Kuchling). Changed <defencstr> to <utf8str>.
1058 1.4: Added note about mixed type comparisons and contains tests.
1059 Changed treating of Unicode objects in format strings (if used
1060 with '%s' % u they will now cause the format string to be
1061 coerced to Unicode, thus producing a Unicode object on return).
1062 Added link to IANA charset names (thanks to Lars Marius Garshol).
1063 Added new codec methods .readline(), .readlines() and .writelines().
1064 1.3: Added new "es" and "es#" parser markers
1065 1.2: Removed POD about codecs.open()
1066 1.1: Added note about comparisons and hash values. Added note about
1067 case mapping algorithms. Changed stream codecs .read() and
1068 .write() method to match the standard file-like object methods
1069 (bytes consumed information is no longer returned by the methods)
1070 1.0: changed encode Codec method to be symmetric to the decode method
1071 (they both return (object, data consumed) now and thus become
1072 interchangeable); removed __init__ method of Codec class (the
1073 methods are stateless) and moved the errors argument down to the
1074 methods; made the Codec design more generic w/r to type of input
1075 and output objects; changed StreamWriter.flush to StreamWriter.reset
1076 in order to avoid overriding the stream's .flush() method;
1077 renamed .breaklines() to .splitlines(); renamed the module unicodec
1078 to codecs; modified the File I/O section to refer to the stream codecs.
1079 0.9: changed errors keyword argument definition; added 'replace' error
1080 handling; changed the codec APIs to accept buffer like objects on
1081 input; some minor typo fixes; added Whitespace section and
1082 included references for Unicode characters that have the whitespace
1083 and the line break characteristic; added note that search functions
can expect lower-case encoding names; dropped slicing and offsets
in the codec APIs
1086 0.8: added encodings package and raw unicode escape encoding; untabified
1087 the proposal; added notes on Unicode format strings; added
1088 .breaklines() method
1089 0.7: added a whole new set of codec APIs; added a different encoder
1090 lookup scheme; fixed some names
1091 0.6: changed "s#" to "t#"; changed <defencbuf> to <defencstr> holding
1092 a real Python string object; changed Buffer Interface to delegate
1093 requests to <defencstr>'s buffer interface; removed the explicit
1094 reference to the unicodec.codecs dictionary (the module can implement
1095 this in way fit for the purpose); removed the settable default
1096 encoding; move UnicodeError from unicodec to exceptions; "s#"
now returns the internal data; passed the UCS-2/UTF-16 checking
1098 from the Unicode constructor to the Codecs
1099 0.5: moved sys.bom to unicodec.BOM; added sections on case mapping,
1100 private use encodings and Unicode character properties
1101 0.4: added Codec interface, notes on %-formatting, changed some encoding
1102 details, added comments on stream wrappers, fixed some discussion
1103 points (most important: Internal Format), clarified the
1104 'unicode-escape' encoding, added encoding references
1105 0.3: added references, comments on codec modules, the internal format,
1106 bf_getcharbuffer and the RE engine; added 'unicode-escape' encoding
1107 proposed by Tim Peters and fixed repr(u) accordingly
0.2: integrated Guido's suggestions, added stream codecs and file
     wrapping
0.1: first version
1113 -----------------------------------------------------------------------------
1114 Written by Marc-Andre Lemburg, 1999-2000, mal@lemburg.com
1115 -----------------------------------------------------------------------------