$NetBSD: storage,v 1.7 2009/10/02 07:43:01 cegger Exp $

This is a small roadmap document, and deals with the storage and file
systems side of the operating system.

The following elements and projects are pencilled in for 6.0, but
please do not rely on them being there.

Features that will be in 6.0:
 2. logical volume management
 3. a native port of Sun's ZFS
10. RAIDframe parity map

Features that are planned for 6.0:
 5. web-based management tools for storage subsystems
 6. support for flash devices - NAND and MMC/SD
 8. virtualised disks in userland
 9. in-kernel iSCSI initiator

We currently expect to branch 6.0 in the March 2010 timeframe, with a
view to a 6.0 release later in 2010.

We'll continue to update this roadmap as features and dates get firmed up.

1. devfs
--------

Devfs will allow device special files (the files used to access
devices) to be created dynamically as and when devices are attached to
the system. This will greatly reduce the number of files in a /dev
directory and remove the need to run the MAKEDEV script when support
for new devices is added to the NetBSD kernel. NetBSD's devfs
implementation will also allow multiple instances of the file system
to be mounted simultaneously, which is very useful for chroot jails.
Please contact core@ if you are interested in devfs development.

2. Logical Volume Management
----------------------------

Based on the Linux lvm2 and devmapper software, with a new kernel
component written for NetBSD. Merged into the 5.99.5 sources; it will
be in 6.0.

Responsible: haad, martin

3. Native port of Sun's ZFS
---------------------------

Two Summer of Code projects have concentrated on providing ZFS support
for NetBSD. The port of Sun's ZFS was mostly completed by haad,
building on ver's work, with modifications by ad@ to make it compile
on NetBSD, and it is based on the Sun code for the block layer.
Discussions are still taking place to get the design right for
supporting the openat(2) system call family and for the correct
architecture for reclaiming vnodes.

The ZFS source code has been committed to the repository.

Responsible: haad, ad, ver

4. Low-level FUSE interface
---------------------------

FUSE has two interfaces: the normal high-level one, and a lower-level
interface which is closer to the way standard file systems operate.
This project adds the low-level functionality in the same way that
ReFUSE adds the high-level functionality.

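As an illustration of the difference, a minimal sketch of a low-level
handler in the style of libfuse's fuse_lowlevel.h interface is shown
below; operations are keyed by inode number rather than by path name.
The example_* names are hypothetical, and the sketch only illustrates
the interface style, not NetBSD's eventual implementation.

    #define FUSE_USE_VERSION 26

    #include <fuse_lowlevel.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/stat.h>

    /* getattr is keyed by an inode number, not by a path string. */
    static void
    example_getattr(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
    {
            struct stat st;

            memset(&st, 0, sizeof(st));
            if (ino == FUSE_ROOT_ID) {
                    /* The root directory of the file system. */
                    st.st_ino = ino;
                    st.st_mode = S_IFDIR | 0755;
                    st.st_nlink = 2;
                    fuse_reply_attr(req, &st, 1.0);
            } else {
                    fuse_reply_err(req, ENOENT);
            }
    }

    static const struct fuse_lowlevel_ops example_ops = {
            .getattr = example_getattr,
            /* .lookup, .readdir, .open, .read, ... */
    };
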
Responsible: pooka, agc

5. Web-based Management tools for Storage Subsystems
----------------------------------------------------

Standard tools for managing the storage subsystems that NetBSD
provides, using a standard web server as the basic user interface on
the storage device, allowing remote management from a standard web
browser.

6. Support for flash devices - NAND and MMC/SD
----------------------------------------------

The NetBSD Foundation is interested in having a file system which is
optimised to work with today's flash devices, including SSDs both with
and without wear-levelling functionality, as well as support for NAND
and MMC/SD devices. Please get in touch with core@ if you're
interested in helping out with this area of development.

7. Rump
-------

Rump support has been in NetBSD for two releases now, and continues to
be actively developed. Recent additions have included cgd support and
smbfs.

8. Virtualised disks in Userland
--------------------------------

For better support of virtualisation, jmcneill has developed a library
which provides a consistent view of virtualised disk images.

Responsible: jmcneill

9. In-kernel iSCSI Initiator
----------------------------

NetBSD has had a userland implementation of an iSCSI initiator since
NetBSD 4.99.35, based on ReFUSE. There is a possibility that an
in-kernel initiator may become available - please contact core@ if you
are interested in this functionality.

10. RAIDframe parity map
------------------------

Jed Davis successfully completed a Summer of Code project to implement
parity map zones for RAIDframe. Parity mapping drastically reduces the
amount of time spent rewriting parity after an unclean shutdown by
keeping better track of which regions might have had outstanding
writes. It is enabled by default, and can be disabled on a per-set
basis, or tuned, with the new raidctl(8) commands.

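The idea can be sketched as a simple dirty-region bitmap. The names
below (parity_map, pm_mark_dirty and so on) are hypothetical and only
illustrate the technique, not RAIDframe's actual data structures: a
region is marked dirty before writes to it are issued and marked clean
once they have quiesced, so after an unclean shutdown only the regions
still marked dirty need their parity rewritten.

    #include <stdbool.h>
    #include <stdint.h>

    #define PM_REGIONS      4096    /* regions per RAID set */

    struct parity_map {
            uint8_t         pm_dirty[PM_REGIONS / 8]; /* one bit per region */
            uint64_t        pm_region_size;           /* sectors per region */
    };

    /* Mark a region dirty before a write to the given sector is issued. */
    static void
    pm_mark_dirty(struct parity_map *pm, uint64_t sector)
    {
            uint64_t r = sector / pm->pm_region_size;

            pm->pm_dirty[r / 8] |= 1 << (r % 8);
    }

    /* Mark a region clean once all writes to it have completed. */
    static void
    pm_mark_clean(struct parity_map *pm, uint64_t region)
    {
            pm->pm_dirty[region / 8] &= ~(1 << (region % 8));
    }

    /* After an unclean shutdown, only dirty regions need parity rewritten. */
    static bool
    pm_region_dirty(const struct parity_map *pm, uint64_t region)
    {
            return (pm->pm_dirty[region / 8] >> (region % 8)) & 1;
    }
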
Merged into the 5.99.22 sources, and will be in 6.0. A separate set of
patches is available for NetBSD-5.

Tue Nov 17 07:17:20 PST 2009