.. SPDX-License-Identifier: GPL-2.0

============================
Ceph Distributed File System
============================

Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.

Basic features include:

 * Seamless scaling from 1 to many thousands of nodes
 * High availability and reliability.  No single point of failure.
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons
 * Flexible snapshots (on any directory)
 * Recursive accounting (nested files, directories, bytes)

In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre.  Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons.  File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughput.  When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads.  In
particular, inodes with only a single link are embedded in
directories, allowing entire directories of dentries and inodes to be
loaded into its cache with a single I/O operation.  The contents of
extremely large directories can be fragmented and managed by
independent metadata servers, allowing scalable concurrent access.

The system offers automatic data rebalancing/migration when scaling
from a small cluster of just a few nodes to many hundreds, without
requiring an administrator to carve the data set into static volumes
or go through the tedious process of migrating data between servers.
When the file system approaches full capacity, new nodes can be easily
added and things will "just work."

Ceph includes a flexible snapshot mechanism that allows a user to
create a snapshot on any subdirectory (and its nested contents) in the
system.  Snapshot creation and deletion are as simple as 'mkdir
.snap/foo' and 'rmdir .snap/foo'.
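
For example, assuming the file system is mounted at /mnt/ceph and
contains a directory named mydir (both names are only illustrative),
a snapshot can be created and later removed with::

  mkdir /mnt/ceph/mydir/.snap/backup1    # snapshot mydir and everything below it
  ls /mnt/ceph/mydir/.snap               # list existing snapshots of mydir
  rmdir /mnt/ceph/mydir/.snap/backup1    # delete the snapshot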

Snapshot names have two limitations:

* They can not start with an underscore ('_'), as these names are reserved
  for internal usage by the MDS.
* They can not exceed 240 characters in size.  This is because the MDS makes
  use of long snapshot names internally, which follow the format:
  `_<SNAPSHOT-NAME>_<INODE-NUMBER>`.  Since filenames in general can't have
  more than 255 characters, and `<INODE-NUMBER>` takes 13 characters, the
  long snapshot names can take as much as 255 - 1 - 1 - 13 = 240.

Ceph also provides some recursive accounting on directories for nested
files and bytes.  You can run the commands::

  getfattr -n ceph.dir.rfiles /some/dir
  getfattr -n ceph.dir.rbytes /some/dir

to get the total number of nested files and their combined size,
respectively.  This makes the identification of large disk space
consumers relatively quick, as no 'du' or similar recursive scan of
the file system is required.
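
As a quick sketch (the mount point is only illustrative), the recursive
byte counts can be compared across the top-level directories of a mount
to spot the largest consumers::

  for d in /mnt/ceph/*/; do
      printf '%s: ' "$d"                               # directory name
      getfattr --only-values -n ceph.dir.rbytes "$d"   # total bytes below it
      echo
  done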

Finally, Ceph also allows quotas to be set on any directory in the system.
The quota can restrict the number of bytes or the number of files stored
beneath that point in the directory hierarchy.  Quotas can be set using
the extended attributes 'ceph.quota.max_files' and 'ceph.quota.max_bytes',
e.g.::

  setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
  getfattr -n ceph.quota.max_bytes /some/dir
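
A limit can be lifted again by setting the corresponding attribute back to
0, which Ceph treats as "no quota" (shown here for the same illustrative
directory)::

  setfattr -n ceph.quota.max_bytes -v 0 /some/dir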

A limitation of the current quota implementation is that it relies on the
cooperation of the client mounting the file system to stop writers when a
limit is reached.  A modified or adversarial client cannot be prevented
from writing as much data as it needs.

Mount Syntax
============

The basic mount syntax is::

  # mount -t ceph user@fsid.fs_name=/[subdir] mnt -o mon_addr=monip1[:port][/monip2[:port]]

You only need to specify a single monitor, as the client will get the
full list when it connects.  (However, if the monitor you specify
happens to be down, the mount won't succeed.)  The port can be left
off if the monitor is using the default.  So if the monitor is at
1.2.3.4::

  # mount -t ceph cephuser@07fe3187-00d9-42a3-814b-72a4d5e7d5be.cephfs=/ /mnt/ceph -o mon_addr=1.2.3.4

is sufficient.  If /sbin/mount.ceph is installed, a hostname can be
used instead of an IP address and the cluster FSID can be left out
(as the mount helper will fill it in by reading the ceph configuration
file)::

  # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=mon-addr

Multiple monitor addresses can be passed by separating each address with a slash (`/`)::

  # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=192.168.1.100/192.168.1.101

When using the mount helper, the monitor addresses can be read from the
ceph configuration file if available.  Note that the cluster FSID (passed
as part of the device string) is validated by checking it against the FSID
reported by the monitor.
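
For reference, the cluster FSID expected in the device string can be
obtained on any node with the Ceph CLI installed; the output shown here
simply reuses the example FSID from above::

  $ ceph fsid
  07fe3187-00d9-42a3-814b-72a4d5e7d5be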

Mount Options
=============

  mon_addr=ip_address[:port][/ip_address[:port]]
    Monitor address to the cluster.  This is used to bootstrap the
    connection to the cluster.  Once the connection is established, the
    monitor addresses in the monitor map are followed.

  fsid=cluster-id
    FSID of the cluster (from the `ceph fsid` command).

  ip=A.B.C.D[:N]
    Specify the IP and/or port the client should bind to locally.
    There is normally not much reason to do this.  If the IP is not
    specified, the client's IP address is determined by looking at the
    address its connection to the monitor originates from.

  wsize=X
    Specify the maximum write size in bytes.  Default: 64 MB.

  rsize=X
    Specify the maximum read size in bytes.  Default: 64 MB.

  rasize=X
    Specify the maximum readahead size in bytes.  Default: 8 MB.

  mount_timeout=X
    Specify the timeout value for mount (in seconds), in the case
    of a non-responsive Ceph file system.  The default is 60 seconds.

  caps_max=X
    Specify the maximum number of caps to hold.  Unused caps are released
    when the number of caps exceeds the limit.  The default is 0 (no limit).

  rbytes
    When stat() is called on a directory, set st_size to 'rbytes',
    the summation of file sizes over all files nested beneath that
    directory.  This is the default.

  norbytes
    When stat() is called on a directory, set st_size to the
    number of entries in that directory.

  nocrc
    Disable CRC32C calculation for data writes.  If set, the storage node
    must rely on TCP's error correction to detect data corruption in the
    data payload.

  dcache
    Use the dcache contents to perform negative lookups and
    readdir when the client has the entire directory contents in
    its cache.  (This does not change correctness; the client uses
    cached metadata only when a lease or capability ensures it is
    valid.)

  nodcache
    Do not use the dcache as above.  This avoids a significant amount of
    complex code, sacrificing performance without affecting correctness,
    and is useful for tracking down bugs.

  noasyncreaddir
    Do not use the dcache as above for readdir.

  noquotadf
    Report overall filesystem usage in statfs instead of using the root
    directory quota.

  nocopyfrom
    Don't use the RADOS 'copy-from' operation to perform remote object
    copies.  Currently, it's only used in copy_file_range, which will
    revert to the default VFS implementation if this option is used.

  recover_session=<no|clean>
    Set the auto reconnect mode in the case where the client is
    blocklisted.  The available modes are "no" and "clean".  The default
    is "no".

    * no: never attempt to reconnect when the client detects that it has
      been blocklisted.  Operations will generally fail after being
      blocklisted.

    * clean: the client reconnects to the ceph cluster automatically when
      it detects that it has been blocklisted.  During reconnect, the
      client drops dirty data/metadata and invalidates page caches and
      writable file handles.  After reconnect, file locks become stale
      because the MDS loses track of them.  If an inode contains any stale
      file locks, read/write on the inode is not allowed until applications
      release all stale file locks.
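
As an illustration of how these options are combined on the command line
(the monitor address and file system name are placeholders, and the mount
helper is assumed to be installed so the FSID can be omitted), a mount that
raises the readahead size, reports entry counts for directory sizes, and
enables automatic reconnection after blocklisting might look like::

  # mount -t ceph cephuser@cephfs=/ /mnt/ceph \
        -o mon_addr=192.168.1.100,rasize=16777216,norbytes,recover_session=clean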

More Information
================

For more information on Ceph, see the home page at
  https://ceph.com/

The Linux kernel client source tree is available at
  - https://github.com/ceph/ceph-client.git

and the source for the full system is at
  https://github.com/ceph/ceph.git