 * SOME HIGH LEVEL CODE DOCUMENTATION:
 *
 * Bcache mostly works with cache sets, cache devices, and backing devices.
 *
 * Support for multiple cache devices hasn't quite been finished off yet, but
 * it's about 95% plumbed through. A cache set and its cache devices are sort
 * of like an md raid array and its component devices. Most of the code doesn't
 * care about individual cache devices; the main abstraction is the cache set.
 *
 * Support for multiple cache devices is intended to give us the ability to
 * mirror dirty cached data and metadata, without mirroring clean cached data.
 *
 * Backing devices are different, in that they have a lifetime independent of a
 * cache set. When you register a newly formatted backing device it'll come up
 * in passthrough mode, and then you can attach and detach a backing device from
 * a cache set at runtime - while it's mounted and in use. Detaching implicitly
 * invalidates any cached data for that backing device.
 *
 * A cache set can have multiple (many) backing devices attached to it.
 *
 * There are also flash only volumes - this is the reason for the distinction
 * between struct cached_dev and struct bcache_device. A flash only volume
 * works much like a bcache device that has a backing device, except the
 * "cached" data is always dirty. The end result is that we get thin
 * provisioning with very little additional code.
 *
 * Flash only volumes work but they're not production ready because the moving
 * garbage collector needs more work. More on that later.
 *
 * BUCKETS/ALLOCATION:
 *
 * Bcache is primarily designed for caching, which means that in normal
 * operation all of our available space will be allocated. Thus, we need an
 * efficient way of deleting things from the cache so we can write new things
 * to it.
 *
 * To do this, we first divide the cache device up into buckets. A bucket is the
 * unit of allocation; they're typically around 1 MB - anywhere from 128 kB to
 * 2 MB+.
 *
 * Each bucket has a 16 bit priority, and an 8 bit generation associated with
 * it. The gens and priorities for all the buckets are stored contiguously and
 * packed on disk (in a linked list of buckets - aside from the superblock, all
 * of bcache's metadata is stored in buckets).
 *
 * The priority is used to implement an LRU. We reset a bucket's priority when
 * we allocate it or on a cache hit, and every so often we decrement the
 * priority of each bucket. It could be used to implement something more
 * sophisticated, if anyone ever gets around to it.
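 *
 * For example, the decay amounts to something like this (an illustrative
 * sketch - the real code lives in the allocation path, not here):
 *
 *	on allocation or cache hit:	b->prio = INITIAL_PRIO;
 *	periodically, for every bucket:	if (b->prio) b->prio--;
 *
 * so the lowest-priority buckets are the least recently used.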
 *
 * The generation is used for invalidating buckets. Each pointer also has an 8
 * bit generation embedded in it; for a pointer to be considered valid, its gen
 * must match the gen of the bucket it points into. Thus, to reuse a bucket all
 * we have to do is increment its gen (and write its new gen to disk; we batch
 * this up).
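 *
 * In other words, invalidating everything cached in a bucket is conceptually
 * just
 *
 *	b->gen++;
 *
 * after which no pointer's gen matches, and the bucket can be reused once the
 * new gen has made it to disk.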
 *
 * Bcache is entirely COW - we never write twice to a bucket, even buckets that
 * contain metadata (including btree nodes).
 *
 * THE BTREE:
 *
 * Bcache is in large part designed around the btree.
 *
 * At a high level, the btree is just an index of key -> ptr tuples.
 *
 * Keys represent extents, and thus have a size field. Keys also have a variable
 * number of pointers attached to them (potentially zero, which is handy for
 * invalidating the cache).
 *
 * The key itself is an inode:offset pair. The inode number corresponds to a
 * backing device or a flash only volume. The offset is the ending offset of the
 * extent within the inode - not the starting offset; this makes lookups
 * slightly more convenient.
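 *
 * For example, an extent covering sectors 8 through 15 (inclusive) of inode 5
 * has KEY_INODE() == 5, KEY_OFFSET() == 16 and KEY_SIZE() == 8 - see the KEY()
 * and KEY_START() macros further down in this file.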
 *
 * Pointers contain the cache device id, the offset on that device, and an 8 bit
 * generation number. More on the gen later.
 *
 * Index lookups are not fully abstracted - cache lookups in particular are
 * still somewhat mixed in with the btree code, but things are headed in that
 * direction.
 *
 * Updates are fairly well abstracted, though. There are two different ways of
 * updating the btree; insert and replace.
 *
 * BTREE_INSERT will just take a list of keys and insert them into the btree -
 * overwriting (possibly only partially) any extents they overlap with. This is
 * used to update the index after a write.
 *
 * BTREE_REPLACE is really cmpxchg(); it inserts a key into the btree iff it is
 * overwriting a key that matches another given key. This is used for inserting
 * data into the cache after a cache miss, and for background writeback, and for
 * the moving garbage collector.
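 *
 * I.e., much like cmpxchg(ptr, old, new) - an illustrative sketch:
 *
 *	if (the index still holds a key matching old_key)
 *		replace it with new_key;
 *	else
 *		do nothing;
 *
 * which is what keeps a cache miss insertion or writeback from clobbering data
 * that was written in the meantime.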
 *
 * There is no "delete" operation; deleting things from the index is
 * accomplished either by invalidating pointers (by incrementing a bucket's
 * gen) or by inserting a key with 0 pointers - which will overwrite anything
 * previously present at that location in the index.
 *
 * This means that there are always stale/invalid keys in the btree. They're
 * filtered out by the code that iterates through a btree node, and removed when
 * a btree node is rewritten.
 *
 * BTREE NODES:
 *
 * Our unit of allocation is a bucket, and we can't arbitrarily allocate and
 * free smaller than a bucket - so, that's how big our btree nodes are.
 *
 * (If buckets are really big we'll only use part of the bucket for a btree node
 * - no less than 1/4th - but a bucket still contains no more than a single
 * btree node. I'd actually like to change this, but for now we rely on the
 * bucket's gen for deleting btree nodes when we rewrite/split a node.)
 *
 * Anyways, btree nodes are big - big enough to be inefficient with a textbook
 * btree implementation.
 *
 * The way this is solved is that btree nodes are internally log structured; we
 * can append new keys to an existing btree node without rewriting it. This
 * means each set of keys we write is sorted, but the node is not.
 *
 * We maintain this log structure in memory - keeping 1 MB of keys sorted would
 * be expensive, and we have to distinguish between the keys we have written and
 * the keys we haven't. So to do a lookup in a btree node, we have to search
 * each sorted set. But we do merge written sets together lazily, so the cost of
 * these extra searches is quite low (normally most of the keys in a btree node
 * will be in one big set, and then there'll be one or two sets that are much
 * smaller).
 *
 * This log structure makes bcache's btree more of a hybrid between a
 * conventional btree and a compacting data structure, with some of the
 * advantages of both.
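 *
 * Concretely, a lookup within a node is along the lines of (illustrative
 * pseudocode):
 *
 *	for each sorted set s in the node:
 *		binary search s for the search key;
 *	merge the per-set results;
 *
 * with the number of sets kept small by the lazy merging described above.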
 *
 * GARBAGE COLLECTION:
 *
 * We can't just invalidate any bucket - it might contain dirty data or
 * metadata. If it once contained dirty data, other writes might overwrite it
 * later, leaving no valid pointers into that bucket in the index.
 *
 * Thus, the primary purpose of garbage collection is to find buckets to reuse.
 * It also counts how much valid data each bucket currently contains, so that
 * allocation can reuse buckets sooner when they've been mostly overwritten.
 *
 * It also does some things that are really internal to the btree
 * implementation. If a btree node contains pointers that are stale by more than
 * some threshold, it rewrites the btree node to avoid the bucket's generation
 * wrapping around. It also merges adjacent btree nodes if they're empty enough.
 *
 * THE JOURNAL:
 *
 * Bcache's journal is not necessary for consistency; we always strictly
 * order metadata writes so that the btree and everything else is consistent on
 * disk in the event of an unclean shutdown, and in fact bcache had writeback
 * caching (with recovery from unclean shutdown) before journalling was
 * implemented.
 *
 * Rather, the journal is purely a performance optimization; we can't complete a
 * write until we've updated the index on disk, otherwise the cache would be
 * inconsistent in the event of an unclean shutdown. This means that without the
 * journal, on random write workloads we constantly have to update all the leaf
 * nodes in the btree, and those writes will be mostly empty (appending at most
 * a few keys each) - highly inefficient in terms of amount of metadata writes,
 * and it puts more strain on the various btree resorting/compacting code.
 *
 * The journal is just a log of keys we've inserted; on startup we just reinsert
 * all the keys in the open journal entries. That means that when we're updating
 * a node in the btree, we can wait until a 4k block of keys fills up before
 * writing them out.
 *
 * For simplicity, we only journal updates to leaf nodes; updates to parent
 * nodes are rare enough (since our leaf nodes are huge) that it wasn't worth
 * the complexity to deal with journalling them (in particular, journal replay)
 * - updates to non leaf nodes just happen synchronously (see btree_split()).
 */
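
/*
 * An illustrative sketch of what the journal buys us (the real machinery
 * lives in journal.c):
 *
 *	on insert:	append the keys to the current journal entry, and ack
 *			the write as soon as that journal entry is on disk
 *	on startup:	for each open journal entry, in order, reinsert every
 *			key it contains into the btree
 */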

#define pr_fmt(fmt) "bcache: %s() " fmt "\n", __func__

#include <linux/bio.h>
#include <linux/kobject.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>
#include <linux/rwsem.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct bucket {
	atomic_t	pin;
	uint16_t	prio;
	uint8_t		gen;
	uint8_t		disk_gen;
	uint8_t		last_gc; /* Most out of date gen in the btree */
	uint8_t		gc_gen;
	uint16_t	gc_mark;
};

/*
 * I'd use bitfields for these, but I don't trust the compiler not to screw me
 * as multiple threads touch struct bucket without locking
 */

BITMASK(GC_MARK,	 struct bucket, gc_mark, 0, 2);
#define GC_MARK_RECLAIMABLE	0
#define GC_MARK_DIRTY		1
#define GC_MARK_METADATA	2
BITMASK(GC_SECTORS_USED, struct bucket, gc_mark, 2, 14);
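
/*
 * BITMASK() (defined elsewhere in bcache) generates a getter/setter pair for
 * a range of bits within a field - a sketch of the expansion, assuming it
 * mirrors the PTR_FIELD() macro further down in this file:
 *
 *	static inline uint64_t GC_MARK(const struct bucket *k)
 *	{ return (k->gc_mark >> 0) & ~(((uint64_t) ~0) << 2); }
 *
 *	static inline void SET_GC_MARK(struct bucket *k, uint64_t v)
 *	{ mask bits 0..1 out of k->gc_mark, then or in v; }
 */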

/* Enough for a key with 6 pointers */
#define BKEY_PAD		8

#define BKEY_PADDED(key)					\
	union { struct bkey key; uint64_t key ## _pad[BKEY_PAD]; }

/* Version 0: Cache device
 * Version 1: Backing device
 * Version 2: Seed pointer into btree node checksum
 * Version 3: Cache device with new UUID format
 * Version 4: Backing device with data offset
 */
#define BCACHE_SB_VERSION_CDEV			0
#define BCACHE_SB_VERSION_BDEV			1
#define BCACHE_SB_VERSION_CDEV_WITH_UUID	3
#define BCACHE_SB_VERSION_BDEV_WITH_OFFSET	4
#define BCACHE_SB_MAX_VERSION			4

#define SB_LABEL_SIZE		32
#define SB_JOURNAL_BUCKETS	256U
/* SB_JOURNAL_BUCKETS must be divisible by BITS_PER_LONG */
#define MAX_CACHES_PER_SET	8

#define BDEV_DATA_START_DEFAULT	16	/* sectors */

struct cache_sb {
	uint64_t		offset;	/* sector where this sb was written */

	union {
		uint8_t		set_uuid[16];
		uint64_t	set_magic;
	};
	uint8_t			label[SB_LABEL_SIZE];

	uint64_t		flags;

	/* Cache devices */
	uint64_t		nbuckets;	/* device size */

	uint16_t		block_size;	/* sectors */
	uint16_t		bucket_size;	/* sectors */

	uint16_t		nr_in_set;
	uint16_t		nr_this_dev;

	/* Backing devices */
	uint64_t		data_offset;

	/*
	 * block_size from the cache device section is still used by
	 * backing devices, so don't add anything here until we fix
	 * things to not need it for backing devices anymore
	 */

	uint32_t		last_mount;	/* time_t */

	uint16_t		first_bucket;
	uint16_t		njournal_buckets;
	uint64_t		d[SB_JOURNAL_BUCKETS];	/* journal buckets */
};

BITMASK(CACHE_SYNC,		struct cache_sb, flags, 0, 1);
BITMASK(CACHE_DISCARD,		struct cache_sb, flags, 1, 1);
BITMASK(CACHE_REPLACEMENT,	struct cache_sb, flags, 2, 3);
#define CACHE_REPLACEMENT_LRU	0U
#define CACHE_REPLACEMENT_FIFO	1U
#define CACHE_REPLACEMENT_RANDOM 2U

BITMASK(BDEV_CACHE_MODE,	struct cache_sb, flags, 0, 4);
#define CACHE_MODE_WRITETHROUGH	0U
#define CACHE_MODE_WRITEBACK	1U
#define CACHE_MODE_WRITEAROUND	2U
#define CACHE_MODE_NONE		3U
BITMASK(BDEV_STATE,		struct cache_sb, flags, 61, 2);
#define BDEV_STATE_NONE		0U
#define BDEV_STATE_CLEAN	1U
#define BDEV_STATE_DIRTY	2U
#define BDEV_STATE_STALE	3U
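
/*
 * E.g., the accessor pair BITMASK() generates above gets used along the
 * lines of (illustrative):
 *
 *	if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY)
 *		...
 *	SET_BDEV_STATE(&dc->sb, BDEV_STATE_CLEAN);
 */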

/* Version 1: Seed pointer into btree node checksum */
#define BCACHE_BSET_VERSION	1

/*
 * This is the on disk format for btree nodes - a btree node on disk is a list
 * of these; within each set the keys are sorted
 */
struct bset {
	uint64_t		csum;
	uint64_t		magic;
	uint64_t		seq;
	uint32_t		version;
	uint32_t		keys;

	union {
		struct bkey	start[0];
		uint64_t	d[0];
	};
};

/*
 * On disk format for priorities and gens - see super.c near prio_write() for
 * more notes
 */
struct prio_set {
	uint64_t		next_bucket;

	struct bucket_disk {
		uint16_t	prio;
		uint8_t		gen;
	} __attribute((packed)) data[];
};

struct uuid_entry {
	uint32_t		invalidated;

	uint32_t		flags;

	/* Size of flash only volumes */
	uint64_t		sectors;
};

BITMASK(UUID_FLASH_ONLY,	struct uuid_entry, flags, 0, 1);

typedef bool (keybuf_pred_fn)(struct keybuf *, struct bkey *);

struct keybuf {
	struct bkey		last_scanned;

	/*
	 * Beginning and end of range in rb tree - so that we can skip taking
	 * lock and checking the rb tree when we need to check for overlapping
	 * keys.
	 */

#define KEYBUF_NR		100
	DECLARE_ARRAY_ALLOCATOR(struct keybuf_key, freelist, KEYBUF_NR);
};

struct bio_split_pool {
	struct bio_set		*bio_split;
	mempool_t		*bio_split_hook;
};

struct bio_split_hook {
	struct bio_split_pool	*p;
	bio_end_io_t		*bi_end_io;
};

struct bcache_device {
#define BCACHEDEVNAME_SIZE	12
	char			name[BCACHEDEVNAME_SIZE];

	struct gendisk		*disk;

	/* If nonzero, we're closing */
	atomic_t		closing;

	/* If nonzero, we're detaching/unregistering from cache set */
	atomic_t		detaching;

	unsigned		stripe_size_bits;
	atomic_t		*stripe_sectors_dirty;

	unsigned long		sectors_dirty_last;
	long			sectors_dirty_derivative;

	mempool_t		*unaligned_bvec;
	struct bio_set		*bio_split;

	unsigned		data_csum:1;

	int (*cache_miss)(struct btree *, struct search *,
			  struct bio *, unsigned);
	int (*ioctl) (struct bcache_device *, fmode_t, unsigned, unsigned long);

	struct bio_split_pool	bio_split_hook;
};

struct io {
	/* Used to track sequential IO so it can be skipped */
	struct hlist_node	hash;
	struct list_head	lru;

	unsigned long		jiffies;
};

struct cached_dev {
	struct list_head	list;
	struct bcache_device	disk;
	struct block_device	*bdev;

	struct bio_vec		sb_bv[1];
	struct closure_with_waitlist sb_write;

	/* Refcount on the cache set. Always nonzero when we're caching. */
	atomic_t		count;
	struct work_struct	detach;

	/*
	 * Device might not be running if it's dirty and the cache set hasn't
	 * showed up yet.
	 */
	atomic_t		running;

	/*
	 * Writes take a shared lock from start to finish; scanning for dirty
	 * data to refill the rb tree requires an exclusive lock.
	 */
	struct rw_semaphore	writeback_lock;

	/*
	 * Nonzero, and writeback has a refcount (d->count), iff there is dirty
	 * data in the cache. Protected by writeback_lock; must have a
	 * shared lock to set and exclusive lock to clear.
	 */
	atomic_t		has_dirty;

	struct bch_ratelimit	writeback_rate;
	struct delayed_work	writeback_rate_update;

	/*
	 * Internal to the writeback code, so read_dirty() can keep track of
	 * where it's at.
	 */
	sector_t		last_read;

	/* Limit number of writeback bios in flight */
	struct semaphore	in_flight;
	struct closure_with_timer writeback;

	struct keybuf		writeback_keys;

	/* For tracking sequential IO */
#define RECENT_IO_BITS	7
#define RECENT_IO	(1 << RECENT_IO_BITS)
	struct io		io[RECENT_IO];
	struct hlist_head	io_hash[RECENT_IO + 1];
	struct list_head	io_lru;

	struct cache_accounting	accounting;

	/* The rest of this all shows up in sysfs */
	unsigned		sequential_cutoff;

	unsigned		sequential_merge:1;

	unsigned		partial_stripes_expensive:1;
	unsigned		writeback_metadata:1;
	unsigned		writeback_running:1;
	unsigned char		writeback_percent;
	unsigned		writeback_delay;

	int			writeback_rate_change;
	int64_t			writeback_rate_derivative;
	uint64_t		writeback_rate_target;

	unsigned		writeback_rate_update_seconds;
	unsigned		writeback_rate_d_term;
	unsigned		writeback_rate_p_term_inverse;
	unsigned		writeback_rate_d_smooth;
};

enum alloc_watermarks {
	WATERMARK_PRIO,
	WATERMARK_METADATA,
	WATERMARK_MOVINGGC,
	WATERMARK_NONE,
	WATERMARK_MAX
};

struct cache {
	struct cache_set	*set;

	struct bio_vec		sb_bv[1];
	struct block_device	*bdev;

	unsigned		watermark[WATERMARK_MAX];

	struct task_struct	*alloc_thread;

	struct prio_set		*disk_buckets;

	/*
	 * When allocating new buckets, prio_write() gets first dibs - since we
	 * may not be able to allocate at all without writing priorities and
	 * gens. prio_buckets[] contains the last buckets we wrote priorities
	 * to (so gc can mark them as metadata), prio_next[] contains the
	 * buckets allocated for the next prio write.
	 */
	uint64_t		*prio_buckets;
	uint64_t		*prio_last_buckets;

	/*
	 * free: Buckets that are ready to be used
	 *
	 * free_inc: Incoming buckets - these are buckets that currently have
	 * cached data in them, and we can't reuse them until after we write
	 * their new gen to disk. After prio_write() finishes writing the new
	 * gens/prios, they'll be moved to the free list (and possibly discarded
	 * in the process).
	 *
	 * unused: GC found nothing pointing into these buckets (possibly
	 * because all the data they contained was overwritten), so we only
	 * need to discard them before they can be moved to the free list.
	 */
	DECLARE_FIFO(long, free);
	DECLARE_FIFO(long, free_inc);
	DECLARE_FIFO(long, unused);

	size_t			fifo_last_bucket;

	/* Allocation stuff: */
	struct bucket		*buckets;

	DECLARE_HEAP(struct bucket *, heap);

	/*
	 * max(gen - disk_gen) for all buckets. When it gets too big we have to
	 * call prio_write() to keep gens from wrapping.
	 */
	uint8_t			need_save_prio;
	unsigned		gc_move_threshold;

	/*
	 * If nonzero, we know we aren't going to find any buckets to invalidate
	 * until a gc finishes - otherwise we could pointlessly burn a ton of
	 * cpu.
	 */
	unsigned		invalidate_needs_gc:1;

	bool			discard; /* Get rid of? */

	/*
	 * We preallocate structs for issuing discards to buckets, and keep them
	 * on this list when they're not in use; do_discard() issues discards
	 * whenever there's work to do and is called by free_some_buckets() and
	 * when a discard finishes.
	 */
	atomic_t		discards_in_flight;
	struct list_head	discards;

	struct journal_device	journal;

	/* The rest of this all shows up in sysfs */
#define IO_ERROR_SHIFT		20
	atomic_long_t		meta_sectors_written;
	atomic_long_t		btree_sectors_written;
	atomic_long_t		sectors_written;

	struct bio_split_pool	bio_split_hook;
};

struct gc_stat {
	uint64_t		data;	/* sectors */
	uint64_t		dirty;	/* sectors */
	unsigned		in_use; /* percent */
};

/*
 * Flag bits, for how the cache set is shutting down, and what phase it's at:
 *
 * CACHE_SET_UNREGISTERING means we're not just shutting down, we're detaching
 * all the backing devices first (their cached data gets invalidated, and they
 * won't automatically reattach).
 *
 * CACHE_SET_STOPPING always gets set first when we're closing down a cache set;
 * we'll continue to run normally for a while with CACHE_SET_STOPPING set (i.e.
 * flushing dirty data).
 *
 * CACHE_SET_RUNNING means all cache devices have been registered and journal
 * replay is complete.
 */
#define CACHE_SET_UNREGISTERING	0
#define	CACHE_SET_STOPPING	1
#define	CACHE_SET_RUNNING	2

struct cache_set {
	struct list_head	list;
	struct kobject		internal;
	struct dentry		*debug;
	struct cache_accounting accounting;

	struct cache		*cache[MAX_CACHES_PER_SET];
	struct cache		*cache_by_alloc[MAX_CACHES_PER_SET];

	struct bcache_device	**devices;
	struct list_head	cached_devs;
	uint64_t		cached_dev_sectors;
	struct closure		caching;

	struct closure_with_waitlist sb_write;

	struct bio_set		*bio_split;

	/* For the btree cache */
	struct shrinker		shrink;

	/* For the btree cache and anything allocation related */
	struct mutex		bucket_lock;

	/* log2(bucket_size), in sectors */
	unsigned short		bucket_bits;

	/* log2(block_size), in sectors */
	unsigned short		block_bits;

	/*
	 * Default number of pages for a new btree node - may be less than a
	 * full bucket
	 */
	unsigned		btree_pages;

	/*
	 * Lists of struct btrees; lru is the list for structs that have memory
	 * allocated for actual btree node, freed is for structs that do not.
	 *
	 * We never free a struct btree, except on shutdown - we just put it on
	 * the btree_cache_freed list and reuse it later. This simplifies the
	 * code, and it doesn't cost us much memory as the memory usage is
	 * dominated by buffers that hold the actual btree node data and those
	 * can be freed - and the number of struct btrees allocated is
	 * effectively bounded.
	 *
	 * btree_cache_freeable effectively is a small cache - we use it because
	 * high order page allocations can be rather expensive, and it's quite
	 * common to delete and allocate btree nodes in quick succession. It
	 * should never grow past ~2-3 nodes in practice.
	 */
	struct list_head	btree_cache;
	struct list_head	btree_cache_freeable;
	struct list_head	btree_cache_freed;

	/* Number of elements in btree_cache + btree_cache_freeable lists */
	unsigned		bucket_cache_used;

	/*
	 * If we need to allocate memory for a new btree node and that
	 * allocation fails, we can cannibalize another node in the btree cache
	 * to satisfy the allocation. However, only one thread can be doing this
	 * at a time, for obvious reasons - try_harder and try_wait are
	 * basically a lock for this that we can wait on asynchronously. The
	 * btree_root() macro releases the lock when it returns.
	 */
	struct closure		*try_harder;
	struct closure_waitlist	try_wait;
	uint64_t		try_harder_start;

	/*
	 * When we free a btree node, we increment the gen of the bucket the
	 * node is in - but we can't rewrite the prios and gens until we
	 * finished whatever it is we were doing, otherwise after a crash the
	 * btree node would be freed but for say a split, we might not have the
	 * pointers to the new nodes inserted into the btree yet.
	 *
	 * This is a refcount that blocks prio_write() until the new keys are
	 * written.
	 */
	atomic_t		prio_blocked;
	struct closure_waitlist	bucket_wait;

	/*
	 * For any bio we don't skip we subtract the number of sectors from
	 * rescale; when it hits 0 we rescale all the bucket priorities.
	 */
	atomic_t		rescale;

	/*
	 * When we invalidate buckets, we use both the priority and the amount
	 * of good data to determine which buckets to reuse first - to weight
	 * those together consistently we keep track of the smallest nonzero
	 * priority of any bucket.
	 */
	uint16_t		min_prio;

	/*
	 * max(gen - gc_gen) for all buckets. When it gets too big we have to gc
	 * to keep gens from wrapping around.
	 */
	uint8_t			need_gc;
	struct gc_stat		gc_stats;

	struct closure_with_waitlist gc;
	/* Where in the btree gc currently is */
	struct bkey		gc_done;

	/*
	 * The allocation code needs gc_mark in struct bucket to be correct, but
	 * it's not while a gc is in progress. Protected by bucket_lock.
	 */
	int			gc_mark_valid;

	/* Counts how many sectors bio_insert has added to the cache */
	atomic_t		sectors_to_gc;

	struct closure		moving_gc;
	struct closure_waitlist	moving_gc_wait;
	struct keybuf		moving_gc_keys;
	/* Number of moving GC bios in flight */
	atomic_t		in_flight;

#ifdef CONFIG_BCACHE_DEBUG
	struct btree		*verify_data;
	struct mutex		verify_lock;
#endif

	struct uuid_entry	*uuids;
	BKEY_PADDED(uuid_bucket);
	struct closure_with_waitlist uuid_write;

	/*
	 * A btree node on disk could have too many bsets for an iterator to fit
	 * on the stack - have to dynamically allocate them
	 */
	mempool_t		*fill_iter;

	/*
	 * btree_sort() is a merge sort and requires temporary space - single
	 * element mempool
	 */
	struct mutex		sort_lock;

	unsigned		sort_crit_factor;

	/* List of buckets we're currently writing data to */
	struct list_head	data_buckets;
	spinlock_t		data_bucket_lock;

	struct journal		journal;

#define CONGESTED_MAX		1024
	unsigned		congested_last_us;

	/* The rest of this all shows up in sysfs */
	unsigned		congested_read_threshold_us;
	unsigned		congested_write_threshold_us;

	spinlock_t		sort_time_lock;
	struct time_stats	sort_time;
	struct time_stats	btree_gc_time;
	struct time_stats	btree_split_time;
	spinlock_t		btree_read_time_lock;
	struct time_stats	btree_read_time;
	struct time_stats	try_harder_time;

	atomic_long_t		cache_read_races;
	atomic_long_t		writeback_keys_done;
	atomic_long_t		writeback_keys_failed;
	unsigned		error_limit;
	unsigned		error_decay;
	unsigned short		journal_delay_ms;

	unsigned		key_merging_disabled:1;
	unsigned		gc_always_rewrite:1;
	unsigned		shrinker_disabled:1;
	unsigned		copy_gc_enabled:1;

#define BUCKET_HASH_BITS	12
	struct hlist_head	bucket_hash[1 << BUCKET_HASH_BITS];
};

static inline bool key_merging_disabled(struct cache_set *c)
{
#ifdef CONFIG_BCACHE_DEBUG
	return c->key_merging_disabled;
#else
	return 0;
#endif
}

static inline bool SB_IS_BDEV(const struct cache_sb *sb)
{
	return sb->version == BCACHE_SB_VERSION_BDEV
		|| sb->version == BCACHE_SB_VERSION_BDEV_WITH_OFFSET;
}

struct bbio {
	unsigned		submit_time_us;
	union {
		struct bkey	key;
		uint64_t	_pad[3];
		/*
		 * We only need pad = 3 here because we only ever carry around a
		 * single pointer - i.e. the pointer we're doing io to/from.
		 */
	};
	struct bio		bio;
};

static inline unsigned local_clock_us(void)
{
	return local_clock() >> 10;
}

#define BTREE_PRIO		USHRT_MAX
#define INITIAL_PRIO		32768

#define btree_bytes(c)		((c)->btree_pages * PAGE_SIZE)
#define btree_blocks(b)							\
	((unsigned) (KEY_SIZE(&b->key) >> (b)->c->block_bits))

#define btree_default_blocks(c)						\
	((unsigned) ((PAGE_SECTORS * (c)->btree_pages) >> (c)->block_bits))

#define bucket_pages(c)		((c)->sb.bucket_size / PAGE_SECTORS)
#define bucket_bytes(c)		((c)->sb.bucket_size << 9)
#define block_bytes(c)		((c)->sb.block_size << 9)

#define __set_bytes(i, k)	(sizeof(*(i)) + (k) * sizeof(uint64_t))
#define set_bytes(i)		__set_bytes(i, i->keys)

#define __set_blocks(i, k, c)	DIV_ROUND_UP(__set_bytes(i, k), block_bytes(c))
#define set_blocks(i, c)	__set_blocks(i, (i)->keys, c)

#define node(i, j)		((struct bkey *) ((i)->d + (j)))
#define end(i)			node(i, (i)->keys)
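
/*
 * E.g. walking every key in a bset, as the btree iteration code does
 * (bkey_next() comes from the bset code):
 *
 *	struct bkey *k;
 *
 *	for (k = i->start; k < end(i); k = bkey_next(k))
 *		...
 */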

#define index(i, b)							\
	((size_t) (((void *) i - (void *) (b)->sets[0].data) /		\
		   block_bytes(b->c)))

#define btree_data_space(b)	(PAGE_SIZE << (b)->page_order)

#define prios_per_bucket(c)				\
	((bucket_bytes(c) - sizeof(struct prio_set)) /	\
	 sizeof(struct bucket_disk))
#define prio_buckets(c)					\
	DIV_ROUND_UP((size_t) (c)->sb.nbuckets, prios_per_bucket(c))

#define JSET_MAGIC		0x245235c1a3625032ULL
#define PSET_MAGIC		0x6750e15f87337f91ULL
#define BSET_MAGIC		0x90135c78b99e07f5ULL

#define jset_magic(c)		((c)->sb.set_magic ^ JSET_MAGIC)
#define pset_magic(c)		((c)->sb.set_magic ^ PSET_MAGIC)
#define bset_magic(c)		((c)->sb.set_magic ^ BSET_MAGIC)

/* Bkey fields: all units are in sectors */

#define KEY_FIELD(name, field, offset, size)				\
	BITMASK(name, struct bkey, field, offset, size)

#define PTR_FIELD(name, offset, size)					\
	static inline uint64_t name(const struct bkey *k, unsigned i)	\
	{ return (k->ptr[i] >> offset) & ~(((uint64_t) ~0) << size); }	\
									\
	static inline void SET_##name(struct bkey *k, unsigned i, uint64_t v)\
	{								\
		k->ptr[i] &= ~(~((uint64_t) ~0 << size) << offset);	\
		k->ptr[i] |= v << offset;				\
	}

KEY_FIELD(KEY_PTRS,	high, 60, 3)
KEY_FIELD(HEADER_SIZE,	high, 58, 2)
KEY_FIELD(KEY_CSUM,	high, 56, 2)
KEY_FIELD(KEY_PINNED,	high, 55, 1)
KEY_FIELD(KEY_DIRTY,	high, 36, 1)

KEY_FIELD(KEY_SIZE,	high, 20, 16)
KEY_FIELD(KEY_INODE,	high, 0, 20)

/* Next time I change the on disk format, KEY_OFFSET() won't be 64 bits */

static inline uint64_t KEY_OFFSET(const struct bkey *k)
{
	return k->low;
}

static inline void SET_KEY_OFFSET(struct bkey *k, uint64_t v)
{
	k->low = v;
}

PTR_FIELD(PTR_DEV,		51, 12)
PTR_FIELD(PTR_OFFSET,		8, 43)
PTR_FIELD(PTR_GEN,		0, 8)

#define PTR_CHECK_DEV		((1 << 12) - 1)

#define PTR(gen, offset, dev)						\
	((((uint64_t) dev) << 51) | ((uint64_t) offset) << 8 | gen)
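
/*
 * E.g. (illustrative - this is the shape of what the allocation code does)
 * filling in a key's first pointer for bucket b of cache device ca:
 *
 *	k->ptr[0] = PTR(ca->buckets[b].gen,
 *			bucket_to_sector(ca->set, b),
 *			ca->sb.nr_this_dev);
 */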

static inline size_t sector_to_bucket(struct cache_set *c, sector_t s)
{
	return s >> c->bucket_bits;
}

static inline sector_t bucket_to_sector(struct cache_set *c, size_t b)
{
	return ((sector_t) b) << c->bucket_bits;
}

static inline sector_t bucket_remainder(struct cache_set *c, sector_t s)
{
	return s & (c->sb.bucket_size - 1);
}
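
/*
 * E.g. with bucket_size == 1024 sectors (so bucket_bits == 10), sector 5000
 * falls in bucket 4 (sectors 4096..5119), at offset 904 within the bucket.
 */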

static inline struct cache *PTR_CACHE(struct cache_set *c,
				      const struct bkey *k,
				      unsigned ptr)
{
	return c->cache[PTR_DEV(k, ptr)];
}

static inline size_t PTR_BUCKET_NR(struct cache_set *c,
				   const struct bkey *k,
				   unsigned ptr)
{
	return sector_to_bucket(c, PTR_OFFSET(k, ptr));
}

static inline struct bucket *PTR_BUCKET(struct cache_set *c,
					const struct bkey *k,
					unsigned ptr)
{
	return PTR_CACHE(c, k, ptr)->buckets + PTR_BUCKET_NR(c, k, ptr);
}
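
/*
 * Not part of the original interface - an illustrative sketch of the gen
 * matching rule described at the top of this file:
 *
 *	static inline bool ptr_valid(struct cache_set *c,
 *				     const struct bkey *k, unsigned ptr)
 *	{
 *		return PTR_GEN(k, ptr) == PTR_BUCKET(c, k, ptr)->gen;
 *	}
 */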

/* Btree key macros */

/*
 * The high bit being set is a relic from when we used it to do binary
 * searches - it told you where a key started. It's not used anymore,
 * and can probably be safely dropped.
 */
#define KEY(dev, sector, len)						\
((struct bkey) {							\
	.high = (1ULL << 63) | ((uint64_t) (len) << 20) | (dev),	\
	.low = (sector)							\
})

static inline void bkey_init(struct bkey *k)
{
	memset(k, 0, sizeof(struct bkey));
}

#define KEY_START(k)	(KEY_OFFSET(k) - KEY_SIZE(k))
#define START_KEY(k)	KEY(KEY_INODE(k), KEY_START(k), 0)
#define MAX_KEY		KEY(~(~0 << 20), ((uint64_t) ~0) >> 1, 0)
#define ZERO_KEY	KEY(0, 0, 0)

/*
 * This is used for various on disk data structures - cache_sb, prio_set, bset,
 * jset: The checksum is _always_ the first 8 bytes of these structs
 */
#define csum_set(i)							\
	bch_crc64(((void *) (i)) + sizeof(uint64_t),			\
		  ((void *) end(i)) - (((void *) (i)) + sizeof(uint64_t)))
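
/*
 * Typical (illustrative) use when reading one of those structs back in:
 *
 *	if (i->csum != csum_set(i))
 *		...reject the bset/prio_set/jset as corrupt...
 */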

/* Error handling macros */

#define btree_bug(b, ...)						\
do {									\
	if (bch_cache_set_error((b)->c, __VA_ARGS__))			\
		dump_stack();						\
} while (0)

#define cache_bug(c, ...)						\
do {									\
	if (bch_cache_set_error(c, __VA_ARGS__))			\
		dump_stack();						\
} while (0)

#define btree_bug_on(cond, b, ...)					\
do {									\
	if (cond)							\
		btree_bug(b, __VA_ARGS__);				\
} while (0)

#define cache_bug_on(cond, c, ...)					\
do {									\
	if (cond)							\
		cache_bug(c, __VA_ARGS__);				\
} while (0)

#define cache_set_err_on(cond, c, ...)					\
do {									\
	if (cond)							\
		bch_cache_set_error(c, __VA_ARGS__);			\
} while (0)

/* Looping macros */

#define for_each_cache(ca, cs, iter)					\
	for (iter = 0; ca = cs->cache[iter], iter < (cs)->sb.nr_in_set; iter++)

#define for_each_bucket(b, ca)						\
	for (b = (ca)->buckets + (ca)->sb.first_bucket;			\
	     b < (ca)->buckets + (ca)->sb.nbuckets; b++)

static inline void __bkey_put(struct cache_set *c, struct bkey *k)
{
	unsigned i;

	for (i = 0; i < KEY_PTRS(k); i++)
		atomic_dec_bug(&PTR_BUCKET(c, k, i)->pin);
}

static inline void cached_dev_put(struct cached_dev *dc)
{
	if (atomic_dec_and_test(&dc->count))
		schedule_work(&dc->detach);
}

static inline bool cached_dev_get(struct cached_dev *dc)
{
	if (!atomic_inc_not_zero(&dc->count))
		return false;

	/* Paired with the mb in cached_dev_attach */
	smp_mb__after_atomic_inc();
	return true;
}

/*
 * bucket_gc_gen() returns the difference between the bucket's current gen and
 * the oldest gen of any pointer into that bucket in the btree (last_gc).
 *
 * bucket_disk_gen() returns the difference between the current gen and the gen
 * on disk; they're both used to make sure gens don't wrap around.
 */

static inline uint8_t bucket_gc_gen(struct bucket *b)
{
	return b->gen - b->last_gc;
}

static inline uint8_t bucket_disk_gen(struct bucket *b)
{
	return b->gen - b->disk_gen;
}

#define BUCKET_GC_GEN_MAX	96U
#define BUCKET_DISK_GEN_MAX	64U
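
/*
 * E.g. (illustrative - the real check lives in the allocation code) a
 * bucket's gen may only be incremented while both differences stay bounded:
 *
 *	can_inc_bucket_gen(b) := bucket_gc_gen(b) < BUCKET_GC_GEN_MAX &&
 *				 bucket_disk_gen(b) < BUCKET_DISK_GEN_MAX
 */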

#define kobj_attribute_write(n, fn)					\
	static struct kobj_attribute ksysfs_##n = __ATTR(n, S_IWUSR, NULL, fn)

#define kobj_attribute_rw(n, show, store)				\
	static struct kobj_attribute ksysfs_##n =			\
		__ATTR(n, S_IWUSR|S_IRUSR, show, store)

static inline void wake_up_allocators(struct cache_set *c)
{
	struct cache *ca;
	unsigned i;

	for_each_cache(ca, c, i)
		wake_up_process(ca->alloc_thread);
}

/* Forward declarations */

void bch_count_io_errors(struct cache *, int, const char *);
void bch_bbio_count_io_errors(struct cache_set *, struct bio *,
			      int, const char *);
void bch_bbio_endio(struct cache_set *, struct bio *, int, const char *);
void bch_bbio_free(struct bio *, struct cache_set *);
struct bio *bch_bbio_alloc(struct cache_set *);

struct bio *bch_bio_split(struct bio *, int, gfp_t, struct bio_set *);
void bch_generic_make_request(struct bio *, struct bio_split_pool *);
void __bch_submit_bbio(struct bio *, struct cache_set *);
void bch_submit_bbio(struct bio *, struct cache_set *, struct bkey *, unsigned);

uint8_t bch_inc_gen(struct cache *, struct bucket *);
void bch_rescale_priorities(struct cache_set *, int);
bool bch_bucket_add_unused(struct cache *, struct bucket *);

long bch_bucket_alloc(struct cache *, unsigned, struct closure *);
void bch_bucket_free(struct cache_set *, struct bkey *);

int __bch_bucket_alloc_set(struct cache_set *, unsigned,
			   struct bkey *, int, struct closure *);
int bch_bucket_alloc_set(struct cache_set *, unsigned,
			 struct bkey *, int, struct closure *);

bool bch_cache_set_error(struct cache_set *, const char *, ...);

void bch_prio_write(struct cache *);
void bch_write_bdev_super(struct cached_dev *, struct closure *);

extern struct workqueue_struct *bcache_wq, *bch_gc_wq;
extern const char * const bch_cache_modes[];
extern struct mutex bch_register_lock;
extern struct list_head bch_cache_sets;

extern struct kobj_type bch_cached_dev_ktype;
extern struct kobj_type bch_flash_dev_ktype;
extern struct kobj_type bch_cache_set_ktype;
extern struct kobj_type bch_cache_set_internal_ktype;
extern struct kobj_type bch_cache_ktype;

void bch_cached_dev_release(struct kobject *);
void bch_flash_dev_release(struct kobject *);
void bch_cache_set_release(struct kobject *);
void bch_cache_release(struct kobject *);

int bch_uuid_write(struct cache_set *);
void bcache_write_super(struct cache_set *);

int bch_flash_dev_create(struct cache_set *c, uint64_t size);

int bch_cached_dev_attach(struct cached_dev *, struct cache_set *);
void bch_cached_dev_detach(struct cached_dev *);
void bch_cached_dev_run(struct cached_dev *);
void bcache_device_stop(struct bcache_device *);

void bch_cache_set_unregister(struct cache_set *);
void bch_cache_set_stop(struct cache_set *);

struct cache_set *bch_cache_set_alloc(struct cache_sb *);
void bch_btree_cache_free(struct cache_set *);
int bch_btree_cache_alloc(struct cache_set *);
void bch_moving_init_cache_set(struct cache_set *);

int bch_cache_allocator_start(struct cache *ca);
void bch_cache_allocator_exit(struct cache *ca);
int bch_cache_allocator_init(struct cache *ca);

void bch_debug_exit(void);
int bch_debug_init(struct kobject *);
void bch_writeback_exit(void);
int bch_writeback_init(void);
void bch_request_exit(void);
int bch_request_init(void);
void bch_btree_exit(void);
int bch_btree_init(void);

#endif /* _BCACHE_H */