-rw-r--r--  Documentation/device-mapper/thin-provisioning.txt   285
-rw-r--r--  drivers/md/Kconfig                                    28
-rw-r--r--  drivers/md/Makefile                                     3
-rw-r--r--  drivers/md/dm-thin-metadata.c                        1391
-rw-r--r--  drivers/md/dm-thin-metadata.h                         156
-rw-r--r--  drivers/md/dm-thin.c                                 2428
6 files changed, 4291 insertions, 0 deletions
diff --git a/Documentation/device-mapper/thin-provisioning.txt b/Documentation/device-mapper/thin-provisioning.txt
new file mode 100644
index 000000000000..801d9d1cf82b
--- /dev/null
+++ b/Documentation/device-mapper/thin-provisioning.txt
@@ -0,0 +1,285 @@
+Introduction
+============
+
+This document describes a collection of device-mapper targets that
+between them implement thin-provisioning and snapshots.
+
+The main highlight of this implementation, compared to the previous
+implementation of snapshots, is that it allows many virtual devices to
+be stored on the same data volume. This simplifies administration and
+allows the sharing of data between volumes, thus reducing disk usage.
+
+Another significant feature is support for an arbitrary depth of
+recursive snapshots (snapshots of snapshots of snapshots ...). The
+previous implementation of snapshots did this by chaining together
+lookup tables, and so performance was O(depth). This new
+implementation uses a single data structure to avoid this degradation
+with depth. Fragmentation may still be an issue, however, in some
+scenarios.
+
+Metadata is stored on a separate device from data, giving the
+administrator some freedom, for example to:
+
+- Improve metadata resilience by storing metadata on a mirrored volume
+ but data on a non-mirrored one.
+
+- Improve performance by storing the metadata on SSD.
+
+Status
+======
+
+These targets are very much still in the EXPERIMENTAL state. Please
+do not yet rely on them in production. But do experiment and offer us
+feedback. Different use cases will have different performance
+characteristics, for example due to fragmentation of the data volume.
+
+If you find this software is not performing as expected please mail
+dm-devel@redhat.com with details and we'll try our best to improve
+things for you.
+
+Userspace tools for checking and repairing the metadata are under
+development.
+
+Cookbook
+========
+
+This section describes some quick recipes for using thin provisioning.
+They use the dmsetup program to control the device-mapper driver
+directly. End users will be advised to use a higher-level volume
+manager such as LVM2 once support has been added.
+
+Pool device
+-----------
+
+The pool device ties together the metadata volume and the data volume.
+It maps I/O linearly to the data volume and updates the metadata via
+two mechanisms:
+
+- Function calls from the thin targets
+
+- Device-mapper 'messages' from userspace which control the creation of new
+ virtual devices amongst other things.
+
+Setting up a fresh pool device
+------------------------------
+
+Setting up a pool device requires a valid metadata device, and a
+data device. If you do not have an existing metadata device you can
+make one by zeroing the first 4k to indicate empty metadata.
+
+ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
+
+The amount of metadata you need will vary according to how many blocks
+are shared between thin devices (i.e. through snapshots). If you have
+less sharing than average you'll need a larger-than-average metadata device.
+
+As a guide, we suggest you calculate the number of bytes to use in the
+metadata device as 48 * $data_dev_size / $data_block_size but round it up
+to 2MB if the answer is smaller. The largest size supported is 16GB.
+
+If you're creating large numbers of snapshots which are recording large
+amounts of change, you may find you need to increase this.
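+
+As a rough illustration of the calculation above (a sketch only: it
+assumes $data_dev_size and $data_block_size are both expressed in
+512-byte sectors, so the result comes out in bytes, and the device
+names are placeholders):
+
+ data_dev_size=$(blockdev --getsz $data_dev)   # size in 512-byte sectors
+ data_block_size=1024                          # 512KB blocks
+ metadata_bytes=$((48 * data_dev_size / data_block_size))
+ min_bytes=$((2 * 1024 * 1024))                # round up to the 2MB minimum
+ [ $metadata_bytes -lt $min_bytes ] && metadata_bytes=$min_bytes
+ echo "metadata device should be at least $metadata_bytes bytes"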
+
+Reloading a pool table
+----------------------
+
+You may reload a pool's table; indeed, this is how the pool is resized
+if it runs out of space. (N.B. While specifying a different metadata
+device when reloading is not forbidden at the moment, things will go
+wrong if it does not route I/O to exactly the same on-disk location as
+previously.)
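+
+For example, to grow a pool after extending the data device, reload the
+same table line (described in the next section) with the new length and
+resume.  A sketch, using an illustrative new size of 41943040 sectors:
+
+ dmsetup suspend pool
+ dmsetup reload pool --table "0 41943040 thin-pool $metadata_dev \
+   $data_dev $data_block_size $low_water_mark"
+ dmsetup resume pool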
+
+Using an existing pool device
+-----------------------------
+
+ dmsetup create pool \
+ --table "0 20971520 thin-pool $metadata_dev $data_dev \
+ $data_block_size $low_water_mark"
+
+$data_block_size gives the smallest unit of disk space that can be
+allocated at a time expressed in units of 512-byte sectors. People
+primarily interested in thin provisioning may want to use a value such
+as 1024 (512KB). People doing lots of snapshotting may want a smaller value
+such as 128 (64KB). If you are not zeroing newly-allocated data,
+a larger $data_block_size in the region of 256000 (128MB) is suggested.
+$data_block_size must be the same for the lifetime of the
+metadata device.
+
+$low_water_mark is expressed in blocks of size $data_block_size. If
+free space on the data device drops below this level then a dm event
+will be triggered which a userspace daemon should catch allowing it to
+extend the pool device. Only one such event will be sent.
+Resuming a device with a new table itself triggers an event so the
+userspace daemon can use this to detect a situation where a new table
+already exceeds the threshold.
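+
+A very rough sketch of what such a daemon might do, using only dmsetup
+(a real volume manager would use the libdevmapper event interfaces and
+proper error handling):
+
+ while :; do
+     ev=$(dmsetup info -c --noheadings -o events pool)
+     dmsetup wait pool "$ev"   # block until the event counter passes $ev
+     dmsetup status pool       # check <used data blocks>/<total data blocks>
+     # ...extend the data device and reload the pool table if needed...
+ done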
+
+Thin provisioning
+-----------------
+
+i) Creating a new thinly-provisioned volume.
+
+ To create a new thinly-provisioned volume you must send a message to an
+ active pool device, /dev/mapper/pool in this example.
+
+ dmsetup message /dev/mapper/pool 0 "create_thin 0"
+
+ Here '0' is an identifier for the volume, a 24-bit number. It's up
+ to the caller to allocate and manage these identifiers. If the
+ identifier is already in use, the message will fail with -EEXIST.
+
+ii) Using a thinly-provisioned volume.
+
+ Thinly-provisioned volumes are activated using the 'thin' target:
+
+ dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"
+
+ The last parameter is the identifier for the thinp device.
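+
+ From here the thin device behaves like any other block device, for
+ example (purely illustrative):
+
+ mkfs.ext4 /dev/mapper/thin
+ mount /dev/mapper/thin /mnt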
+
+Internal snapshots
+------------------
+
+i) Creating an internal snapshot.
+
+ Snapshots are created with another message to the pool.
+
+ N.B. If the origin device that you wish to snapshot is active, you
+ must suspend it before creating the snapshot to avoid corruption.
+ This is NOT enforced at the moment, so please be careful!
+
+ dmsetup suspend /dev/mapper/thin
+ dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
+ dmsetup resume /dev/mapper/thin
+
+ Here '1' is the identifier for the volume, a 24-bit number. '0' is the
+ identifier for the origin device.
+
+ii) Using an internal snapshot.
+
+ Once created, the user doesn't have to worry about any connection
+ between the origin and the snapshot. Indeed the snapshot is no
+ different from any other thinly-provisioned device and can be
+ snapshotted itself via the same method. It's perfectly legal to
+ have only one of them active, and there's no ordering requirement on
+ activating or removing them both. (This differs from conventional
+ device-mapper snapshots.)
+
+ Activate it exactly the same way as any other thinly-provisioned volume:
+
+ dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"
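+
+ Since a snapshot is just another thinly-provisioned device, it can be
+ snapshotted itself in exactly the same way (the identifier '2' here is
+ arbitrary):
+
+ dmsetup suspend /dev/mapper/snap
+ dmsetup message /dev/mapper/pool 0 "create_snap 2 1"
+ dmsetup resume /dev/mapper/snap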
+
+Deactivation
+------------
+
+All devices using a pool must be deactivated before the pool itself
+can be.
+
+ dmsetup remove thin
+ dmsetup remove snap
+ dmsetup remove pool
+
+Reference
+=========
+
+'thin-pool' target
+------------------
+
+i) Constructor
+
+ thin-pool <metadata dev> <data dev> <data block size (sectors)> \
+ <low water mark (blocks)> [<number of feature args> [<arg>]*]
+
+ Optional feature arguments:
+ - 'skip_block_zeroing': skips the zeroing of newly-provisioned blocks.
+
+ Data block size must be between 64KB (128 sectors) and 1GB
+ (2097152 sectors) inclusive.
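+
+ An illustrative table line using 128-sector (64KB) data blocks, a low
+ water mark of 32768 blocks and the skip_block_zeroing feature (device
+ names and sizes are examples only):
+
+   0 20971520 thin-pool /dev/mapper/meta /dev/mapper/data 128 32768 1 skip_block_zeroing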
+
+
+ii) Status
+
+ <transaction id> <used metadata blocks>/<total metadata blocks>
+ <used data blocks>/<total data blocks> <held metadata root>
+
+
+ transaction id:
+ A 64-bit number used by userspace to help synchronise with metadata
+ from volume managers.
+
+ used data blocks / total data blocks
+ If the number of free blocks drops below the pool's low water mark a
+ dm event will be sent to userspace. This event is edge-triggered and
+ it will occur only once after each resume so volume manager writers
+ should register for the event and then check the target's status.
+
+ held metadata root:
+ The location, in sectors, of the metadata root that has been
+ 'held' for userspace read access. '-' indicates there is no
+ held root. This feature is not yet implemented so '-' is
+ always returned.
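+
+ For illustration, 'dmsetup status pool' for the pool created in the
+ cookbook might report something like the following (the numbers are
+ invented; dmsetup prefixes the target-specific fields with the segment
+ start, length and target name):
+
+   0 20971520 thin-pool 0 123/4161600 198/102400 -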
+
+iii) Messages
+
+ create_thin <dev id>
+
+ Create a new thinly-provisioned device.
+ <dev id> is an arbitrary unique 24-bit identifier chosen by
+ the caller.
+
+ create_snap <dev id> <origin id>
+
+ Create a new snapshot of another thinly-provisioned device.
+ <dev id> is an arbitrary unique 24-bit identifier chosen by
+ the caller.
+ <origin id> is the identifier of the thinly-provisioned device
+ of which the new device will be a snapshot.
+
+ delete <dev id>
+
+ Deletes a thin device. Irreversible.
+
+ trim <dev id> <new size in sectors>
+
+ Delete mappings from the end of a thin device. Irreversible.
+ You might want to use this if you're reducing the size of
+ your thinly-provisioned device. In many cases, due to the
+ sharing of blocks between devices, it is not possible to
+ determine in advance how much space 'trim' will release. (In
+ future a userspace tool might be able to perform this
+ calculation.)
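+
+ For example, to trim thin device 0 down to 1048576 sectors (an
+ illustrative size):
+
+   dmsetup message /dev/mapper/pool 0 "trim 0 1048576"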
+
+ set_transaction_id <current id> <new id>
+
+ Userland volume managers, such as LVM, need a way to
+ synchronise their external metadata with the internal metadata of the
+ pool target. The thin-pool target offers to store an
+ arbitrary 64-bit transaction id and return it on the target's
+ status line. To avoid races you must provide what you think
+ the current transaction id is when you change it with this
+ compare-and-swap message.
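+
+ For example, moving the stored transaction id from 0 to 1 (the values
+ are illustrative):
+
+   dmsetup message /dev/mapper/pool 0 "set_transaction_id 0 1"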
+
+'thin' target
+-------------
+
+i) Constructor
+
+ thin <pool dev> <dev id>
+
+ pool dev:
+ the thin-pool device, e.g. /dev/mapper/my_pool or 253:0
+
+ dev id:
+ the internal device identifier of the device to be
+ activated.
+
+The pool doesn't store any size against the thin devices. If you
+load a thin target that is smaller than you've been using previously,
+then you'll have no access to blocks mapped beyond the end. If you
+load a target that is bigger than before, then extra blocks will be
+provisioned as and when needed.
+
+If you wish to reduce the size of your thin device and potentially
+regain some space then send the 'trim' message to the pool.
+
+ii) Status
+
+ <nr mapped sectors> <highest mapped sector>
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index b1a921497043..faa4741df6d3 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -216,6 +216,8 @@ config DM_BUFIO
as a cache, holding recently-read blocks in memory and performing
delayed writes.
+source "drivers/md/persistent-data/Kconfig"
+
config DM_CRYPT
tristate "Crypt target support"
depends on BLK_DEV_DM
@@ -241,6 +243,32 @@ config DM_SNAPSHOT
---help---
Allow volume managers to take writable snapshots of a device.
+config DM_THIN_PROVISIONING
+ tristate "Thin provisioning target (EXPERIMENTAL)"
+ depends on BLK_DEV_DM && EXPERIMENTAL
+ select DM_PERSISTENT_DATA
+ ---help---
+ Provides thin provisioning and snapshots that share a data store.
+
+config DM_DEBUG_BLOCK_STACK_TRACING
+ boolean "Keep stack trace of thin provisioning block lock holders"
+ depends on STACKTRACE_SUPPORT && DM_THIN_PROVISIONING
+ select STACKTRACE
+ ---help---
+ Enable this for messages that may help debug problems with the
+ block manager locking used by thin provisioning.
+
+ If unsure, say N.
+
+config DM_DEBUG_SPACE_MAPS
+ boolean "Extra validation for thin provisioning space maps"
+ depends on DM_THIN_PROVISIONING
+ ---help---
+ Enable this for messages that may help debug problems with the
+ space maps used by thin provisioning.
+
+ If unsure, say N.
+
config DM_MIRROR
tristate "Mirror target"
depends on BLK_DEV_DM
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index 56661c4272f2..046860c7a166 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -10,6 +10,7 @@ dm-snapshot-y += dm-snap.o dm-exception-store.o dm-snap-transient.o \
dm-mirror-y += dm-raid1.o
dm-log-userspace-y \
+= dm-log-userspace-base.o dm-log-userspace-transfer.o
+dm-thin-pool-y += dm-thin.o dm-thin-metadata.o
md-mod-y += md.o bitmap.o
raid456-y += raid5.o
@@ -35,10 +36,12 @@ obj-$(CONFIG_DM_MULTIPATH) += dm-multipath.o dm-round-robin.o
obj-$(CONFIG_DM_MULTIPATH_QL) += dm-queue-length.o
obj-$(CONFIG_DM_MULTIPATH_ST) += dm-service-time.o
obj-$(CONFIG_DM_SNAPSHOT) += dm-snapshot.o
+obj-$(CONFIG_DM_PERSISTENT_DATA) += persistent-data/
obj-$(CONFIG_DM_MIRROR) += dm-mirror.o dm-log.o dm-region-hash.o
obj-$(CONFIG_DM_LOG_USERSPACE) += dm-log-userspace.o
obj-$(CONFIG_DM_ZERO) += dm-zero.o
obj-$(CONFIG_DM_RAID) += dm-raid.o
+obj-$(CONFIG_DM_THIN_PROVISIONING) += dm-thin-pool.o
ifeq ($(CONFIG_DM_UEVENT),y)
dm-mod-objs += dm-uevent.o
diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
new file mode 100644
index 000000000000..59c4f0446ffa
--- /dev/null
+++ b/drivers/md/dm-thin-metadata.c
@@ -0,0 +1,1391 @@
+/*
+ * Copyright (C) 2011 Red Hat, Inc.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-thin-metadata.h"
+#include "persistent-data/dm-btree.h"
+#include "persistent-data/dm-space-map.h"
+#include "persistent-data/dm-space-map-disk.h"
+#include "persistent-data/dm-transaction-manager.h"
+
+#include <linux/list.h>
+#include <linux/device-mapper.h>
+#include <linux/workqueue.h>
+
+/*--------------------------------------------------------------------------
+ * As far as the metadata goes, there is:
+ *
+ * - A superblock in block zero, taking up fewer than 512 bytes for
+ * atomic writes.
+ *
+ * - A space map managing the metadata blocks.
+ *
+ * - A space map managing the data blocks.
+ *
+ * - A btree mapping our internal thin dev ids onto struct disk_device_details.
+ *
+ * - A hierarchical btree, with 2 levels which effectively maps (thin
+ * dev id, virtual block) -> block_time. Block time is a 64-bit
+ * field holding the time in the low 24 bits, and block in the top 48
+ * bits.
+ *
+ * BTrees consist solely of btree_nodes, each of which fills a block. Some
+ * are internal nodes, and as such their values are __le64s pointing to
+ * other nodes. Leaf nodes can store data of any reasonable size (i.e. much
+ * smaller than the block size). The nodes consist of the header,
+ * followed by an array of keys, followed by an array of values. We have
+ * to binary search on the keys so they're all held together to help the
+ * cpu cache.
+ *
+ * Space maps have 2 btrees:
+ *
+ * - One maps a uint64_t onto a struct index_entry, which points to a
+ * bitmap block and has some details about how many free entries there
+ * are, etc.
+ *
+ * - The bitmap blocks have a header (for the checksum). Then the rest
+ * of the block is pairs of bits, with the following meaning:
+ *
+ * 0 - ref count is 0
+ * 1 - ref count is 1
+ * 2 - ref count is 2
+ * 3 - ref count is higher than 2
+ *
+ * - If the count is higher than 2 then the ref count is entered in a
+ * second btree that directly maps the block_address to a uint32_t ref
+ * count.
+ *
+ * The space map metadata variant doesn't have a bitmaps btree. Instead
+ * it has a single block's worth of index_entries. This avoids
+ * recursive issues with the bitmap btree needing to allocate space in
+ * order to insert. With a small data block size such as 64k the
+ * metadata can support data devices that are hundreds of terabytes.
+ *
+ * The space maps allocate space linearly from front to back. Space that
+ * is freed in a transaction is never recycled within that transaction.
+ * To try and avoid fragmenting _free_ space the allocator always goes
+ * back and fills in gaps.
+ *
+ * All metadata io is in THIN_METADATA_BLOCK_SIZE sized/aligned chunks
+ * from the block manager.
+ *--------------------------------------------------------------------------*/
+
+#define DM_MSG_PREFIX "thin metadata"
+
+#define THIN_SUPERBLOCK_MAGIC 27022010
+#define THIN_SUPERBLOCK_LOCATION 0
+#define THIN_VERSION 1
+#define THIN_METADATA_CACHE_SIZE 64
+#define SECTOR_TO_BLOCK_SHIFT 3
+
+/* This should be plenty */
+#define SPACE_MAP_ROOT_SIZE 128
+
+/*
+ * Little endian on-disk superblock and device details.
+ */
+struct thin_disk_superblock {
+ __le32 csum; /* Checksum of superblock except for this field. */
+ __le32 flags;
+ __le64 blocknr; /* This block number, dm_block_t. */
+
+ __u8 uuid[16];
+ __le64 magic;
+ __le32 version;
+ __le32 time;
+
+ __le64 trans_id;
+
+ /*
+ * Root held by userspace transactions.
+ */
+ __le64 held_root;
+
+ __u8 data_space_map_root[SPACE_MAP_ROOT_SIZE];
+ __u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
+
+ /*
+ * 2-level btree mapping (dev_id, (dev block, time)) -> data block
+ */
+ __le64 data_mapping_root;
+
+ /*
+ * Device detail root mapping dev_id -> device_details
+ */
+ __le64 device_details_root;
+
+ __le32 data_block_size; /* In 512-byte sectors. */
+
+ __le32 metadata_block_size; /* In 512-byte sectors. */
+ __le64 metadata_nr_blocks;
+
+ __le32 compat_flags;
+ __le32 compat_ro_flags;
+ __le32 incompat_flags;
+} __packed;
+
+struct disk_device_details {
+ __le64 mapped_blocks;
+ __le64 transaction_id; /* When created. */
+ __le32 creation_time;
+ __le32 snapshotted_time;
+} __packed;
+
+struct dm_pool_metadata {
+ struct hlist_node hash;
+
+ struct block_device *bdev;
+ struct dm_block_manager *bm;
+ struct dm_space_map *metadata_sm;
+ struct dm_space_map *data_sm;
+ struct dm_transaction_manager *tm;
+ struct dm_transaction_manager *nb_tm;
+
+ /*
+ * Two-level btree.
+ * First level holds thin_dev_t.
+ * Second level holds mappings.
+ */
+ struct dm_btree_info info;
+
+ /*
+ * Non-blocking version of the above.
+ */
+ struct dm_btree_info nb_info;
+
+ /*
+ * Just the top level for deleting whole devices.
+ */
+ struct dm_btree_info tl_info;
+
+ /*
+ * Just the bottom level for creating new devices.
+ */
+ struct dm_btree_info bl_info;
+
+ /*
+ * Describes the device details btree.
+ */
+ struct dm_btree_info details_info;
+
+ struct rw_semaphore root_lock;
+ uint32_t time;
+ int need_commit;
+ dm_block_t root;
+ dm_block_t details_root;
+ struct list_head thin_devices;
+ uint64_t trans_id;
+ unsigned long flags;
+ sector_t data_block_size;
+};
+
+struct dm_thin_device {
+ struct list_head list;
+ struct dm_pool_metadata *pmd;
+ dm_thin_id id;
+
+ int open_count;
+ int changed;
+ uint64_t mapped_blocks;
+ uint64_t transaction_id;
+ uint32_t creation_time;
+ uint32_t snapshotted_time;
+};
+
+/*----------------------------------------------------------------
+ * superblock validator
+ *--------------------------------------------------------------*/
+
+#define SUPERBLOCK_CSUM_XOR 160774
+
+static void sb_prepare_for_write(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct thin_disk_superblock *disk_super = dm_block_data(b);
+
+ disk_super->blocknr = cpu_to_le64(dm_block_location(b));
+ disk_super->csum = cpu_to_le32(dm_bm_checksum(&disk_super->flags,
+ block_size - sizeof(__le32),
+ SUPERBLOCK_CSUM_XOR));
+}
+
+static int sb_check(struct dm_block_validator *v,
+ struct dm_block *b,
+ size_t block_size)
+{
+ struct thin_disk_superblock *disk_super = dm_block_data(b);
+ __le32 csum_le;
+
+ if (dm_block_location(b) != le64_to_cpu(disk_super->blocknr)) {
+ DMERR("sb_check failed: blocknr %llu: "
+ "wanted %llu", le64_to_cpu(disk_super->blocknr),
+ (unsigned long long)dm_block_location(b));
+ return -ENOTBLK;
+ }
+
+ if (le64_to_cpu(disk_super->magic) != THIN_SUPERBLOCK_MAGIC) {
+ DMERR("sb_check failed: magic %llu: "
+ "wanted %llu", le64_to_cpu(disk_super->magic),
+ (unsigned long long)THIN_SUPERBLOCK_MAGIC);
+ return -EILSEQ;
+ }
+
+ csum_le = cpu_to_le32(dm_bm_checksum(&disk_super->flags,
+ block_size - sizeof(__le32),
+ SUPERBLOCK_CSUM_XOR));
+ if (csum_le != disk_super->csum) {
+ DMERR("sb_check failed: csum %u: wanted %u",
+ le32_to_cpu(csum_le), le32_to_cpu(disk_super->csum));
+ return -EILSEQ;
+ }
+
+ return 0;
+}
+
+static struct dm_block_validator sb_validator = {
+ .name = "superblock",
+ .prepare_for_write = sb_prepare_for_write,
+ .check = sb_check
+};
+
+/*----------------------------------------------------------------
+ * Methods for the btree value types
+ *--------------------------------------------------------------*/
+
+static uint64_t pack_block_time(dm_block_t b, uint32_t t)
+{
+ return (b << 24) | t;
+}
+
+static void unpack_block_time(uint64_t v, dm_block_t *b, uint32_t *t)
+{
+ *b = v >> 24;
+ *t = v & ((1 << 24) - 1);
+}
+
+static void data_block_inc(void *context, void *value_le)
+{
+ struct dm_space_map *sm = context;
+ __le64 v_le;
+ uint64_t b;
+ uint32_t t;
+
+ memcpy(&v_le, value_le, sizeof(v_le));
+ unpack_block_time(le64_to_cpu(v_le), &b, &t);
+ dm_sm_inc_block(sm, b);
+}
+
+static void data_block_dec(void *context, void *value_le)
+{
+ struct dm_space_map *sm = context;
+ __le64 v_le;
+ uint64_t b;
+ uint32_t t;
+
+ memcpy(&v_le, value_le, sizeof(v_le));
+ unpack_block_time(le64_to_cpu(v_le), &b, &t);
+ dm_sm_dec_block(sm, b);
+}
+
+static int data_block_equal(void *context, void *value1_le, void *value2_le)
+{
+ __le64 v1_le, v2_le;
+ uint64_t b1, b2;
+ uint32_t t;
+
+ memcpy(&v1_le, value1_le, sizeof(v1_le));
+ memcpy(&v2_le, value2_le, sizeof(v2_le));
+ unpack_block_time(le64_to_cpu(v1_le), &b1, &t);
+ unpack_block_time(le64_to_cpu(v2_le), &b2, &t);
+
+ return b1 == b2;
+}
+
+static void subtree_inc(void *context, void *value)
+{
+ struct dm_btree_info *info = context;
+ __le64 root_le;
+ uint64_t root;
+
+ memcpy(&root_le, value, sizeof(root_le));
+ root = le64_to_cpu(root_le);
+ dm_tm_inc(info->tm, root);
+}
+
+static void subtree_dec(void *context, void *value)
+{
+ struct dm_btree_info *info = context;
+ __le64 root_le;
+ uint64_t root;
+
+ memcpy(&root_le, value, sizeof(root_le));
+ root = le64_to_cpu(root_le);
+ if (dm_btree_del(info, root))
+ DMERR("btree delete failed\n");
+}
+
+static int subtree_equal(void *context, void *value1_le, void *value2_le)
+{
+ __le64 v1_le, v2_le;
+ memcpy(&v1_le, value1_le, sizeof(v1_le));
+ memcpy(&v2_le, value2_le, sizeof(v2_le));
+
+ return v1_le == v2_le;
+}
+
+/*----------------------------------------------------------------*/
+
+static int superblock_all_zeroes(struct dm_block_manager *bm, int *result)
+{
+ int r;
+ unsigned i;
+ struct dm_block *b;
+ __le64 *data_le, zero = cpu_to_le64(0);
+ unsigned block_size = dm_bm_block_size(bm) / sizeof(__le64);
+
+ /*
+ * We can't use a validator here - it may be all zeroes.
+ */
+ r = dm_bm_read_lock(bm, THIN_SUPERBLOCK_LOCATION, NULL, &b);
+ if (r)
+ return r;
+
+ data_le = dm_block_data(b);
+ *result = 1;
+ for (i = 0; i < block_size; i++) {
+ if (data_le[i] != zero) {
+ *result = 0;
+ break;
+ }
+ }
+
+ return dm_bm_unlock(b);
+}
+
+static int init_pmd(struct dm_pool_metadata *pmd,
+ struct dm_block_manager *bm,
+ dm_block_t nr_blocks, int create)
+{
+ int r;
+ struct dm_space_map *sm, *data_sm;
+ struct dm_transaction_manager *tm;
+ struct dm_block *sblock;
+
+ if (create) {
+ r = dm_tm_create_with_sm(bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, &tm, &sm, &sblock);
+ if (r < 0) {
+ DMERR("tm_create_with_sm failed");
+ return r;
+ }
+
+ data_sm = dm_sm_disk_create(tm, nr_blocks);
+ if (IS_ERR(data_sm)) {
+ DMERR("sm_disk_create failed");
+ r = PTR_ERR(data_sm);
+ goto bad;
+ }
+ } else {
+ struct thin_disk_superblock *disk_super = NULL;
+ size_t space_map_root_offset =
+ offsetof(struct thin_disk_superblock, metadata_space_map_root);
+
+ r = dm_tm_open_with_sm(bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, space_map_root_offset,
+ SPACE_MAP_ROOT_SIZE, &tm, &sm, &sblock);
+ if (r < 0) {
+ DMERR("tm_open_with_sm failed");
+ return r;
+ }
+
+ disk_super = dm_block_data(sblock);
+ data_sm = dm_sm_disk_open(tm, disk_super->data_space_map_root,
+ sizeof(disk_super->data_space_map_root));
+ if (IS_ERR(data_sm)) {
+ DMERR("sm_disk_open failed");
+ r = PTR_ERR(data_sm);
+ goto bad;
+ }
+ }
+
+
+ r = dm_tm_unlock(tm, sblock);
+ if (r < 0) {
+ DMERR("couldn't unlock superblock");
+ goto bad_data_sm;
+ }
+
+ pmd->bm = bm;
+ pmd->metadata_sm = sm;
+ pmd->data_sm = data_sm;
+ pmd->tm = tm;
+ pmd->nb_tm = dm_tm_create_non_blocking_clone(tm);
+ if (!pmd->nb_tm) {
+ DMERR("could not create clone tm");
+ r = -ENOMEM;
+ goto bad_data_sm;
+ }
+
+ pmd->info.tm = tm;
+ pmd->info.levels = 2;
+ pmd->info.value_type.context = pmd->data_sm;
+ pmd->info.value_type.size = sizeof(__le64);
+ pmd->info.value_type.inc = data_block_inc;
+ pmd->info.value_type.dec = data_block_dec;
+ pmd->info.value_type.equal = data_block_equal;
+
+ memcpy(&pmd->nb_info, &pmd->info, sizeof(pmd->nb_info));
+ pmd->nb_info.tm = pmd->nb_tm;
+
+ pmd->tl_info.tm = tm;
+ pmd->tl_info.levels = 1;
+ pmd->tl_info.value_type.context = &pmd->info;
+ pmd->tl_info.value_type.size = sizeof(__le64);
+ pmd->tl_info.value_type.inc = subtree_inc;
+ pmd->tl_info.value_type.dec = subtree_dec;
+ pmd->tl_info.value_type.equal = subtree_equal;
+
+ pmd->bl_info.tm = tm;
+ pmd->bl_info.levels = 1;
+ pmd->bl_info.value_type.context = pmd->data_sm;
+ pmd->bl_info.value_type.size = sizeof(__le64);
+ pmd->bl_info.value_type.inc = data_block_inc;
+ pmd->bl_info.value_type.dec = data_block_dec;
+ pmd->bl_info.value_type.equal = data_block_equal;
+
+ pmd->details_info.tm = tm;
+ pmd->details_info.levels = 1;
+ pmd->details_info.value_type.context = NULL;
+ pmd->details_info.value_type.size = sizeof(struct disk_device_details);
+ pmd->details_info.value_type.inc = NULL;
+ pmd->details_info.value_type.dec = NULL;
+ pmd->details_info.value_type.equal = NULL;
+
+ pmd->root = 0;
+
+ init_rwsem(&pmd->root_lock);
+ pmd->time = 0;
+ pmd->need_commit = 0;
+ pmd->details_root = 0;
+ pmd->trans_id = 0;
+ pmd->flags = 0;
+ INIT_LIST_HEAD(&pmd->thin_devices);
+
+ return 0;
+
+bad_data_sm:
+ dm_sm_destroy(data_sm);
+bad:
+ dm_tm_destroy(tm);
+ dm_sm_destroy(sm);
+
+ return r;
+}
+
+static int __begin_transaction(struct dm_pool_metadata *pmd)
+{
+ int r;
+ u32 features;
+ struct thin_disk_superblock *disk_super;
+ struct dm_block *sblock;
+
+ /*
+ * __maybe_commit_transaction() resets these
+ */
+ WARN_ON(pmd->need_commit);
+
+ /*
+ * We re-read the superblock every time. Shouldn't need to do this
+ * really.
+ */
+ r = dm_bm_read_lock(pmd->bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, &sblock);
+ if (r)
+ return r;
+
+ disk_super = dm_block_data(sblock);
+ pmd->time = le32_to_cpu(disk_super->time);
+ pmd->root = le64_to_cpu(disk_super->data_mapping_root);
+ pmd->details_root = le64_to_cpu(disk_super->device_details_root);
+ pmd->trans_id = le64_to_cpu(disk_super->trans_id);
+ pmd->flags = le32_to_cpu(disk_super->flags);
+ pmd->data_block_size = le32_to_cpu(disk_super->data_block_size);
+
+ features = le32_to_cpu(disk_super->incompat_flags) & ~THIN_FEATURE_INCOMPAT_SUPP;
+ if (features) {
+ DMERR("could not access metadata due to "
+ "unsupported optional features (%lx).",
+ (unsigned long)features);
+ r = -EINVAL;
+ goto out;
+ }
+
+ /*
+ * Check for read-only metadata to skip the following RDWR checks.
+ */
+ if (get_disk_ro(pmd->bdev->bd_disk))
+ goto out;
+
+ features = le32_to_cpu(disk_super->compat_ro_flags) & ~THIN_FEATURE_COMPAT_RO_SUPP;
+ if (features) {
+ DMERR("could not access metadata RDWR due to "
+ "unsupported optional features (%lx).",
+ (unsigned long)features);
+ r = -EINVAL;
+ }
+
+out:
+ dm_bm_unlock(sblock);
+ return r;
+}
+
+static int __write_changed_details(struct dm_pool_metadata *pmd)
+{
+ int r;
+ struct dm_thin_device *td, *tmp;
+ struct disk_device_details details;
+ uint64_t key;
+
+ list_for_each_entry_safe(td, tmp, &pmd->thin_devices, list) {
+ if (!td->changed)
+ continue;
+
+ key = td->id;
+
+ details.mapped_blocks = cpu_to_le64(td->mapped_blocks);
+ details.transaction_id = cpu_to_le64(td->transaction_id);
+ details.creation_time = cpu_to_le32(td->creation_time);
+ details.snapshotted_time = cpu_to_le32(td->snapshotted_time);
+ __dm_bless_for_disk(&details);
+
+ r = dm_btree_insert(&pmd->details_info, pmd->details_root,
+ &key, &details, &pmd->details_root);
+ if (r)
+ return r;
+
+ if (td->open_count)
+ td->changed = 0;
+ else {
+ list_del(&td->list);
+ kfree(td);
+ }
+
+ pmd->need_commit = 1;
+ }
+
+ return 0;
+}
+
+static int __commit_transaction(struct dm_pool_metadata *pmd)
+{
+ /*
+ * FIXME: Associated pool should be made read-only on failure.
+ */
+ int r;
+ size_t metadata_len, data_len;
+ struct thin_disk_superblock *disk_super;
+ struct dm_block *sblock;
+
+ /*
+ * We need to know if the thin_disk_superblock exceeds a 512-byte sector.
+ */
+ BUILD_BUG_ON(sizeof(struct thin_disk_superblock) > 512);
+
+ r = __write_changed_details(pmd);
+ if (r < 0)
+ goto out;
+
+ if (!pmd->need_commit)
+ goto out;
+
+ r = dm_sm_commit(pmd->data_sm);
+ if (r < 0)
+ goto out;
+
+ r = dm_tm_pre_commit(pmd->tm);
+ if (r < 0)
+ goto out;
+
+ r = dm_sm_root_size(pmd->metadata_sm, &metadata_len);
+ if (r < 0)
+ goto out;
+
+ r = dm_sm_root_size(pmd->metadata_sm, &data_len);
+ if (r < 0)
+ goto out;
+
+ r = dm_bm_write_lock(pmd->bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, &sblock);
+ if (r)
+ goto out;
+
+ disk_super = dm_block_data(sblock);
+ disk_super->time = cpu_to_le32(pmd->time);
+ disk_super->data_mapping_root = cpu_to_le64(pmd->root);
+ disk_super->device_details_root = cpu_to_le64(pmd->details_root);
+ disk_super->trans_id = cpu_to_le64(pmd->trans_id);
+ disk_super->flags = cpu_to_le32(pmd->flags);
+
+ r = dm_sm_copy_root(pmd->metadata_sm, &disk_super->metadata_space_map_root,
+ metadata_len);
+ if (r < 0)
+ goto out_locked;
+
+ r = dm_sm_copy_root(pmd->data_sm, &disk_super->data_space_map_root,
+ data_len);
+ if (r < 0)
+ goto out_locked;
+
+ r = dm_tm_commit(pmd->tm, sblock);
+ if (!r)
+ pmd->need_commit = 0;
+
+out:
+ return r;
+
+out_locked:
+ dm_bm_unlock(sblock);
+ return r;
+}
+
+struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
+ sector_t data_block_size)
+{
+ int r;
+ struct thin_disk_superblock *disk_super;
+ struct dm_pool_metadata *pmd;
+ sector_t bdev_size = i_size_read(bdev->bd_inode) >> SECTOR_SHIFT;
+ struct dm_block_manager *bm;
+ int create;
+ struct dm_block *sblock;
+
+ pmd = kmalloc(sizeof(*pmd), GFP_KERNEL);
+ if (!pmd) {
+ DMERR("could not allocate metadata struct");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ /*
+ * Max held locks:
+ * 3 for btree insert +
+ * 2 for btree lookup used within space map
+ */
+ bm = dm_block_manager_create(bdev, THIN_METADATA_BLOCK_SIZE,
+ THIN_METADATA_CACHE_SIZE, 5);
+ if (!bm) {
+ DMERR("could not create block manager");
+ kfree(pmd);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ r = superblock_all_zeroes(bm, &create);
+ if (r) {
+ dm_block_manager_destroy(bm);
+ kfree(pmd);
+ return ERR_PTR(r);
+ }
+
+
+ r = init_pmd(pmd, bm, 0, create);
+ if (r) {
+ dm_block_manager_destroy(bm);
+ kfree(pmd);
+ return ERR_PTR(r);
+ }
+ pmd->bdev = bdev;
+
+ if (!create) {
+ r = __begin_transaction(pmd);
+ if (r < 0)
+ goto bad;
+ return pmd;
+ }
+
+ /*
+ * Create.
+ */
+ r = dm_bm_write_lock(pmd->bm, THIN_SUPERBLOCK_LOCATION,
+ &sb_validator, &sblock);
+ if (r)
+ goto bad;
+
+ disk_super = dm_block_data(sblock);
+ disk_super->magic = cpu_to_le64(THIN_SUPERBLOCK_MAGIC);
+ disk_super->version = cpu_to_le32(THIN_VERSION);
+ disk_super->time = 0;
+ disk_super->metadata_block_size = cpu_to_le32(THIN_METADATA_BLOCK_SIZE >> SECTOR_SHIFT);
+ disk_super->metadata_nr_blocks = cpu_to_le64(bdev_size >> SECTOR_TO_BLOCK_SHIFT);
+ disk_super->data_block_size = cpu_to_le32(data_block_size);
+
+ r = dm_bm_unlock(sblock);
+ if (r < 0)
+ goto bad;
+
+ r = dm_btree_empty(&pmd->info, &pmd->root);
+ if (r < 0)
+ goto bad;
+
+ r = dm_btree_empty(&pmd->details_info, &pmd->details_root);
+ if (r < 0) {
+ DMERR("couldn't create devices root");
+ goto bad;
+ }
+
+ pmd->flags = 0;
+ pmd->need_commit = 1;
+ r = dm_pool_commit_metadata(pmd);
+ if (r < 0) {
+ DMERR("%s: dm_pool_commit_metadata() failed, error = %d",
+ __func__, r);
+ goto bad;
+ }
+
+ return pmd;
+
+bad:
+ if (dm_pool_metadata_close(pmd) < 0)
+ DMWARN("%s: dm_pool_metadata_close() failed.", __func__);
+ return ERR_PTR(r);
+}
+
+int dm_pool_metadata_close(struct dm_pool_metadata *pmd)
+{
+ int r;
+ unsigned open_devices = 0;
+ struct dm_thin_device *td, *tmp;