History log of /drivers/md/raid5.h
Revision Date Author Comments
9a3e1101b827a59ac9036a672f5fa8d5279d0fe2 23-Dec-2011 NeilBrown <neilb@suse.de> md/raid5: detect and handle replacements during recovery.

During recovery we want to write to the replacement but not
the original. So we have two new flags
- R5_NeedReplace if this stripe has a replacement that needs to
be written at some stage
- R5_WantReplace if NeedReplace, and the data is available, and
a 'sync' has been requested on this stripe.

We also distinguish between 'sync and replace' which need to read
all other devices, and 'replace' which only needs to read the
devices being replaced.

Note that during resync we always write to any replacement device.
It might not need to be written to, but as we don't read to compare,
we have to write to be sure.

Signed-off-by: NeilBrown <neilb@suse.de>
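
A minimal C sketch of the flag relationship described above; the flag
names follow the commit text, but the bit values and the helper are
illustrative, not the kernel's raid5.h:

#include <stdbool.h>
#include <stdio.h>

enum r5dev_flags {
	R5_NeedReplace = 1 << 0,  /* stripe has a replacement to write at some stage */
	R5_WantReplace = 1 << 1,  /* NeedReplace, data available, 'sync' requested */
};

static bool want_replace(unsigned long flags, bool data_available,
			 bool sync_requested)
{
	/* WantReplace only makes sense once NeedReplace is set */
	return (flags & R5_NeedReplace) && data_available && sync_requested;
}

int main(void)
{
	unsigned long flags = R5_NeedReplace;

	if (want_replace(flags, true, true))
		flags |= R5_WantReplace;  /* schedule the write to the replacement */
	printf("flags = %#lx\n", flags);
	return 0;
}
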
977df36255ab0ea78b048cbc9055300c586dcc91 23-Dec-2011 NeilBrown <neilb@suse.de> md/raid5: writes should get directed to replacement as well as original.

When writing, we need to submit two writes, one to the original, and
one to the replacement - if there is a replacement.

If the write to the replacement results in a write error, we just fail
the device. We only try to record write errors to the original.

When writing for recovery, we shouldn't write to the original. This
will be addressed in a subsequent patch that generally addresses
recovery.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
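
A userspace model of the write rule just described: every write goes
to the original and, when present, to the replacement; only errors on
the original are recorded as bad blocks, while a replacement error
simply fails that device. All names here are illustrative stand-ins:

#include <stdbool.h>
#include <stdio.h>

struct dev {
	const char *name;
	bool faulty;
};

/* pretend I/O: returns false on a write error */
static bool submit_write(struct dev *d)
{
	printf("write -> %s\n", d->name);
	return true;
}

static void record_bad_block(struct dev *d)
{
	printf("bad block recorded on %s\n", d->name);
}

static void stripe_write(struct dev *orig, struct dev *repl)
{
	if (!submit_write(orig))
		record_bad_block(orig);   /* only the original gets a bad-block record */
	if (repl && !submit_write(repl))
		repl->faulty = true;      /* a replacement error just fails the device */
}

int main(void)
{
	struct dev orig = { "original", false };
	struct dev repl = { "replacement", false };

	stripe_write(&orig, &repl);
	return 0;
}
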
ede7ee8b4d007f308aa033be676b1a048f99e9db 23-Dec-2011 NeilBrown <neilb@suse.de> md/raid5: raid5.h cleanup

Remove some #defines that are no longer used, replace some others
with an enum, and remove an unused field.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
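
A generic illustration of this style of cleanup (the identifiers are
invented, not the actual raid5.h symbols): a group of related #defines
becomes one enum, which gives the constants a type and keeps them
together:

/* before:
 *   #define STRIPE_FLAG_A 1
 *   #define STRIPE_FLAG_B 2
 */
enum stripe_example_flags {
	STRIPE_FLAG_A = 1,
	STRIPE_FLAG_B = 2,
};
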
671488cc25f7c194c7c7a9f258bab1df17a6ff69 23-Dec-2011 NeilBrown <neilb@suse.de> md/raid5: allow each slot to have an extra replacement device

Just enhance data structures to record a second device per slot to be
used as a 'replacement' device, replacing the original.
We also have a second bio in each slot in each stripe_head. This will
only be used when writing to the array - we need to write to both the
original and the replacement at the same time, so will need two bios.

For now, only try using the replacement drive for aligned-reads.
In this case, we prefer the replacement if it has been recovered far
enough, otherwise use the original.

This includes a small enhancement. Previously we would only do
aligned reads if the target device was fully recovered. Now we also
do them if it has recovered far enough.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
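
The data-structure shape the commit describes might look roughly like
this; the type and field names are illustrative stand-ins, not the
kernel definitions:

struct bio;      /* opaque stand-in */
struct md_rdev;  /* opaque stand-in */

struct slot_sketch {
	struct md_rdev *rdev;         /* the original device for this slot */
	struct md_rdev *replacement;  /* second device per slot, may be NULL */
	struct bio *req;              /* write to the original */
	struct bio *rreq;             /* simultaneous write to the replacement */
};
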
d1688a6d5515f1900af76a963b4bb6d9a6554cfa 11-Oct-2011 NeilBrown <neilb@suse.de> md/raid5: typedef removal: raid5_conf_t -> struct r5conf

Signed-off-by: NeilBrown <neilb@suse.de>
2b8bf3451d1e3133ebc3998721d14013a6c27114 11-Oct-2011 NeilBrown <neilb@suse.de> md: remove typedefs: mdk_thread_t -> struct md_thread

Signed-off-by: NeilBrown <neilb@suse.de>
fd01b88c75a718020ff77e7f560d33835e9b58de 11-Oct-2011 NeilBrown <neilb@suse.de> md: remove typedefs: mddev_t -> struct mddev

Having both mddev_t and 'struct mddev_s' is ugly and not preferred.

Signed-off-by: NeilBrown <neilb@suse.de>
3cb03002000f133f9f97269edefd73611eafc873 11-Oct-2011 NeilBrown <neilb@suse.de> md: removing typedefs: mdk_rdev_t -> struct md_rdev

The typedefs are just annoying. 'mdk' probably refers to 'md_k.h'
which used to be an include file that defined this thing.

Signed-off-by: NeilBrown <neilb@suse.de>
b84db560ead5417b5594349512baf8837959df4f 28-Jul-2011 NeilBrown <neilb@suse.de> md/raid5: Clear bad blocks on successful write.

On a successful write to a known bad block, flag the sh
so that raid5d can remove the known bad block from the list.

Signed-off-by: NeilBrown <neilb@suse.de>
bc2607f393bd4fb844c1886a02af929ca0372056 28-Jul-2011 NeilBrown <neilb@suse.de> md/raid5: write errors should be recorded as bad blocks if possible.

When a write error is detected, don't mark the device as failed
immediately but rather record the fact for handle_stripe to deal with.

Handle_stripe then attempts to record a bad block. Only if that fails
does the device get marked as faulty.

Signed-off-by: NeilBrown <neilb@suse.de>
7f0da59bdc2f65795a57009d78f7753d3aea1de3 28-Jul-2011 NeilBrown <neilb@suse.de> md/raid5: use bad-block log to improve handling of uncorrectable read errors.

If we get an uncorrectable read error - record a bad block rather than
failing the device.
And if these errors (which may be due to known bad blocks) cause
recovery to be impossible, record a bad block on the recovering
devices, or abort the recovery.

As we might abort a recovery without failing a device we need to teach
RAID5 about recovery_disabled handling.

Signed-off-by: NeilBrown <neilb@suse.de>
c5709ef6a094c72b56355590bfa55cc107e98376 26-Jul-2011 NeilBrown <neilb@suse.de> md/raid5: add some more fields to stripe_head_state

Adding these three fields will allow more common code to be moved
to handle_stripe().

struct field rearrangement by Namhyung Kim.

Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
f2b3b44deee1524ca4f006048e0569f47eefdb74 26-Jul-2011 NeilBrown <neilb@suse.de> md/raid5: unify stripe_head_state and r6_state

'struct stripe_head_state' stores state about the 'current' stripe
that is passed around while handling the stripe.
For RAID6 there is an extension structure: r6_state, which is also
passed around.
There is no value in keeping these separate, so move the fields from
the latter into the former.

This means that all code now needs to treat s->failed_num as a small
array, but this is a small cost.

Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
c4c1663be46b2ab94e59d3e0c583a8f6b188ff0c 26-Jul-2011 NeilBrown <neilb@suse.de> md/raid5: replace sh->lock with an 'active' flag.

sh->lock is now mainly used to ensure that two threads aren't running
in the locked part of handle_stripe[56] at the same time.

That can more neatly be achieved with an 'active' flag which we set
while running handle_stripe. If we find the flag is set, we simply
requeue the stripe for later by setting STRIPE_HANDLE.

For safety we take ->device_lock while examining the state of the
stripe and creating a summary in 'stripe_head_state / r6_state'.
This possibly isn't needed, but as shared fields like ->toread and
->towrite are checked, it is safer for now at least.

We leave the label after the old 'unlock' called "unlock" because it
will disappear in a few patches, so renaming seems pointless.

This leaves the stripe 'locked' for longer as we clear STRIPE_ACTIVE
later, but that is not a problem.

Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
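
A userspace model of the pattern, assuming C11 atomics in place of the
kernel's bitops; STRIPE_ACTIVE and STRIPE_HANDLE are played here by an
atomic flag and a boolean:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct stripe {
	atomic_flag active;   /* plays the role of STRIPE_ACTIVE */
	bool handle_again;    /* plays the role of STRIPE_HANDLE */
};

static void handle_stripe(struct stripe *sh)
{
	if (atomic_flag_test_and_set(&sh->active)) {
		sh->handle_again = true;   /* someone else is in here: requeue */
		return;
	}
	puts("handling stripe");           /* ... summarise state, issue ops ... */
	atomic_flag_clear(&sh->active);    /* cleared late, as the commit notes */
}

int main(void)
{
	struct stripe sh = { ATOMIC_FLAG_INIT, false };

	handle_stripe(&sh);
	return 0;
}
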
83206d66b65118d995c38746f21edc2bb8564b49 26-Jul-2011 NeilBrown <neilb@suse.de> md/raid5: Remove use of sh->lock in sync_request

This is the start of a series of patches to remove sh->lock.

sync_request takes sh->lock before setting STRIPE_SYNCING to ensure
there is no race with testing it in handle_stripe[56].

Instead, use a new flag STRIPE_SYNC_REQUESTED and test it early
in handle_stripe[56] (after getting the same lock) and perform the
same set/clear operations if it was set.

Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
482c083492ddaa32ef5864bae3d143dc8bcdf7d1 18-Apr-2011 NeilBrown <neilb@suse.de> md - remove old plugging code.

md has some plugging infrastructure for RAID5 to use because the
normal plugging infrastructure required a 'request_queue', and when
called from dm, RAID5 doesn't have one of those available.

This relied on the ->unplug_fn callback which doesn't exist any more.

So remove all of that code, both in md and raid5. Subsequent patches
will restore the plugging functionality.

Signed-off-by: NeilBrown <neilb@suse.de>
7eaceaccab5f40bbfda044629a6298616aeaed50 10-Mar-2011 Jens Axboe <jaxboe@fusionio.com> block: remove per-queue plugging

Code has been converted over to the new explicit on-stack plugging,
and delay users have been converted to use the new API for that.
So let's kill off the old plugging along with aops->sync_page().

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
e9c7469bb4f502dafc092166201bea1ad5fc0fbf 03-Sep-2010 Tejun Heo <tj@kernel.org> md: implement REQ_FLUSH/FUA support

This patch converts md to support REQ_FLUSH/FUA instead of the now
deprecated REQ_HARDBARRIER. In the core part (md.c), the following
changes are notable.

* Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA don't interfere with
processing of other requests and thus there is no reason to mark the
queue congested while FLUSH/FUA is in progress.

* REQ_FLUSH/FUA failures are final and their users don't need retry
logic. Retry logic is removed.

* Preflush needs to be issued to all member devices but FUA writes can
be handled the same way as other writes - their processing can be
deferred to request_queue of member devices. md_barrier_request()
is renamed to md_flush_request() and simplified accordingly.

For linear, raid0 and multipath, the core changes are enough. raid1,
5 and 10 need the following conversions.

* raid1: Handling of FLUSH/FUA bio's can simply be deferred to
request_queues of member devices. Barrier related logic removed.

* raid5: Queue draining logic dropped. FUA bit is propagated through
biodrain and stripe reconstruction such that all the updated parts
of the stripe are written out with FUA writes if any of the dirtying
writes was FUA. preread_active_stripes handling in make_request()
is updated as suggested by Neil Brown.

* raid10: FUA bit needs to be propagated to write clones.

linear, raid0, 1, 5 and 10 tested.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Neil Brown <neilb@suse.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
9f7c2220017771253d7d10b3cc017cb79eeac0fb 25-Jul-2010 NeilBrown <neilb@suse.de> md/raid5: export raid5 unplugging interface.

Also remove remaining accesses to ->queue and ->gendisk when ->queue
is NULL (As it is in a DM target).

Signed-off-by: NeilBrown <neilb@suse.de>
2ac8740151b082f045e58010eb92560c3a23a0e9 01-Jun-2010 NeilBrown <neilb@suse.de> md/raid5: add simple plugging infrastructure.

md/raid5 uses the plugging infrastructure provided by the block layer
and 'struct request_queue'. However when we plug raid5 under dm there
is no request queue so we cannot use that.

So create a similar infrastructure that is much lighter weight and use
it for raid5.

Signed-off-by: NeilBrown <neilb@suse.de>
11d8a6e3719519fbc0e2c9d61b6fa931b84bf813 26-Jul-2010 NeilBrown <neilb@suse.de> md/raid5: export is_congested test

The dm module will need this for dm-raid45.

Also only access ->queue->backing_dev_info->congested_fn
if ->queue actually exists. It won't in a dm target.

Signed-off-by: NeilBrown <neilb@suse.de>
f4be6b43f1ac60dff00ef0923ee43b0e08872947 01-Jun-2010 NeilBrown <neilb@suse.de> md/raid5: ensure we create a unique name for kmem_cache when mddev has no gendisk

We will shortly allow md devices with no gendisk (they are attached to
a dm-target instead). That will cause mdname() to return 'mdX'.
There is one place where mdname really needs to be unique: when
creating the name for a slab cache.
So in that case, if there is no gendisk, use the address of the mddev
formatted in hex to provide a unique name.

Signed-off-by: NeilBrown <neilb@suse.de>
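
A userspace sketch of that naming rule; the function name and format
strings are assumptions, not the raid5.c code:

#include <stdio.h>

static void cache_name(char *buf, size_t len, const char *mdname,
		       const void *mddev)
{
	if (mdname)
		snprintf(buf, len, "raid6-%s", mdname);
	else
		snprintf(buf, len, "raid6-%p", mddev);  /* pointer in hex: unique */
}

int main(void)
{
	char buf[64];
	int anchor;

	cache_name(buf, sizeof(buf), NULL, &anchor);
	printf("%s\n", buf);
	return 0;
}
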
c41d4ac40df0d01bf9c383ff28f194d1df2d4fd9 01-Jun-2010 NeilBrown <neilb@suse.de> md/raid5: factor out code for changing size of stripe cache.

Separate the actual 'change' code from the sysfs interface
so that it can eventually be called internally.

Signed-off-by: NeilBrown <neilb@suse.de>
a29d8b8e2d811a24bbe49215a0f0c536b72ebc18 02-Feb-2010 Tejun Heo <tj@kernel.org> percpu: add __percpu sparse annotations to what's left

Add __percpu sparse annotations to places which didn't make it in one
of the previous patches. All conversions are trivial.

These annotations are to make sparse consider percpu variables to be
in a different address space and warn if accessed without going
through percpu accessors. This patch doesn't affect normal builds.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Neil Brown <neilb@suse.de>
e4424fee1815f996dccd36be44d68ca160ec3e1b 16-Oct-2009 NeilBrown <neilb@suse.de> md: fix problems with RAID6 calculations for DDF.

Signed-off-by: NeilBrown <neilb@suse.de>
417b8d4ac868cf58d6c68f52d72f7648413e0edc 16-Oct-2009 Dan Williams <dan.j.williams@intel.com> md/raid456: downlevel multicore operations to raid_run_ops

The percpu conversion allowed a straightforward handoff of stripe
processing to the async subsytem that initially showed some modest gains
(+4%). However, this model is too simplistic and leads to stripes
bouncing between raid5d and the async thread pool for every invocation
of handle_stripe(). As reported by Holger this can fall into a
pathological situation severely impacting throughput (6x performance
loss).

By downleveling the parallelism to raid_run_ops the pathological
stripe_head bouncing is eliminated. This version still exhibits an
average 11% throughput loss for:

mdadm --create /dev/md0 /dev/sd[b-q] -n 16 -l 6
echo 1024 > /sys/block/md0/md/stripe_cache_size
dd if=/dev/zero of=/dev/md0 bs=1024k count=2048

...but the results are at least stable and can be used as a base for
further multicore experimentation.

Reported-by: Holger Kiehl <Holger.Kiehl@dwd.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
ac6b53b6e6acab27e4f3e2383f9ac1f0d7c6200b 14-Jul-2009 Dan Williams <dan.j.williams@intel.com> md/raid6: asynchronous raid6 operations

[ Based on an original patch by Yuri Tikhonov ]

The raid_run_ops routine uses the asynchronous offload API and
the stripe_operations member of a stripe_head to carry out xor+pq+copy
operations asynchronously, outside the lock.

The operations performed by RAID-6 are the same as in the RAID-5 case,
except that STRIPE_OP_PREXOR is not supported. All the others
are supported:
STRIPE_OP_BIOFILL
- copy data into request buffers to satisfy a read request
STRIPE_OP_COMPUTE_BLK
- generate missing blocks (1 or 2) in the cache from the other blocks
STRIPE_OP_BIODRAIN
- copy data out of request buffers to satisfy a write request
STRIPE_OP_RECONSTRUCT
- recalculate parity for new data that has entered the cache
STRIPE_OP_CHECK
- verify that the parity is correct

The flow is the same as in the RAID-5 case, and reuses some routines, namely:
1/ ops_complete_postxor (renamed to ops_complete_reconstruct)
2/ ops_complete_compute (updated to set up to 2 targets uptodate)
3/ ops_run_check (renamed to ops_run_check_p for xor parity checks)

[neilb@suse.de: fixes to get it to pass mdadm regression suite]
Reviewed-by: Andre Noll <maan@systemlinux.org>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
ad283ea4a3ce82cda2efe33163748a397b31b1eb 30-Aug-2009 Dan Williams <dan.j.williams@intel.com> async_tx: add sum check flags

Replace the flat zero_sum_result with a collection of flags to contain
the P (xor) zero-sum result, and the soon-to-be-utilized Q (RAID-6
Reed-Solomon syndrome) zero-sum result. Use the SUM_CHECK_ namespace instead
of DMA_ since these flags will be used on non-dma-zero-sum enabled
platforms.

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
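
The flag collection might be sketched like this; the semantics follow
the commit text, while the exact bit values are invented here:

enum sum_check_flags {
	SUM_CHECK_P_RESULT = 1 << 0,  /* xor parity mismatch detected */
	SUM_CHECK_Q_RESULT = 1 << 1,  /* raid6 syndrome mismatch detected */
};
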
d6f38f31f3ad4b0dd33fe970988f14e7c65ef702 14-Jul-2009 Dan Williams <dan.j.williams@intel.com> md/raid5,6: add percpu scribble region for buffer lists

Use percpu memory rather than stack for storing the buffer lists used in
parity calculations. Include space for dma address conversions and pass
that to async_tx via the async_submit_ctl.scribble pointer.

[ Impact: move memory pressure from stack to heap ]

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
36d1c6476be51101778882897b315bd928c8c7b5 14-Jul-2009 Dan Williams <dan.j.williams@intel.com> md/raid6: move the spare page to a percpu allocation

In preparation for asynchronous handling of raid6 operations move the
spare page to a percpu allocation to allow multiple simultaneous
synchronous raid6 recovery operations.

Make this allocation cpu hotplug aware to maximize allocation
efficiency.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
09c9e5fa1b93ad5b81c9dcf8ce3a5b9ae2ac31e4 18-Jun-2009 Andre Noll <maan@systemlinux.org> md: convert conf->chunk_size and conf->prev_chunk to sectors.

This kills some more shifts.

Signed-off-by: Andre Noll <maan@systemlinux.org>
Signed-off-by: NeilBrown <neilb@suse.de>
070ec55d07157a3041f92654135c3c6e2eaaf901 16-Jun-2009 NeilBrown <neilb@suse.de> md: remove mddev_to_conf "helper" macro

Having a macro just to cast a void* isn't really helpful.
I would much rather see that we are simply de-referencing ->private,
than have to know what the macro does.

So open code the macro everywhere and remove the pointless cast.

Signed-off-by: NeilBrown <neilb@suse.de>
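
A before/after illustration with stand-in types (the real macro
operated on raid5_conf_t at the time):

struct mddev { void *private; };
struct r5conf { int level; };

/* before: #define mddev_to_conf(m) ((raid5_conf_t *)(m)->private) */

/* after: just dereference ->private at each point of use */
struct r5conf *get_conf(struct mddev *mddev)
{
	return mddev->private;  /* void * converts implicitly in C */
}
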
c8f517c444e4f9f55b5b5ca202b8404691a35805 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid5 revise rules for when to update metadata during reshape

We currently update the metadata :
1/ every 3 megabytes
2/ When the place we will write new-layout data to is recorded in
the metadata as still containing old-layout data.

Rule one exists to avoid having to re-do too much reshaping in the
face of a crash/restart. So it should really be time based rather
than size based. So change it to "every 10 seconds".

Rule two turns out to be too harsh when restriping an array
'in-place', as in that case the metadata must be updated for every
stripe.
For the in-place update, it can only possibly be safe from a crash if
some user-space program keeps a backup of e.g. every few hundred
stripes before allowing them to be reshaped. In that case, the
constant metadata update is pointless.
So only update the metadata if the new metadata will report that the
end of the 'old-layout' data is beyond where we are currently
writing 'new-layout' data.

Signed-off-by: NeilBrown <neilb@suse.de>
e183eaedd53807e33f02ee80573e2833890e1f21 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid5: prepare for allowing reshape to change layout

Add prev_algo to raid5_conf_t along the same lines as prev_chunk
and previous_raid_disks.

Signed-off-by: NeilBrown <neilb@suse.de>
784052ecc6ade6b6acf4f67e4ada8e5f2e6df446 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid5: prepare for allowing reshape to change chunksize.

Add "prev_chunk" to raid5_conf_t, similar to "previous_raid_disks", to
remember what the chunk size was before the reshape that is currently
underway.

This seems like duplication with "chunk_size" and "new_chunk" in
mddev_t, and to some extent it is, but there are differences.
The values in mddev_t are always defined and often the same.
The prev* values are only defined if a reshape is underway.

Also (and more significantly) the raid5_conf_t values will be changed
at the same time (inside an appropriate lock) that the reshape is
started by setting reshape_position. In contrast, the new_chunk value
is set when the sysfs file is written which could be well before the
reshape starts.

Signed-off-by: NeilBrown <neilb@suse.de>
86b42c713be3e5f6807aa14b4cbdb005d35c64d5 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid5: clearly differentiate 'before' and 'after' stripes during reshape.

During a raid5 reshape, we have some stripes in the cache that are
'before' the reshape (and are still to be processed) and some that are
'after'. They are currently differentiated by having different
->disks values, as the only reshape currently supported involves changing
the number of disks.

However we will soon support reshapes that do not change the number
of disks (changing chunk size or parity layout). So make the difference more
explicit with a 'generation' number.

Signed-off-by: NeilBrown <neilb@suse.de>
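
A sketch of the generation idea with invented names:

struct conf_sketch   { unsigned int generation; };
struct stripe_sketch { unsigned int generation; };

/* a stripe created before the current reshape keeps its old generation */
int stripe_predates_reshape(const struct stripe_sketch *sh,
			    const struct conf_sketch *conf)
{
	return sh->generation != conf->generation;
}
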
fef9c61fdfabf97a307c2cf3621a6949f0a4b995 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid5: change reshape-progress measurement to cope with reshaping backwards.

When reducing the number of devices in a raid4/5/6, the reshape
process has to start at the end of the array and work down to the
beginning. So we need to handle expand_progress and expand_lo
differently.

This patch renames "expand_progress" and "expand_lo" to avoid the
implication that anything is getting bigger (expand->reshape) and
every place they are used, we make sure that they are used the right
way depending on whether delta_disks is positive or negative.

Signed-off-by: NeilBrown <neilb@suse.de>
34e04e87fb8b2c62c9e8868f41c8179d0e15f51a 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid5: drop qd_idx from r6_state

We now have this value in stripe_head so we don't need to duplicate
it.

Signed-off-by: NeilBrown <neilb@suse.de>
f701d589aa34d7531183c9ac6f7713ba14212b02 31-Mar-2009 Dan Williams <dan.j.williams@intel.com> md/raid6: move raid6 data processing to raid6_pq.ko

Move the raid6 data processing routines into a standalone module
(raid6_pq) to prepare them to be called from async_tx wrappers and other
non-md drivers/modules. This precludes a circular dependency of raid456
needing the async modules for data processing while those modules in
turn depend on raid456 for the base level synchronous raid6 routines.

To support this move:
1/ The exportable definitions in raid6.h move to include/linux/raid/pq.h
2/ The raid6_call, recovery calls, and table symbols are exported
3/ Extra #ifdef __KERNEL__ statements to enable the userspace raid6test to
compile

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
91adb56473febeeb3ef657bb5147ddd355465700 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid5: refactor raid5 "run"

.. so that the code to create the private data structures is separate.
This will help with future code to change the level of an active
array.

Signed-off-by: NeilBrown <neilb@suse.de>
67cc2b8165857ba019920d1f00d64bcc4140075d 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid5: finish support for DDF/raid6

DDF requires RAID6 calculations over different devices in a different
order.
For md/raid6, we calculate over just the data devices, starting
immediately after the 'Q' block.
For ddf/raid6 we calculate over all devices, using zeros in place of
the P and Q blocks.

This requires unfortunately complex loops...

Signed-off-by: NeilBrown <neilb@suse.de>
99c0fb5f92828ae96909d390f2df137b89093b37 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid5: Add support for new layouts for raid5 and raid6.

DDF uses different layouts for P and Q blocks than current md/raid6
so add those that are missing.
Also add support for RAID6 layouts that are identical to various
raid5 layouts with the simple addition of one device to hold all of
the 'Q' blocks.
Finally add 'raid5' layouts to match raid4.
These last two will allow online level conversion.

Note that this does not provide correct support for DDF/raid6 yet
as the order in which data blocks are summed to produce the Q block
is significant and different between current md code and DDF
requirements.

Signed-off-by: NeilBrown <neilb@suse.de>
d0dabf7e577411c2bf6b616c751544dc241213d4 31-Mar-2009 NeilBrown <neilb@suse.de> md/raid6: remove expectation that Q device is immediately after P device.

Code currently assumes that the devices in a raid6 stripe are
0 1 ... N-1 P Q
in some rotated order. We will shortly add new layouts in which
this strict pattern is broken.
So remove this expectation. We still assume that the data disks
are roughly in-order. However P and Q can be inserted anywhere within
that order.

Signed-off-by: NeilBrown <neilb@suse.de>
bff61975b3d6c18ee31457cc5b4d73042f44915f 31-Mar-2009 NeilBrown <neilb@suse.de> md: move lots of #include lines out of .h files and into .c

This makes the includes more explicit, and is preparation for moving
md_k.h to drivers/md/md.h

Remove include/raid/md.h as its only remaining use was to #include
other files.

Signed-off-by: NeilBrown <neilb@suse.de>
ef740c372dfd80e706dbf955d4e4aedda6c0c148 31-Mar-2009 Christoph Hellwig <hch@lst.de> md: move headers out of include/linux/raid/

Move the headers with the local structures for the disciplines and
bitmap.h into drivers/md/ so that they are more easily grepable for
hacking and not far away. md.h is left where it is for now as there
are some uses from the outside.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: NeilBrown <neilb@suse.de>