3f5e0a34daed197aa55d0c6b466bb4cd03babb4f
23-Jan-2014
Kent Overstreet <kmo@daterainc.com>

bcache: Kill dead cgroup code

This hasn't been used or even enabled in ages.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

da415a096fc06e49d1a15f7a06bcfe6ad44c5d38
10-Jan-2014
Nicholas Swenson <nks@daterainc.com>

bcache: Fix moving_gc deadlocking with a foreground write

Deadlock happened because a foreground write slept, waiting for a bucket to be allocated. Normally the gc would mark buckets available for invalidation. But the moving_gc was stuck waiting for outstanding writes to complete. These writes used bcache_wq, the same workqueue foreground writes used. This fix gives moving_gc its own workqueue, so it can still finish moving even if foreground writes are stuck waiting for allocation. It also makes the workqueue a parameter to the data_insert path, so moving_gc can use its own workqueue for writes.

Signed-off-by: Nicholas Swenson <nks@daterainc.com>
Signed-off-by: Kent Overstreet <kmo@daterainc.com>

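The fix follows a standard kernel pattern: give the flusher its own workqueue so its forward progress never depends on work items queued by the tasks it is meant to unblock. Below is a minimal sketch of that pattern; only bcache_wq and the data_insert path are named by the commit, and everything else is a hypothetical stand-in rather than the actual bcache code.

```c
#include <linux/workqueue.h>

/* Hypothetical sketch: moving_gc write completions get their own queue
 * instead of sharing bcache_wq with foreground writes. */
static struct workqueue_struct *moving_gc_wq;

static int moving_gc_init(void)
{
	/* WQ_MEM_RECLAIM gives the queue a rescuer thread, so it can
	 * make forward progress even under memory pressure. */
	moving_gc_wq = alloc_workqueue("moving_gc", WQ_MEM_RECLAIM, 0);
	return moving_gc_wq ? 0 : -ENOMEM;
}

/* With the workqueue passed down as a parameter, one insert path serves
 * both callers: foreground writes pass bcache_wq, moving_gc passes
 * moving_gc_wq, and neither can stall the other. */
static void data_insert_complete(struct workqueue_struct *wq,
				 struct work_struct *w)
{
	queue_work(wq, w);	/* completion runs on the caller's queue */
}
```
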
a5ae4300c15c778722c139953c825cd24d6ff517
11-Sep-2013
Kent Overstreet <kmo@daterainc.com>

bcache: Zero less memory

Another minor performance optimization.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

2599b53b7b0ea6103d1661dca74d35480cb8fa1f
25-Jul-2013
Kent Overstreet <kmo@daterainc.com>

bcache: Move sector allocator to alloc.c

Just reorganizing things a bit.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

220bb38c21b83e2f7b842f33220bf727093eca89
11-Sep-2013
Kent Overstreet <kmo@daterainc.com>

bcache: Break up struct search

With all the recent refactoring around struct btree_op, struct search has gotten rather large. But we can now easily break it up in a different way: we break out struct btree_insert_op, which is for inserting data into the cache and is now what the copying gc code uses; struct search is now specific to request.c.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

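Schematically, the split looks something like this; the field names are hypothetical stand-ins, not the actual bcache definitions.

```c
struct bio;	/* block-layer bio, defined elsewhere in the kernel */
struct keylist;	/* bcache keylist, defined elsewhere */

/* The insert side stands alone, so the copying gc can insert data into
 * the cache without dragging request-handling state along. */
struct btree_insert_op {
	struct bio	*bio;		/* data being inserted */
	struct keylist	*insert_keys;	/* keys to add to the btree */
	int		error;
};

/* struct search keeps only what request.c needs, embedding the insert
 * op for the cache-insert half of a request. */
struct search {
	struct btree_insert_op	iop;
	/* ... request.c-specific state ... */
};
```
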
6054c6d4da1940c7bf8870c6393773aa794f53d8
25-Jul-2013
Kent Overstreet <kmo@daterainc.com>

bcache: Don't use op->insert_collision

When we convert bch_btree_insert() to bch_btree_map_leaf_nodes(), we won't be passing struct btree_op to bch_btree_insert() anymore - so we need a different way of returning whether there was a collision (really, a replace collision).

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

1b207d80d5b986fb305bc899357435d319319513
11-Sep-2013
Kent Overstreet <kmo@daterainc.com>

bcache: Kill op->replace

This is prep work for converting bch_btree_insert to bch_btree_map_leaf_nodes() - we have to convert the state it currently takes from struct btree_op into actual function arguments. Bunch of churn, but should be straightforward.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

b54d6934da7857f87b092df9b77dc1f42818ba94
25-Jul-2013
Kent Overstreet <kmo@daterainc.com>

bcache: Kill op->cl

This isn't used for waiting asynchronously anymore - so this is a fairly trivial refactoring.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

c18536a72ddd7fe30d63e6c1500b5c930ac14594
25-Jul-2013
Kent Overstreet <kmo@daterainc.com>

bcache: Prune struct btree_op

Eventual goal is for struct btree_op to contain only what is necessary for traversing the btree.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

2c1953e201a05ddfb1ea53f23d81a492c6513028
25-Jul-2013
Kent Overstreet <kmo@daterainc.com>

bcache: Convert bch_btree_read_async() to bch_btree_map_keys()

This is a fairly straightforward conversion, mostly reshuffling - op->lookup_done goes away, replaced by MAP_DONE/MAP_CONTINUE. And the code for handling cache hits and misses wasn't really btree code, so it gets moved to request.c.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

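The MAP_DONE/MAP_CONTINUE convention replaces the old op->lookup_done flag: a per-key callback tells the btree walk whether to keep going. A rough sketch of that shape follows; only MAP_DONE, MAP_CONTINUE, and bch_btree_map_keys() are named by the commit, and the types, values, and helpers here are stand-ins.

```c
/* Stand-ins so the sketch is self-contained; the real definitions live
 * in drivers/md/bcache. */
struct btree_op; struct btree; struct bkey;
#define MAP_DONE	0
#define MAP_CONTINUE	1

static int key_ends_request(struct bkey *k) { return 1; }	/* stub */
static void handle_hit_or_miss(struct btree *b, struct bkey *k) { }	/* stub */

/* Invoked once per key by the btree walk; the return value replaces the
 * old op->lookup_done flag. */
static int cache_lookup_fn(struct btree_op *op, struct btree *b,
			   struct bkey *k)
{
	handle_hit_or_miss(b, k);	/* hit/miss handling now lives in request.c */

	if (key_ends_request(k))
		return MAP_DONE;	/* stop walking: the lookup is finished */

	return MAP_CONTINUE;		/* visit the next key */
}
```
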
0b93207abb40d3c42bb83eba1e1e7edc1da77810 |
|
25-Jul-2013 |
Kent Overstreet <kmo@daterainc.com> |
bcache: Move keylist out of btree_op Slowly working on pruning struct btree_op - the aim is for it to only contain things that are actually necessary for traversing the btree. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
|
a34a8bfd4e6358c646928320d37b0425c0762f8a
25-Oct-2013
Kent Overstreet <kmo@daterainc.com>

bcache: Refactor journalling flow control

Making things that don't need to be asynchronous less so - bch_journal() only has to block when the journal or journal entry is full, which is emphatically not a fast path. So make it a normal function that just returns when it finishes, to make the code and control flow easier to follow.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>

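The resulting control flow can be sketched as follows; apart from bch_journal() itself, every name here is a hypothetical stand-in for illustration.

```c
#include <linux/wait.h>

struct keylist;	/* bcache keylist, defined elsewhere */

static DECLARE_WAIT_QUEUE_HEAD(journal_wait);

static int journal_has_space(void) { return 1; }	/* stub predicate */
static void journal_append(struct keylist *keys) { }	/* stub */

/* bch_journal() as an ordinary blocking function: the only sleep is the
 * slow path where the journal or journal entry is full; the common case
 * appends and simply returns to the caller. */
static void bch_journal(struct keylist *keys)
{
	wait_event(journal_wait, journal_has_space());	/* rare slow path */
	journal_append(keys);				/* fast path */
}
```
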
c37511b863f36c1cc6e18440717fd4cc0e881b8a
27-Apr-2013
Kent Overstreet <koverstreet@google.com>

bcache: Fix/revamp tracepoints

The tracepoints were reworked to be more sensible, and a null pointer deref in one of the tracepoints was fixed.

Converted some of the pr_debug()s to tracepoints - this is partly a performance optimization; it used to be that with DEBUG or CONFIG_DYNAMIC_DEBUG disabled, pr_debug() was an empty macro; but at some point it was changed to an empty inline function. Some of the pr_debug() statements had rather expensive function calls as part of the arguments, so this code was getting run unnecessarily even on non-debug kernels - in some fast paths, too.

Signed-off-by: Kent Overstreet <koverstreet@google.com>

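The macro-versus-inline distinction matters because an empty macro discards its arguments at preprocessing time, while an empty inline function still evaluates them at the call site. A self-contained userspace illustration of the difference (these are not the kernel's actual pr_debug() definitions):

```c
#include <stdio.h>

/* Two ways to "compile out" a debug printf. Neither prints anything,
 * but they differ in whether the arguments are evaluated. */

/* 1) Empty macro: the argument expressions vanish at preprocessing. */
#define debug_macro(fmt, ...) do { } while (0)

/* 2) Empty inline function: arguments are still evaluated at the call
 * site before the (empty) body runs. */
static inline void debug_inline(const char *fmt, ...) { }

static int expensive(void)
{
	puts("expensive() ran");	/* stands in for a costly call */
	return 0;
}

int main(void)
{
	debug_macro("%d", expensive());		/* expensive() never runs */
	debug_inline("%d", expensive());	/* expensive() runs anyway */
	return 0;
}
```
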
cafe563591446cf80bfbc2fe3bc72a2e36cf1060
24-Mar-2013
Kent Overstreet <koverstreet@google.com>

bcache: A block layer cache

Does writethrough and writeback caching, handles unclean shutdown, and has a bunch of other nifty features motivated by real world usage. See the wiki at http://bcache.evilpiepirate.org for more.

Signed-off-by: Kent Overstreet <koverstreet@google.com>