Merge tag 'for-6.4/block-2023-04-21' of git://git.kernel.dk/linux
Pull block updates from Jens Axboe:

 - drbd patches, bringing us closer to unifying the out-of-tree version
   and the in tree one (Andreas, Christoph)

 - support for auto-quiesce for the s390 dasd driver (Stefan)

 - MD pull request via Song:
      - md/bitmap: Optimal last page size (Jon Derrick)
      - Various raid10 fixes (Yu Kuai, Li Nan)
      - md: add error_handlers for raid0 and linear (Mariusz Tkaczyk)

 - NVMe pull request via Christoph:
      - Drop redundant pci_enable_pcie_error_reporting (Bjorn Helgaas)
      - Validate nvmet module parameters (Chaitanya Kulkarni)
      - Fence TCP socket on receive error (Chris Leech)
      - Fix async event trace event (Keith Busch)
      - Minor cleanups (Chaitanya Kulkarni, zhenwei pi)
      - Fix and cleanup nvmet Identify handling (Damien Le Moal,
        Christoph Hellwig)
      - Fix double blk_mq_complete_request race in the timeout handler
        (Lei Yin)
      - Fix irq locking in nvme-fcloop (Ming Lei)
      - Remove queue mapping helper for rdma devices (Sagi Grimberg)

 - use structured request attribute checks for nbd (Jakub)

 - fix blk-crypto race conditions between keyslot management (Eric)

 - add sed-opal support for reading read locking range attributes
   (Ondrej)

 - make fault injection configurable for null_blk (Akinobu)

 - clean up the request insertion API (Christoph)

 - clean up the queue running API (Christoph)

 - blkg config helper cleanups (Tejun)

 - lazy init support for blk-iolatency (Tejun)

 - various fixes and tweaks to ublk (Ming)

 - remove hybrid polling. It hasn't really been useful since we got
   async polled IO support, and these days we don't support sync polled
   IO at all (Keith)

 - misc fixes, cleanups, improvements (Zhong, Ondrej, Colin, Chengming,
   Chaitanya, me)

* tag 'for-6.4/block-2023-04-21' of git://git.kernel.dk/linux: (118 commits)
  nbd: fix incomplete validation of ioctl arg
  ublk: don't return 0 in case of any failure
  sed-opal: geometry feature reporting command
  null_blk: Always check queue mode setting from configfs
  block: ublk: switch to ioctl command encoding
  blk-mq: fix the blk_mq_add_to_requeue_list call in blk_kick_flush
  block, bfq: Fix division by zero error on zero wsum
  fault-inject: fix build error when FAULT_INJECTION_CONFIGFS=y and CONFIGFS_FS=m
  block: store bdev->bd_disk->fops->submit_bio state in bdev
  block: re-arrange the struct block_device fields for better layout
  md/raid5: remove unused working_disks variable
  md/raid10: don't call bio_start_io_acct twice for bio which experienced read error
  md/raid10: fix memleak of md thread
  md/raid10: fix memleak for 'conf->bio_split'
  md/raid10: fix leak of 'r10bio->remaining' for recovery
  md/raid10: don't BUG_ON() in raise_barrier()
  md: fix soft lockup in status_resync
  md: add error_handlers for raid0 and linear
  md: Use optimal I/O size for last bitmap page
  md: Fix types in sb writer
  ...
torvalds committed Apr 26, 2023
2 parents 5b9a7bb + 55793ea commit 9dd6956
Showing 97 changed files with 2,255 additions and 1,768 deletions.
15 changes: 4 additions & 11 deletions Documentation/ABI/stable/sysfs-block
@@ -336,18 +336,11 @@ What: /sys/block/<disk>/queue/io_poll_delay
Date: November 2016
Contact: [email protected]
Description:
-[RW] If polling is enabled, this controls what kind of polling
-will be performed. It defaults to -1, which is classic polling.
+[RW] This was used to control what kind of polling will be
+performed. It is now fixed to -1, which is classic polling.
In this mode, the CPU will repeatedly ask for completions
-without giving up any time. If set to 0, a hybrid polling mode
-is used, where the kernel will attempt to make an educated guess
-at when the IO will complete. Based on this guess, the kernel
-will put the process issuing IO to sleep for an amount of time,
-before entering a classic poll loop. This mode might be a little
-slower than pure classic polling, but it will be more efficient.
-If set to a value larger than 0, the kernel will put the process
-issuing IO to sleep for this amount of microseconds before
-entering classic polling.
+without giving up any time.
+<deprecated>


What: /sys/block/<disk>/queue/io_timeout
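
With hybrid polling removed, the only way to drive the polled completions described in the io_poll_delay entry above is the async route: an io_uring ring created with IORING_SETUP_IOPOLL. A rough userspace sketch, not part of this commit, assuming liburing is installed and the target file is opened O_DIRECT on a device that supports polled IO:

/* Illustrative userspace program (not from this commit): classic polled
 * reads through io_uring. Requires liburing and an O_DIRECT-capable file. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0 || posix_memalign(&buf, 4096, 4096))
		return 1;

	/* IORING_SETUP_IOPOLL selects completion polling; with this change
	 * that always means classic polling, never the hybrid sleep variant. */
	if (io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, 4096, 0);
	io_uring_submit(&ring);

	if (io_uring_wait_cqe(&ring, &cqe) == 0) {	/* kernel busy-polls the device */
		printf("read returned %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}
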
3 changes: 1 addition & 2 deletions Documentation/block/inline-encryption.rst
@@ -270,8 +270,7 @@ Request queue based layered devices like dm-rq that wish to support inline
encryption need to create their own blk_crypto_profile for their request_queue,
and expose whatever functionality they choose. When a layered device wants to
pass a clone of that request to another request_queue, blk-crypto will
-initialize and prepare the clone as necessary; see
-``blk_crypto_insert_cloned_request()``.
+initialize and prepare the clone as necessary.

Interaction between inline encryption and blk integrity
=======================================================
8 changes: 8 additions & 0 deletions Documentation/fault-injection/fault-injection.rst
@@ -52,6 +52,14 @@ Available fault injection capabilities
status code is NVME_SC_INVALID_OPCODE with no retry. The status code and
retry flag can be set via the debugfs.

- Null test block driver fault injection

inject IO timeouts by setting config items under
/sys/kernel/config/nullb/<disk>/timeout_inject,
inject requeue requests by setting config items under
/sys/kernel/config/nullb/<disk>/requeue_inject, and
inject init_hctx() errors by setting config items under
/sys/kernel/config/nullb/<disk>/init_hctx_fault_inject.
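
A rough userspace sketch of arming one of these knobs follows; it is not part of this commit, the nullb0 instance name is a placeholder, and the probability/times attribute names are an assumption based on the generic fault-attr interface documented later in this file:

/* Hypothetical helper (not from this commit): arm null_blk timeout injection.
 * Assumes a configured /sys/kernel/config/nullb/nullb0 instance and that the
 * timeout_inject group exposes the generic fault-attr knobs such as
 * "probability" and "times". */
#include <stdio.h>

static int write_attr(const char *dir, const char *attr, const char *val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", dir, attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	const char *dir = "/sys/kernel/config/nullb/nullb0/timeout_inject";

	write_attr(dir, "probability", "100");	/* time out every request ... */
	write_attr(dir, "times", "-1");		/* ... indefinitely */
	return 0;
}
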

Configure fault-injection capabilities behavior
-----------------------------------------------
2 changes: 2 additions & 0 deletions arch/s390/include/uapi/asm/dasd.h
@@ -78,6 +78,7 @@ typedef struct dasd_information2_t {
* 0x040: give access to raw eckd data
* 0x080: enable discard support
* 0x100: enable autodisable for IFCC errors (default)
* 0x200: enable requeue of all requests on autoquiesce
*/
#define DASD_FEATURE_READONLY 0x001
#define DASD_FEATURE_USEDIAG 0x002
@@ -88,6 +89,7 @@ typedef struct dasd_information2_t {
#define DASD_FEATURE_USERAW 0x040
#define DASD_FEATURE_DISCARD 0x080
#define DASD_FEATURE_PATH_AUTODISABLE 0x100
#define DASD_FEATURE_REQUEUEQUIESCE 0x200
#define DASD_FEATURE_DEFAULT DASD_FEATURE_PATH_AUTODISABLE

#define DASD_PARTN_BITS 2
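
For context, a userspace check of the new DASD_FEATURE_REQUEUEQUIESCE bit might look like the sketch below; it is not part of this commit, BIODASDINFO2 is assumed to be the usual way to fetch dasd_information2_t on s390, and /dev/dasda is a placeholder device name:

/* Illustrative only (not from this commit), s390 userspace: report whether
 * requeue-on-autoquiesce is enabled for a DASD. */
#include <asm/dasd.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	dasd_information2_t info;
	int fd = open("/dev/dasda", O_RDONLY);

	if (fd < 0 || ioctl(fd, BIODASDINFO2, &info) < 0) {
		perror("dasd info");
		return 1;
	}
	printf("requeue on autoquiesce: %s\n",
	       (info.features & DASD_FEATURE_REQUEUEQUIESCE) ? "on" : "off");
	close(fd);
	return 0;
}
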
5 changes: 0 additions & 5 deletions block/Kconfig
@@ -215,11 +215,6 @@ config BLK_MQ_VIRTIO
depends on VIRTIO
default y

config BLK_MQ_RDMA
bool
depends on INFINIBAND
default y

config BLK_PM
def_bool PM

1 change: 0 additions & 1 deletion block/Makefile
@@ -30,7 +30,6 @@ obj-$(CONFIG_BLK_DEV_INTEGRITY) += bio-integrity.o blk-integrity.o
obj-$(CONFIG_BLK_DEV_INTEGRITY_T10) += t10-pi.o
obj-$(CONFIG_BLK_MQ_PCI) += blk-mq-pci.o
obj-$(CONFIG_BLK_MQ_VIRTIO) += blk-mq-virtio.o
obj-$(CONFIG_BLK_MQ_RDMA) += blk-mq-rdma.o
obj-$(CONFIG_BLK_DEV_ZONED) += blk-zoned.o
obj-$(CONFIG_BLK_WBT) += blk-wbt.o
obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o
1 change: 1 addition & 0 deletions block/bdev.c
@@ -419,6 +419,7 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
bdev->bd_inode = inode;
bdev->bd_queue = disk->queue;
bdev->bd_stats = alloc_percpu(struct disk_stats);
bdev->bd_has_submit_bio = false;
if (!bdev->bd_stats) {
iput(inode);
return NULL;
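
The bd_has_submit_bio flag initialized to false above is a cache of whether the gendisk has a ->submit_bio method, so the submission fast path can test one bit in struct block_device instead of chasing bd_disk->fops on every bio. A simplified sketch of that use, not the literal kernel hunk:

/* Sketch only (simplified; not the exact kernel code): how the cached flag
 * is meant to be consumed in the submission path. */
static void submit_bio_sketch(struct bio *bio)
{
	struct block_device *bdev = bio->bi_bdev;

	if (bdev->bd_has_submit_bio)
		bdev->bd_disk->fops->submit_bio(bio);	/* bio-based driver */
	else
		blk_mq_submit_bio(bio);			/* request-based (blk-mq) driver */
}
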
20 changes: 7 additions & 13 deletions block/bfq-cgroup.c
@@ -497,15 +497,9 @@ static struct blkcg_policy_data *bfq_cpd_alloc(gfp_t gfp)
bgd = kzalloc(sizeof(*bgd), gfp);
if (!bgd)
return NULL;
-return &bgd->pd;
-}
-
-static void bfq_cpd_init(struct blkcg_policy_data *cpd)
-{
-struct bfq_group_data *d = cpd_to_bfqgd(cpd);
-
-d->weight = cgroup_subsys_on_dfl(io_cgrp_subsys) ?
-CGROUP_WEIGHT_DFL : BFQ_WEIGHT_LEGACY_DFL;
+bgd->weight = CGROUP_WEIGHT_DFL;
+return &bgd->pd;
}

static void bfq_cpd_free(struct blkcg_policy_data *cpd)
@@ -1111,9 +1105,11 @@ static ssize_t bfq_io_set_device_weight(struct kernfs_open_file *of,
struct bfq_group *bfqg;
u64 v;

-ret = blkg_conf_prep(blkcg, &blkcg_policy_bfq, buf, &ctx);
+blkg_conf_init(&ctx, buf);
+
+ret = blkg_conf_prep(blkcg, &blkcg_policy_bfq, &ctx);
if (ret)
-return ret;
+goto out;

if (sscanf(ctx.body, "%llu", &v) == 1) {
/* require "default" on dfl */
@@ -1135,7 +1131,7 @@ static ssize_t bfq_io_set_device_weight(struct kernfs_open_file *of,
ret = 0;
}
out:
-blkg_conf_finish(&ctx);
+blkg_conf_exit(&ctx);
return ret ?: nbytes;
}

@@ -1301,8 +1297,6 @@ struct blkcg_policy blkcg_policy_bfq = {
.legacy_cftypes = bfq_blkcg_legacy_files,

.cpd_alloc_fn = bfq_cpd_alloc,
.cpd_init_fn = bfq_cpd_init,
.cpd_bind_fn = bfq_cpd_init,
.cpd_free_fn = bfq_cpd_free,

.pd_alloc_fn = bfq_pd_alloc,
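
The conversion above is the general template for blkcg policies that parse per-device settings; a sketch of the new blkg_conf_init()/blkg_conf_prep()/blkg_conf_exit() sequence, with the policy name and value handling as placeholders:

/* Generic sketch of the new helper sequence; the "my_policy" names and the
 * body parsing are placeholders, not code from this commit. */
static ssize_t my_policy_set_weight(struct kernfs_open_file *of, char *buf,
				    size_t nbytes, loff_t off)
{
	struct blkcg *blkcg = css_to_blkcg(of_css(of));
	struct blkg_conf_ctx ctx;
	u64 v;
	int ret;

	blkg_conf_init(&ctx, buf);		/* stash the raw input string */

	ret = blkg_conf_prep(blkcg, &blkcg_policy_my, &ctx);	/* resolve MAJ:MIN to a blkg */
	if (ret)
		goto out;

	ret = -EINVAL;
	if (sscanf(ctx.body, "%llu", &v) == 1) {
		/* apply v to ctx.blkg's per-policy data here */
		ret = 0;
	}
out:
	blkg_conf_exit(&ctx);			/* always pairs with blkg_conf_init() */
	return ret ?: nbytes;
}
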
19 changes: 10 additions & 9 deletions block/bfq-iosched.c
@@ -129,7 +129,6 @@
#include "elevator.h"
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
#include "bfq-iosched.h"
#include "blk-wbt.h"
@@ -649,6 +648,8 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
sched_data->service_tree[i].wsum;
}
}
if (!wsum)
continue;
limit = DIV_ROUND_CLOSEST(limit * entity->weight, wsum);
if (entity->allocated >= limit) {
bfq_log_bfqq(bfqq->bfqd, bfqq,
@@ -6232,7 +6233,7 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
static struct bfq_queue *bfq_init_rq(struct request *rq);

static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
-bool at_head)
+blk_insert_t flags)
{
struct request_queue *q = hctx->queue;
struct bfq_data *bfqd = q->elevator->elevator_data;
@@ -6255,11 +6256,10 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,

trace_block_rq_insert(rq);

-if (!bfqq || at_head) {
-if (at_head)
-list_add(&rq->queuelist, &bfqd->dispatch);
-else
-list_add_tail(&rq->queuelist, &bfqd->dispatch);
+if (flags & BLK_MQ_INSERT_AT_HEAD) {
+list_add(&rq->queuelist, &bfqd->dispatch);
+} else if (!bfqq) {
+list_add_tail(&rq->queuelist, &bfqd->dispatch);
} else {
idle_timer_disabled = __bfq_insert_request(bfqd, rq);
/*
@@ -6289,14 +6289,15 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}

static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
-struct list_head *list, bool at_head)
+struct list_head *list,
+blk_insert_t flags)
{
while (!list_empty(list)) {
struct request *rq;

rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
-bfq_insert_request(hctx, rq, at_head);
+bfq_insert_request(hctx, rq, flags);
}
}

1 change: 0 additions & 1 deletion block/bfq-iosched.h
@@ -20,7 +20,6 @@

#define BFQ_DEFAULT_QUEUE_IOPRIO 4

#define BFQ_WEIGHT_LEGACY_DFL 100
#define BFQ_DEFAULT_GRP_IOPRIO 0
#define BFQ_DEFAULT_GRP_CLASS IOPRIO_CLASS_BE

