author | Sagi Grimberg <sagi@grimberg.me> | 2021-03-15 14:04:27 -0700
committer | Christoph Hellwig <hch@lst.de> | 2021-03-18 05:39:01 +0100
commit | c4c6df5fc84659690d4391d1fba155cd94185295 (patch)
tree | 308001a7f823c5e4822b5ac3292c23a3abddfa1b /drivers
parent | 72f572428b83d0bc7028e7c4326d1a5f45205e44 (diff)
nvme-rdma: fix possible hang when failing to set io queues
We only set up I/O queues for NVMe controllers, and it makes absolutely no
sense to allow a controller (re)connect without any I/O queues. If we
happen to fail setting the queue count for any reason, we should not allow
this to be a successful reconnect, as I/O has no chance of going through.
Instead, just fail and schedule another reconnect.
Reported-by: Chao Leng <lengchao@huawei.com>
Fixes: 711023071960 ("nvme-rdma: add a NVMe over Fabrics RDMA host driver")
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chao Leng <lengchao@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'drivers')
-rw-r--r-- | drivers/nvme/host/rdma.c | 7
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 7855103e05d2..be905d4fdb47 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -736,8 +736,11 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
 		return ret;
 
 	ctrl->ctrl.queue_count = nr_io_queues + 1;
-	if (ctrl->ctrl.queue_count < 2)
-		return 0;
+	if (ctrl->ctrl.queue_count < 2) {
+		dev_err(ctrl->ctrl.device,
+			"unable to set any I/O queues\n");
+		return -ENOMEM;
+	}
 
 	dev_info(ctrl->ctrl.device,
 		"creating %d I/O queues.\n", nr_io_queues);
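For context on why returning an error here fixes the hang: the error propagates up from nvme_rdma_alloc_io_queues() through the controller setup path to the reconnect work item, which then requeues another reconnect attempt instead of declaring the controller live with zero I/O queues. Below is a simplified sketch of that caller, paraphrased from the driver around this kernel release; it is not part of this patch and details may differ slightly from the exact source.

```c
/*
 * Sketch of the reconnect work handler in drivers/nvme/host/rdma.c
 * (paraphrased, not part of this patch). With the fix,
 * nvme_rdma_setup_ctrl() now fails when no I/O queues could be set,
 * so we take the requeue path instead of "successfully" reconnecting
 * a controller that can do no I/O.
 */
static void nvme_rdma_reconnect_ctrl_work(struct work_struct *work)
{
	struct nvme_rdma_ctrl *ctrl = container_of(to_delayed_work(work),
			struct nvme_rdma_ctrl, reconnect_work);

	++ctrl->ctrl.nr_reconnects;

	/* now returns -ENOMEM if nvme_rdma_alloc_io_queues() got none */
	if (nvme_rdma_setup_ctrl(ctrl, false))
		goto requeue;

	dev_info(ctrl->ctrl.device, "Successfully reconnected (%d attempts)\n",
			ctrl->ctrl.nr_reconnects);
	ctrl->ctrl.nr_reconnects = 0;
	return;

requeue:
	dev_info(ctrl->ctrl.device, "Failed reconnect attempt %d\n",
			ctrl->ctrl.nr_reconnects);
	/* schedules another reconnect, or removes the controller */
	nvme_rdma_reconnect_or_remove(ctrl);
}
```

Before the fix, a queue count of zero took the early `return 0` path, the reconnect was reported successful, and any subsequently submitted I/O had no queue to land on, hence the possible hang the subject line refers to.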