author | Julian Wiedmann <jwi@linux.ibm.com> | 2020-07-03 15:20:03 +0200
committer | Martin K. Petersen <martin.petersen@oracle.com> | 2020-07-08 00:50:56 -0400
commit | c3bfffa5ec6901f758160edfb046bdc56d89e8d2 (patch)
tree | e2aaadb89761b255669bd3234b1d95f8d6e632b7 /drivers/s390
parent | 6bcb7c171a0ce0b0114aa088404e552b49d85576 (diff)
scsi: zfcp: Avoid benign overflow of the Request Queue's free-level
zfcp_qdio_send() and zfcp_qdio_int_req() run concurrently, adding and
completing SBALs on the Request Queue. There's a theoretical race where
zfcp_qdio_int_req() completes a number of SBALs and increments the queue's
free-level _before_ zfcp_qdio_send() has had a chance to decrement it.
This can cause ->req_q_free to momentarily hold a value larger than
QDIO_MAX_BUFFERS_PER_Q. Luckily zfcp_qdio_send() is always called under
->req_q_lock, and all readers of the free-level also take this lock. So we
can trust that zfcp_qdio_send() will clean up such a temporary overflow
before anyone can actually observe it.
But it's still confusing and annoying to worry about. So adjust the code to
avoid this race.
Link: https://lore.kernel.org/r/7f61f59a1f8db270312e64644f9173b8f1ac895f.1593780621.git.bblock@linux.ibm.com
Reviewed-by: Steffen Maier <maier@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Diffstat (limited to 'drivers/s390')
-rw-r--r-- | drivers/s390/scsi/zfcp_qdio.c | 5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c
index d3d110a04884..e78d65bd46b1 100644
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -260,17 +260,20 @@ int zfcp_qdio_send(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
 	zfcp_qdio_account(qdio);
 	spin_unlock(&qdio->stat_lock);
 
+	atomic_sub(sbal_number, &qdio->req_q_free);
+
 	retval = do_QDIO(qdio->adapter->ccw_device, QDIO_FLAG_SYNC_OUTPUT, 0,
 			 q_req->sbal_first, sbal_number);
 
 	if (unlikely(retval)) {
+		/* Failed to submit the IO, roll back our modifications. */
+		atomic_add(sbal_number, &qdio->req_q_free);
 		zfcp_qdio_zero_sbals(qdio->req_q, q_req->sbal_first,
 				     sbal_number);
 		return retval;
 	}
 
 	/* account for transferred buffers */
-	atomic_sub(sbal_number, &qdio->req_q_free);
 	qdio->req_q_idx += sbal_number;
 	qdio->req_q_idx %= QDIO_MAX_BUFFERS_PER_Q;
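
The change boils down to an ordering rule: reserve the Request Queue buffers (decrement the free-level) before handing them to the hardware, and roll the reservation back if submission fails, so the concurrent completion path can only ever return buffers that have already been subtracted. Below is a minimal user-space sketch of that pattern; the names (`struct queue`, `free_level`, `submit_to_hw()`, `send()`, `complete()`) are hypothetical stand-ins for the zfcp/qdio pieces (`req_q_free`, `do_QDIO()`, `zfcp_qdio_send()`, `zfcp_qdio_int_req()`), not the actual kernel API.

```c
#include <stdatomic.h>

#define MAX_BUFFERS_PER_Q 128	/* stand-in for QDIO_MAX_BUFFERS_PER_Q */

struct queue {
	atomic_int free_level;	/* plays the role of ->req_q_free */
};

/* Hypothetical stand-in for do_QDIO(); 0 means the HW accepted the work. */
static int submit_to_hw(struct queue *q, int first, int count)
{
	(void)q; (void)first; (void)count;
	return 0;
}

/* Submission path, analogous to zfcp_qdio_send(). */
static int send(struct queue *q, int first, int count)
{
	int retval;

	/* Reserve the buffers *before* the HW can possibly complete them. */
	atomic_fetch_sub(&q->free_level, count);

	retval = submit_to_hw(q, first, count);
	if (retval) {
		/* Submission failed: roll the reservation back. */
		atomic_fetch_add(&q->free_level, count);
		return retval;
	}
	return 0;
}

/* Completion path, analogous to zfcp_qdio_int_req(); runs concurrently. */
static void complete(struct queue *q, int count)
{
	/*
	 * Every buffer returned here was already subtracted in send(), so
	 * free_level can never momentarily exceed MAX_BUFFERS_PER_Q.
	 */
	atomic_fetch_add(&q->free_level, count);
}
```

With this ordering the invariant free_level <= MAX_BUFFERS_PER_Q holds at every instant, instead of holding only once zfcp_qdio_send() has caught up under ->req_q_lock as described in the commit message above.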