
Commit c5af865

Jay Freyensee authored and Sagi Grimberg committed
nvme-rdma: fix sqsize/hsqsize per spec
Per the NVMe-over-Fabrics 1.0 spec, sqsize is represented as a 0-based value. Also per spec, the RDMA binding values shall be set to sqsize, which makes hsqsize a 0-based value as well. Thus, the sqsize during NVMf connect() is now:

[root@fedora23-fabrics-host1 for-48]# dmesg
[  318.720645] nvme_fabrics: nvmf_connect_admin_queue(): sqsize for admin queue: 31
[  318.720884] nvme nvme0: creating 16 I/O queues.
[  318.810114] nvme_fabrics: nvmf_connect_io_queue(): sqsize for i/o queue: 127

Finally, the current interpretation implies hrqsize is 1-based, so set it appropriately.

Reported-by: Daniel Verkamp <[email protected]>
Signed-off-by: Jay Freyensee <[email protected]>
Signed-off-by: Sagi Grimberg <[email protected]>
1 parent f994d9d commit c5af865

File tree: 1 file changed, +10 −4 lines


drivers/nvme/host/rdma.c

Lines changed: 10 additions & 4 deletions
@@ -645,7 +645,8 @@ static int nvme_rdma_init_io_queues(struct nvme_rdma_ctrl *ctrl)
 	int i, ret;

 	for (i = 1; i < ctrl->queue_count; i++) {
-		ret = nvme_rdma_init_queue(ctrl, i, ctrl->ctrl.sqsize);
+		ret = nvme_rdma_init_queue(ctrl, i,
+				ctrl->ctrl.opts->queue_size);
 		if (ret) {
 			dev_info(ctrl->ctrl.device,
 				"failed to initialize i/o queue: %d\n", ret);
@@ -1286,8 +1287,13 @@ static int nvme_rdma_route_resolved(struct nvme_rdma_queue *queue)
 		priv.hrqsize = cpu_to_le16(NVMF_AQ_DEPTH);
 		priv.hsqsize = cpu_to_le16(NVMF_AQ_DEPTH - 1);
 	} else {
+		/*
+		 * current interpretation of the fabrics spec
+		 * is at minimum you make hrqsize sqsize+1, or a
+		 * 1's based representation of sqsize.
+		 */
 		priv.hrqsize = cpu_to_le16(queue->queue_size);
-		priv.hsqsize = cpu_to_le16(queue->queue_size);
+		priv.hsqsize = cpu_to_le16(queue->ctrl->ctrl.sqsize);
 	}

 	ret = rdma_connect(queue->cm_id, &param);
@@ -1825,7 +1831,7 @@ static int nvme_rdma_create_io_queues(struct nvme_rdma_ctrl *ctrl)

 	memset(&ctrl->tag_set, 0, sizeof(ctrl->tag_set));
 	ctrl->tag_set.ops = &nvme_rdma_mq_ops;
-	ctrl->tag_set.queue_depth = ctrl->ctrl.sqsize;
+	ctrl->tag_set.queue_depth = ctrl->ctrl.opts->queue_size;
 	ctrl->tag_set.reserved_tags = 1; /* fabric connect */
 	ctrl->tag_set.numa_node = NUMA_NO_NODE;
 	ctrl->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
@@ -1923,7 +1929,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 	spin_lock_init(&ctrl->lock);

 	ctrl->queue_count = opts->nr_io_queues + 1; /* +1 for admin queue */
-	ctrl->ctrl.sqsize = opts->queue_size;
+	ctrl->ctrl.sqsize = opts->queue_size - 1;
 	ctrl->ctrl.kato = opts->kato;

 	ret = -ENOMEM;
