
Commit 9d11b51

Authored by Chuck Lever (chucklever); committed by J. Bruce Fields
svcrdma: Fix send_reply() scatter/gather set-up
The Linux NFS server returns garbage in the data payload of inline NFS/RDMA READ replies. These are READs of under 1000 bytes or so where the client has not provided either a reply chunk or a write list.

The NFS server delivers the data payload for an NFS READ reply to the transport in an xdr_buf page list. If the NFS client did not provide a reply chunk or a write list, send_reply() is supposed to set up a separate sge for the page containing the READ data, and another sge for XDR padding if needed, then post all of the sges via a single SEND Work Request.

The problem is that send_reply() does not advance through the xdr_buf when setting up scatter/gather entries for the SEND WR. It always calls dma_map_xdr() with xdr_off set to zero. When there is more than one sge, dma_map_xdr() sets up the SEND sges so they all point to the xdr_buf's head.

The current Linux NFS/RDMA client always provides a reply chunk or a write list when performing an NFS READ over RDMA. Therefore, it does not exercise this particular case. The Linux server has never had to use more than one extra sge for building RPC/RDMA replies with a Linux client.

However, an NFS/RDMA client _is_ allowed to send small NFS READs without setting up a write list or reply chunk. The NFS READ reply fits entirely within the inline reply buffer in this case. This is perhaps a more efficient way of performing NFS READs that the Linux NFS/RDMA client may some day adopt.

Fixes: b432e6b ('svcrdma: Change DMA mapping logic to . . .')
BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=285
Signed-off-by: Chuck Lever <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
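The failure mode described above can be illustrated with a small user-space sketch. This is not the kernel's svcrdma code: the `struct sge` here records only an offset and length, and `build_sges()` stands in for the mapping loop in send_reply(). The `advance` flag toggles the one-line fix: when it is off, every entry is "mapped" from offset zero, so all sges describe the head of the buffer, which is the bug.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative scatter/gather entry, not the kernel's ib_sge. */
struct sge {
	size_t offset;	/* where in the xdr_buf this entry starts */
	size_t len;
};

/* Build sges covering 'total' bytes in segments of at most 'seg'
 * bytes.  'advance' toggles the fix: when 0, every entry is set up
 * from offset 0, mirroring the bug in send_reply(). */
static int build_sges(size_t total, size_t seg, struct sge *out, int advance)
{
	size_t xdr_off = 0;
	int sge_no = 0;

	while (total) {
		size_t sge_bytes = total < seg ? total : seg;

		out[sge_no].offset = xdr_off;	/* like dma_map_xdr(..., xdr_off) */
		out[sge_no].len = sge_bytes;
		if (advance)
			xdr_off += sge_bytes;	/* the step send_reply() was missing */
		total -= sge_bytes;
		sge_no++;
	}
	return sge_no;
}
```

With `advance` set, three entries for a 1000-byte payload in 400-byte segments start at offsets 0, 400, and 800; without it, all three start at offset 0, which is why the reply payload came back as garbage.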
1 parent ff79c74 commit 9d11b51

File tree

1 file changed: 9 additions, 1 deletion

net/sunrpc/xprtrdma/svc_rdma_sendto.c

Lines changed: 9 additions & 1 deletion
@@ -384,6 +384,7 @@ static int send_reply(struct svcxprt_rdma *rdma,
 			  int byte_count)
 {
 	struct ib_send_wr send_wr;
+	u32 xdr_off;
 	int sge_no;
 	int sge_bytes;
 	int page_no;
@@ -418,8 +419,8 @@ static int send_reply(struct svcxprt_rdma *rdma,
 	ctxt->direction = DMA_TO_DEVICE;
 
 	/* Map the payload indicated by 'byte_count' */
+	xdr_off = 0;
 	for (sge_no = 1; byte_count && sge_no < vec->count; sge_no++) {
-		int xdr_off = 0;
 		sge_bytes = min_t(size_t, vec->sge[sge_no].iov_len, byte_count);
 		byte_count -= sge_bytes;
 		ctxt->sge[sge_no].addr =
@@ -457,6 +458,13 @@ static int send_reply(struct svcxprt_rdma *rdma,
 	}
 	rqstp->rq_next_page = rqstp->rq_respages + 1;
 
+	/* The loop above bumps sc_dma_used for each sge. The
+	 * xdr_buf.tail gets a separate sge, but resides in the
+	 * same page as xdr_buf.head. Don't count it twice.
+	 */
+	if (sge_no > ctxt->count)
+		atomic_dec(&rdma->sc_dma_used);
+
 	if (sge_no > rdma->sc_max_sge) {
 		pr_err("svcrdma: Too many sges (%d)\n", sge_no);
 		goto err;
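The last hunk's accounting fix can be sketched in isolation. This is an illustrative stand-alone function, not the svcrdma API: the mapping loop counts every sge against a DMA-use counter, but the xdr_buf tail occupies the same page as the head, so when a tail sge was added (sge_no exceeds the number of pages the context owns in ctxt_count) one increment must be undone.

```c
#include <assert.h>

/* Illustrative model of the sc_dma_used accounting after send_reply();
 * names mirror the patch but this is not kernel code. */
static int dma_used_after_send(int ctxt_count, int sge_no)
{
	int sc_dma_used = 0;
	int i;

	for (i = 0; i < sge_no; i++)	/* the mapping loop counts every sge */
		sc_dma_used++;

	if (sge_no > ctxt_count)	/* tail sge shares the head's page */
		sc_dma_used--;

	return sc_dma_used;
}
```

With three pages owned by the context, a reply that needed a separate tail sge (four sges) still accounts for only three mapped pages, matching the comment added in the patch.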
