Commit 655fec6

chucklever authored and amschuma-ntap committed
xprtrdma: Use gathered Send for large inline messages
An RPC Call message that is sent inline but that has a data payload (i.e., one or more items in rq_snd_buf's page list) must be "pulled up":

- call_allocate has to reserve enough RPC Call buffer space to accommodate the data payload

- call_transmit has to memcpy the rq_snd_buf's page list and tail into its head iovec before it is sent

As the inline threshold is increased beyond its current 1KB default, however, this means data payloads of more than a few KB are copied by the host CPU. For example, if the inline threshold is increased just to 4KB, then NFS WRITE requests up to 4KB would involve a memcpy of the NFS WRITE's payload data into the RPC Call buffer. This is an undesirable amount of participation by the host CPU.

The inline threshold may be much larger than 4KB in the future, after negotiation with a peer server.

Instead of copying the components of rq_snd_buf into its head iovec, construct a gather list of these components, and send them all in place. The same approach is already used in the Linux server's RPC-over-RDMA reply path.

This mechanism also eliminates the need for rpcrdma_tail_pullup, which is used to manage the XDR pad and trailing inline content when a Read list is present.

This requires that the pages in rq_snd_buf's page list be DMA-mapped during marshaling, and unmapped when a data-bearing RPC is completed. This is slightly less efficient for very small I/O payloads, but significantly more efficient as data payload size and inline threshold increase past a kilobyte.

Signed-off-by: Chuck Lever <[email protected]>
Signed-off-by: Anna Schumaker <[email protected]>
1 parent c8b920b commit 655fec6
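To make the gather-list idea in the commit message concrete, here is a minimal sketch of how the data-bearing parts of rq_snd_buf (its page list and tail) could each be described by their own ib_sge and DMA-mapped in place, instead of being copied into the head iovec. It is illustrative only: the helper name sketch_build_payload_sges, its parameters, and the assumption that the RPC-over-RDMA header and the head iovec are already covered by pre-mapped send buffers are simplifications, and DMA-mapping error handling is omitted. It is not the rpcrdma_prepare_send_sges() implementation this commit actually adds; that code is not part of the excerpt shown below.

/* Illustrative sketch only; assumes <rdma/ib_verbs.h>, <linux/sunrpc/xdr.h>,
 * and <linux/dma-mapping.h>. The header and head iovec are presumed to be
 * covered by pre-mapped send buffers occupying the first SGE(s); this
 * helper fills in one SGE per payload segment, starting at sge_no.
 */
static unsigned int
sketch_build_payload_sges(struct ib_device *device, u32 lkey,
			  struct xdr_buf *xdr, struct ib_sge *sge,
			  unsigned int sge_no)
{
	struct page **ppages = xdr->pages + (xdr->page_base >> PAGE_SHIFT);
	unsigned int page_base = offset_in_page(xdr->page_base);
	unsigned int remaining = xdr->page_len;

	/* One SGE per page of payload data, sent in place. */
	while (remaining) {
		unsigned int len = min_t(unsigned int,
					 PAGE_SIZE - page_base, remaining);

		sge[sge_no].addr = ib_dma_map_page(device, *ppages,
						   page_base, len,
						   DMA_TO_DEVICE);
		sge[sge_no].length = len;
		sge[sge_no].lkey = lkey;

		sge_no++;
		ppages++;
		remaining -= len;
		page_base = 0;
	}

	/* The tail carries the XDR pad and any trailing inline content
	 * when a Read list is present; mapping it in place is what makes
	 * rpcrdma_tail_pullup unnecessary.
	 */
	if (xdr->tail[0].iov_len) {
		struct page *page = virt_to_page(xdr->tail[0].iov_base);
		unsigned int off = offset_in_page(xdr->tail[0].iov_base);

		sge[sge_no].addr = ib_dma_map_page(device, page, off,
						   xdr->tail[0].iov_len,
						   DMA_TO_DEVICE);
		sge[sge_no].length = xdr->tail[0].iov_len;
		sge[sge_no].lkey = lkey;
		sge_no++;
	}

	/* Caller records sge_no in the Send WR's num_sge field. */
	return sge_no;
}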

File tree

5 files changed: +207 −185 lines changed

net/sunrpc/xprtrdma/backchannel.c

Lines changed: 3 additions & 30 deletions
@@ -206,7 +206,6 @@ int rpcrdma_bc_marshal_reply(struct rpc_rqst *rqst)
 	struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt);
 	struct rpcrdma_req *req = rpcr_to_rdmar(rqst);
 	struct rpcrdma_msg *headerp;
-	size_t rpclen;
 
 	headerp = rdmab_to_msg(req->rl_rdmabuf);
 	headerp->rm_xid = rqst->rq_xid;
@@ -218,36 +217,10 @@ int rpcrdma_bc_marshal_reply(struct rpc_rqst *rqst)
 	headerp->rm_body.rm_chunks[1] = xdr_zero;
 	headerp->rm_body.rm_chunks[2] = xdr_zero;
 
-	rpclen = rqst->rq_svec[0].iov_len;
-
-#ifdef RPCRDMA_BACKCHANNEL_DEBUG
-	pr_info("RPC: %s: rpclen %zd headerp 0x%p lkey 0x%x\n",
-		__func__, rpclen, headerp, rdmab_lkey(req->rl_rdmabuf));
-	pr_info("RPC: %s: RPC/RDMA: %*ph\n",
-		__func__, (int)RPCRDMA_HDRLEN_MIN, headerp);
-	pr_info("RPC: %s: RPC: %*ph\n",
-		__func__, (int)rpclen, rqst->rq_svec[0].iov_base);
-#endif
-
-	if (!rpcrdma_dma_map_regbuf(&r_xprt->rx_ia, req->rl_rdmabuf))
-		goto out_map;
-	req->rl_send_iov[0].addr = rdmab_addr(req->rl_rdmabuf);
-	req->rl_send_iov[0].length = RPCRDMA_HDRLEN_MIN;
-	req->rl_send_iov[0].lkey = rdmab_lkey(req->rl_rdmabuf);
-
-	if (!rpcrdma_dma_map_regbuf(&r_xprt->rx_ia, req->rl_sendbuf))
-		goto out_map;
-	req->rl_send_iov[1].addr = rdmab_addr(req->rl_sendbuf);
-	req->rl_send_iov[1].length = rpclen;
-	req->rl_send_iov[1].lkey = rdmab_lkey(req->rl_sendbuf);
-
-	req->rl_send_wr.num_sge = 2;
-
+	if (!rpcrdma_prepare_send_sges(&r_xprt->rx_ia, req, RPCRDMA_HDRLEN_MIN,
+				       &rqst->rq_snd_buf, rpcrdma_noch))
+		return -EIO;
 	return 0;
-
-out_map:
-	pr_err("rpcrdma: failed to DMA map a Send buffer\n");
-	return -EIO;
 }
 
 /**
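The commit message also notes that pages mapped this way must be unmapped once the data-bearing RPC completes. A minimal companion sketch of that teardown, under the same assumptions as the sketch above; the helper name and the convention that the header occupies SGE 0 while the payload occupies SGEs 1..count are hypothetical, and the commit's real unmap path is not shown in this excerpt.

/* Illustrative sketch only: release the DMA mappings created for the
 * payload SGEs once the RPC that used them has completed. Assumes the
 * header SGE (index 0) stays mapped for reuse and that "count" is the
 * number of payload SGEs mapped earlier with ib_dma_map_page().
 */
static void
sketch_unmap_payload_sges(struct ib_device *device, struct ib_sge *sge,
			  unsigned int count)
{
	unsigned int i;

	for (i = 1; i <= count; i++)
		ib_dma_unmap_page(device, sge[i].addr, sge[i].length,
				  DMA_TO_DEVICE);
}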
