high-level tcp bindings for std #2409
Conversation
```rust
let server_data_ptr = uv::ll::get_data_for_uv_handle(handle)
    as *tcp_listen_fc_data;
let kill_ch = (*server_data_ptr).kill_ch;
alt (*server_data_ptr).active {
```
How about `if` here?

How safe is …?
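For context on the first comment: the suggestion is that a two-armed boolean `alt` like the one above reads more directly as a plain conditional. A minimal sketch of the rewrite, in modern syntax (`alt` is today's `match`; the bool field and messages are assumed for illustration):

```rust
// Instead of: match active { true => { /* … */ } false => { /* … */ } }
// a plain conditional says the same thing with less ceremony.
fn dispatch(active: bool) {
    if active {
        println!("server handle still live: keep processing");
    } else {
        println!("kill signal already seen: tear down");
    }
}

fn main() {
    dispatch(true);
}
```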
@brson ok, so once the … Ideally, the user would keep the …
* tweaked the layout of the sockaddr_in6 struct in anticipation of future use
* changed several uv::ll fn signatures to use generics and be more flexible with the ptr types they get passed
* add uv_err_data and a helper fn to return it.. packages up the err_name and err_msg info from the uv_get_last_error() stuff..
they're changed into a net::tcp::tcp_err_data record, for now. Once the scope of possible tcp errors from libuv is established, I'll create an err type for each one and return those where they might occur.
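Going by the commit message above, the interim record just packages the two strings libuv reports. A hedged sketch of that shape in modern syntax (the struct name and String fields are stand-ins for the era's record and ~str):

```rust
// Sketch of the interim error record described above: it simply carries
// the err_name / err_msg pair pulled from uv_get_last_error().
#[derive(Debug, Clone)]
pub struct TcpErrData {
    pub err_name: String, // short error name, e.g. "EADDRINUSE"
    pub err_msg: String,  // human-readable message from libuv
}
```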
more flexibility..
still need implementation for parsing/output formatting and (perhaps?) representation (for now, I just followed the ipv4 variant's lead and am representing it as a tuple of 8x u16). Parsing an ipv6 addr is way more complex than parsing an ipv4 addr, so I'm putting off an implementation here, for now. Candidate solutions:
- could use getaddrinfo() (exists on both POSIX and windows), but with incompatible fn signatures
- libuv has a way to parse an ipv6 string into a sockaddr_in6, but it also requires a port, so it's probably not appropriate for ip_addr
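For concreteness, here is a sketch of the representation being described, in modern enum syntax; the four-octet ipv4 layout is inferred from "followed the ipv4 variant's lead", and the names are illustrative rather than this PR's actual declarations:

```rust
// ipv4 as four octets; ipv6 as the stated tuple of eight u16 groups.
#[derive(Debug)]
pub enum IpAddr {
    Ipv4(u8, u8, u8, u8),
    Ipv6(u16, u16, u16, u16, u16, u16, u16, u16),
}

fn main() {
    // loopback in both representations
    println!("{:?}", IpAddr::Ipv4(127, 0, 0, 1));
    println!("{:?}", IpAddr::Ipv6(0, 0, 0, 0, 0, 0, 0, 1));
}
```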
also whitespace cleanup.

For now, the test just spins up the server and listens for messages, echoing them back to an output port. There's a "kill" msg that it will listen for. Need to point the tcp client and server test impls at each other for a loopback server/client test, like how it's done in uv::ll. Once ipv6 parse/format lands, I can add another test using entirely the same codebase, but substituting an ip_addr ipv6 variant for the ipv4 variant used in the existing code. Still need some other plumbing to get the client/server tests to work together.
.. going to rework the listen() API to be non-blocking.
I need these in the context of doing various malloc/free operations for libuv structs that need to live in the heap, because of API workflow (there's no stack to put them in). This has cropped up several times when impl'ing the high-level API for things like timers, but I've decided to take the plunge and use this approach for the net::tcp module. Technically, this can be avoided by spawning a new task that contains the needed memory structures on its stack and then having it block for the duration of the time we need that memory to be valid (this is what I did in std::timer). Exposing this API provides a much lower overhead way to address the issue, albeit with safety concerns. The main mitigation policy should be to use malloc/free with libuv handles only when the handles are then associated with a resource or class-with-dtor, so we have a finite lifetime for the object and can guarantee a free(), barring a runtime crash (in which case you have bigger problems!)
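The mitigation policy described (every malloc'd handle owned by a resource/class-with-dtor) is what we would now express with a Drop impl. A minimal sketch under that assumption; the handle type and the alloc/free calls stand in for the real uv::ll helpers:

```rust
use std::alloc::{alloc, dealloc, Layout};

// Stand-in for a heap-allocated libuv handle; real code would use the
// uv_tcp_t / uv_timer_t layouts from uv::ll.
#[repr(C)]
struct UvHandle {
    data: *mut u8,
}

// Owner type: its destructor guarantees the free(), giving the handle
// the finite lifetime the policy above calls for.
struct OwnedHandle {
    ptr: *mut UvHandle,
}

impl OwnedHandle {
    fn new() -> OwnedHandle {
        // heap-allocated because libuv holds a pointer to the handle
        // across calls, so it cannot live in any one stack frame
        let ptr = unsafe { alloc(Layout::new::<UvHandle>()) as *mut UvHandle };
        assert!(!ptr.is_null(), "allocation failed");
        OwnedHandle { ptr }
    }
}

impl Drop for OwnedHandle {
    fn drop(&mut self) {
        // reached on every exit path short of a runtime crash
        unsafe { dealloc(self.ptr as *mut u8, Layout::new::<UvHandle>()) }
    }
}

fn main() {
    let _handle = OwnedHandle::new(); // freed automatically at scope end
}
```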
.. turns out that, without the export, the modules aren't accessible outside of the crate itself. I thought that, by importing some module into another (nesting it) and exporting from that nested module (which is, itself, exported from std.rc), my mod would be in the build artifact. This doesn't appear to be the case. Learning is fun!
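In today's terms, the lesson is that every level of the module path has to be exported for the module to escape the crate. A rough modern equivalent (syntax modernized from the std.rc-era export lists; the stub items are placeholders):

```rust
// Crate root (the role std.rc plays here): every level of the path must
// be public, or net::tcp stays invisible outside the crate even though
// the nested module exports its own items.
pub mod net {
    pub mod tcp {
        pub fn placeholder() {} // stub standing in for the tcp bindings
    }
    pub mod ip {
        pub fn placeholder() {} // stub standing in for ip_addr etc.
    }
}
```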
- we now have two interfaces for the TCP/IP server/listener workflow, based on different user approaches surrounding how to deal with the flow of accepting a new tcp connection:

  1. the "original" API closely mimics the low-level libuv API, in that we have an on_connect_cb that the user provides *that is run on the libuv thread*. In this callback, the user can accept() a connection, turning it into a tcp_socket.. of course, before accepting, they have the option of passing it to a new task, provided they *make the cb block until the accept is done*.. this is because, in libuv, you have to do the uv_accept call in the span of that on_connect_cb callback that gets fired when a new connection comes in. thems the breaks.. I wanted to just get rid of this API, because the general proposition of users always running code on the libuv thread sounds like an invitation for many future headaches. the API restriction of having to choose to immediately accept a connection (and allow the user to block libuv as needed) isn't too bad for power users who could conceive of circumstances where they would drop an incoming TCP connection and know what they're doing, in general. but as a general API, I thought this was a bit cumbersome, so I ended up devising..

  2. an API that is initiated with a call to `net::tcp::new_listener()`.. has a similar signature to `net::tcp::listen()`, except that it just returns an object that sort of behaves like a `comm::port`. Users can block on the `tcp_conn_port` to receive new connections, either in the current task or in a new task, depending on which API route they take (`net::tcp::conn_recv` or `net::tcp::conn_recv_spawn`, respectively).. there is also a `net::tcp::conn_peek` function that will do a peek on the underlying port to see if there are pending connections.

  The main difference, with this API, is that the low-level libuv glue is going to *accept every connection attempt*, along with the overhead that that brings. But this is a much more hassle-free API for 95% of use cases and will probably be the one that most users will want to reach for.
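To make style 1 concrete, here is a rough sketch using std's blocking listener as a stand-in for the libuv loop; the function name and callback shape are hypothetical, not this PR's actual API:

```rust
use std::io;
use std::net::{TcpListener, TcpStream};

// Hypothetical illustration of style 1: the callback runs on the single
// I/O thread, so accepting (or dropping) the connection must happen
// within the span of the callback, just as uv_accept must happen inside
// on_connect_cb. Blocking in the callback blocks the whole loop.
fn listen_with_callback<F>(addr: &str, on_connect: F) -> io::Result<()>
where
    F: Fn(TcpStream),
{
    let listener = TcpListener::bind(addr)?;
    for conn in listener.incoming() {
        on_connect(conn?); // nothing else is serviced until this returns
    }
    Ok(())
}
```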
.. this test fails frequently, locally, when run with the batch of other global_loop tests, due to how valgrind deals with multithreading in the test app. Not sure what to do here.
* there are a few places where I was experimenting w/ using `alt` in places where `if`/`else` would've sufficed. don't drink the koolaid!
* I had an unneeded `else` structure (the `if` branch that preceded it concluded with a `fail` statement.. I added the `fail` later in the dev cycle for this branch, so I forgot to remove the `else` after doing so)
* consistent wrt `prop_name: value` vs. `prop_name : value` in record decl and initialization
* changed an `alt` exp on an `ip_addr` to actually be exhaustive, instead of using a catch-all clause
… write; also: read_future à la write_future .. woooooohooooooooo
…them
- change port of tcp server test in uv_ll to avoid conflict w/ test in net::tcp
- a few places the tcp::read fn is used in test w/ a timeout.. suspend use of the timeout from here on out
Merged! Thanks for this huge amount of work and for writing such a thorough description in the pull request. I've read a fair bit of the code now and I am happy with the direction we're going. The overall design seems to get a little clearer every iteration. I have yet to take a close look at the final tcp interface, though I do intend to. Integrating it with servo might help me get a better feel for where we are - this week I will try to do some experiments with it. Some miscellaneous things we should consider for the future:
I am going to leave this pull request open a while longer because I want to come back to it again.
Thanks for fixing timer, with the send/copy kind change. I did a fix last night, but was running tests overnight, so you beat me to it. 100% agreement on all points. We're getting there.
quick summary
This pull req includes high-level TCP/IP bindings for the rust stdlib (both server and client API), under the `std::net::tcp` module. I also did some reshuffling of the IP stuff and pushed it into the `net::ip` module.

also: adding `result::unwrap` (from a patch from @nmatsakis) and ignore'd a test on std::timer that is starting to fail more and more, as perf chokes due to valgrind and threading behavior (the test is time-sensitive, so probably poorly conceived to begin with)
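For reference, the contract of the added `result::unwrap` is the usual move-out-or-fail one; a hand-written sketch of the same semantics in modern Rust (not the patch's actual code):

```rust
use std::fmt::Debug;

// Yield the ok value; fail (panic, in today's terms) on an err result.
fn unwrap<T, E: Debug>(r: Result<T, E>) -> T {
    match r {
        Ok(v) => v,
        Err(e) => panic!("unwrap called on an err result: {:?}", e),
    }
}

fn main() {
    let port = unwrap(Ok::<u16, String>(8080));
    println!("{}", port);
}
```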
only slightly more in-depth

An example demonstrating the TCP client/request API can be found here.
Interestingly, because of API-style friction between libuv and rust, I found it necessary to expose two separate UIs to the TCP server API. Quickly, with examples, they are:

1. the `net::tcp::listen_for_conn` entry point, an example of which is here. This call is blocking for the lifetime of the TCP server connection and uses a callback running on the libuv loop (!).
2. the `net::tcp::new_listener` entry point, an example of which can be found here. This bit of API, on successful setup, returns a (vaguely) `comm::port`-like object that users can peek/recv on (using custom API), as needed, to get new connections.

The `new_listener` API varies, subtly, from the `listen_for_conn` entry point because it accepts new connections immediately (with the overhead that this entails), while `listen_for_conn` gives a user the chance to manually `accept` a connection, or just drop it, in line with the default behavior of libuv.

To more fully articulate this, I'm just going to paste my commit msg from 95f0b90:
we now have two interfaces for the TCP/IP server/listener workflow, based on different user approaches surrounding how to deal with the flow of accepting a new tcp connection:

1. the "original" API closely mimics the low-level libuv API, in that we have an on_connect_cb that the user provides that is run on the libuv thread. In this callback, the user can accept() a connection, turning it into a tcp_socket.. of course, before accepting, they have the option of passing it to a new task, provided they make the cb block until the accept is done.. this is because, in libuv, you have to do the uv_accept call in the span of that on_connect_cb callback that gets fired when a new connection comes in. thems the breaks..

   I wanted to just get rid of this API, because the general proposition of users always running code on the libuv thread sounds like an invitation for many future headaches. the API restriction of having to choose to immediately accept a connection (and allow the user to block libuv as needed) isn't too bad for power users who could conceive of circumstances where they would drop an incoming TCP connection and know what they're doing, in general. but as a general API, I thought this was a bit cumbersome, so I ended up devising..

2. an API that is initiated with a call to `net::tcp::new_listener()`.. has a similar signature to `net::tcp::listen()`, except that it just returns an object that sort of behaves like a `comm::port`. Users can block on the `tcp_conn_port` to receive new connections, either in the current task or in a new task, depending on which API route they take (`net::tcp::conn_recv` or `net::tcp::conn_recv_spawn`, respectively).. there is also a `net::tcp::conn_peek` function that will do a peek on the underlying port to see if there are pending connections.

The main difference, with this API, is that the low-level libuv glue is going to accept every connection attempt, along with the overhead that that brings. But this is a much more hassle-free API for 95% of use cases and will probably be the one that most users will want to reach for.
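To give a feel for this second style, here is a sketch of the port-like workflow using an mpsc channel as a stand-in for the `tcp_conn_port`; everything here is an analogy (`conn_recv`/`conn_peek` map roughly onto `recv`/`try_recv`), not the PR's implementation:

```rust
use std::io;
use std::net::{TcpListener, TcpStream};
use std::sync::mpsc;
use std::thread;

// Analogy for net::tcp::new_listener(): a background thread accepts
// *every* connection attempt (the overhead noted above) and pushes each
// one onto a channel the caller can block on or peek at.
fn new_listener(addr: &str) -> io::Result<mpsc::Receiver<TcpStream>> {
    let listener = TcpListener::bind(addr)?;
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for conn in listener.incoming().flatten() {
            if tx.send(conn).is_err() {
                break; // receiver side dropped: stop accepting
            }
        }
    });
    Ok(rx)
}

fn main() -> io::Result<()> {
    let conns = new_listener("127.0.0.1:0")?;
    // conn_peek analogue: non-blocking check for a pending connection
    if let Ok(conn) = conns.try_recv() {
        println!("pending connection from {:?}", conn.peer_addr());
    }
    // the conn_recv analogue would be a blocking conns.recv()
    Ok(())
}
```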
what's needed to fill this out