Add bind modes tests and run on multi-numa machine #415
Conversation
Are you going to address the following in this PR?
We should check that policies like 'preferred' actually fall back to other nodes when there is not enough memory on the specified node. Also, for modes that should fail when there is not enough memory, we should test that they actually do fail. One way to do that would be to add some kind of environment variable which, if set, we could use to specify the NUMA node on which we want to allocate (and use it for all binding tests). Then we could run those tests in an environment with some custom NUMA nodes.
// Test for allocations on NUMA nodes. This test will be executed for all NUMA nodes
// available on the system. The available nodes are returned in a vector from the
// get_available_numa_nodes_numbers() function and passed to the test as parameters.
-TEST_P(testNumaNodes, checkNumaNodesAllocations) {
+TEST_P(testNumaOnAllNodes, checkNumaNodesAllocations) {
I think it would be good to have a test case for BIND where you set multiple NUMA nodes in the node mask. This is allowed according to the documentation.
Done, but I'm not sure exactly which node should be used.
According to man mbind:
> If nodemask specifies more than one node, page allocations will come from the node with sufficient free memory that is closest to the node where the allocation takes place.
According to man set_mempolicy:
> If nodemask specifies more than one node, page allocations will come from the node with the lowest numeric node ID first.
I think we rely on the mbind behavior (proximity of the nodes) so it would be good to verify that.
Makes sense, you're right. Done.
Works as expected (on my fork): https://github.com/lukaszstolarczuk/unified-memory-framework/actions/runs/8816696117
Extend current tests with more checks and more cases. Run selected cases on all CPUs or all NUMA nodes.
The version we install is probably too old for multi-numa tests.
Description
Continuation of #241.
Ref: #242
Checklist