
Initial memspace and memory_target APIs #53


Merged
merged 5 commits on Jan 11, 2024

Conversation

igchor
Member

@igchor igchor commented Dec 1, 2023

This is an initial draft of APIs for memspace and memory_target (without any actual implementation).

A few things to discuss:

  1. Should memory_target_ops be a public API? Exposing it would allow us to implement support for some targets outside of this repo; e.g., we could implement support for UR directly where it's needed.
  2. Where should the logic for creating a pool/provider from a memspace reside? In this PR I proposed making it part of the memory_target implementation. This won't work if a memspace consists of memory_targets of different types (but we haven't explored this much yet anyway).
  3. Do we want to have type-specific queries (e.g. node_id for a NUMA memory_target)?

@vinser52
Contributor

vinser52 commented Dec 4, 2023

What you described makes sense, but to break it down into several independent steps I suggest the following:

  1. Expose only predefined memspaces for CPU (e.g. highest bandwidth, highest capacity, etc.). Keep the ops structure and underlying implementation internal. This first step lets us define the API for pool creation from a memspace and replace Memkind.
  2. Think about what we need to expose to support UR.
  3. Think about what we need to expose to support user-defined memory spaces and memory targets.

Perhaps steps 2 and 3 are dependent.

Let's expose only required APIs for each step but design the implementation with all future use cases in mind.
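Step 1 above boils down to a two-call flow: fetch a predefined memspace, then create a pool from it. The sketch below stubs out that flow so it compiles; umfMemspaceHighestBandwidthGet is an assumed name for a predefined-memspace getter, and the struct bodies are placeholders (the real handles are opaque).

```c
#include <assert.h>
#include <stddef.h>

/* Placeholder types; the real handles are opaque library types. */
typedef struct memspace_t { const char *name; } memspace_t;
typedef struct pool_t { const memspace_t *space; } pool_t;

static const memspace_t HBW = {"highest-bandwidth"};
static pool_t g_pool;

/* Stub: return the predefined highest-bandwidth memspace (assumed name). */
static const memspace_t *umfMemspaceHighestBandwidthGet(void) { return &HBW; }

/* Stub: "create" a pool bound to the given memspace. */
static int umfPoolCreateFromMemspace(const memspace_t *space, pool_t **pool) {
    g_pool.space = space;
    *pool = &g_pool;
    return 0; /* success */
}

int main(void) {
    /* Step 1 flow: predefined memspace in, pool out; no ops exposed. */
    const memspace_t *space = umfMemspaceHighestBandwidthGet();
    pool_t *pool = NULL;
    assert(umfPoolCreateFromMemspace(space, &pool) == 0);
    assert(pool->space == space);
    return 0;
}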

@igchor igchor force-pushed the memspace_api branch 4 times, most recently from 402c168 to c61e914 Compare December 4, 2023 21:05
@igchor igchor force-pushed the memspace_api branch 2 times, most recently from 1b501c9 to b818231 Compare December 4, 2023 21:15
@igchor
Member Author

igchor commented Dec 5, 2023

Implementation for NUMA is here: igchor#1

@igchor igchor force-pushed the memspace_api branch 3 times, most recently from d78c8fc to ad09620 Compare December 13, 2023 00:08
@igchor
Member Author

igchor commented Dec 13, 2023

Depends on: #69

We could also skip implementing umfPoolCreateFromMemspace in this PR if you think we need to discuss the approach in #69 more.

@igchor igchor marked this pull request as ready for review December 13, 2023 00:10
@igchor igchor requested a review from a team as a code owner December 13, 2023 00:10
@igchor igchor force-pushed the memspace_api branch 2 times, most recently from b81f25d to 978584d Compare December 18, 2023 18:24
@igchor igchor requested a review from vinser52 December 18, 2023 18:24
@igchor igchor force-pushed the memspace_api branch 2 times, most recently from 4b99aea to 801276c Compare December 20, 2023 17:52
@bratpiorka
Contributor

@vinser52 I think we could go with this code. I know it still needs some changes in the future, but it is a good basis for introducing the memspace API.

/// \brief Destroys memspace
/// \param hMemspace handle to memspace
///
void umfMemspaceDestroy(umf_memspace_handle_t hMemspace);
Contributor

Where is the corresponding create function?

Another question: if we consider only predefined memspaces as a first step do we need Create/Destroy functions for memspaces in a public API?

Member Author

Right now we only have umfMemspaceCreateFromNumaArray, which is defined in memspace_numa.h.

We could move that function to memspace.h for now, but in the long term I think those create functions should live in separate headers (since we might want to have the memspaces in separate libs).

Member Author

> Another question: if we consider only predefined memspaces as a first step do we need Create/Destroy functions for memspaces in a public API?

Ok, good point. @bratpiorka, for OpenMP we won't need umfMemspaceCreateFromNumaArray, right? If that's the case, I can make it internal for now.

Member Author

Done. I've made those functions private for now.

Member Author

This PR does not expose the predefined memspaces; that will be added in the next PR (it requires integrating with hwloc).

@bratpiorka
Contributor

bratpiorka commented Jan 9, 2024

@igchor please rebase
@vinser52 can we merge this PR? I think we could implement the next steps in separate PRs.

@igchor
Member Author

igchor commented Jan 9, 2024

> @igchor please rebase @vinser52 can we merge this PR? I think we could implement the next steps in separate PRs.

Rebased.

@igchor
Member Author

igchor commented Jan 10, 2024

@vinser52 please take a look. We'd like to unblock the next step, which is implementing HOST_MEMSPACE.

umf_memspace_policy_handle_t policy, umf_memory_pool_handle_t *pool);

umf_result_t (*memory_provider_create_from_memspace)(
umf_memspace_handle_t memspace, void **memoryTargets, size_t numTargets,
Contributor

What if a memspace consists of multiple memory_targets? How will we choose which memory_target to ask to create a pool or provider from the memspace?

Member Author

That's why we only support memory targets of the same type for now. See here:

assert(verifyMemTargetsTypes(memspace) == UMF_RESULT_SUCCESS);

Contributor

I understand that today we support only memory targets of the same type (the same ops structure). My question is mostly about the architecture/design, not the current implementation. If we are creating a pool/provider from a memspace, perhaps the current design is wrong and the memspace should be responsible for pool/provider creation? In the future I can imagine a memspace that consists of a GPU memory target and a NUMA memory target; in that case I would expect us to choose a GPU memory provider (L0 or CUDA) and create a USM shared pool.

What do you think?

@vinser52 vinser52 merged commit da4ae27 into oneapi-src:main Jan 11, 2024
4 participants