add thread local cache for brgemm #350
Conversation
@@ -137,24 +150,38 @@ void dnnl_brgemm_tilerelease() {
void dnnl_brgemm_execute(int64_t kernel_idx, void *A, uint64_t A_offset,
                         void *B, uint64_t B_offset, void *C, uint64_t C_offset,
                         int num) {
  auto it = tl_cache.find(kernel_idx);
It's better not to define tl_cache as a global static; define it here as a function-local static.
Thanks for the advice, fixed.
  if (it != tl_cache.end()) {
    desc_ptr = &it->second.desc;
    kernel = it->second.kernel;
  } else {
    read_lock_guard_t g(g_brgemm_lock);
Since it's thread local, do we still need this lock?
When the target brgemm kernel is not found in the thread-local cache, we still need to lock the global cache to get the target brgemm.
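The fallback path described above can be sketched as a two-level lookup. This is a hypothetical illustration, not the PR's actual code: `cache_entry_t`, `g_cache`, and `lookup` are stand-in names, and a `std::shared_mutex` stands in for the PR's `read_lock_guard_t`/`g_brgemm_lock` pair.

```cpp
#include <cstdint>
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <unordered_map>

// The thread-local map is consulted without any lock; only a miss takes a
// reader lock on the shared global cache and copies the entry over.
struct cache_entry_t {
    std::shared_ptr<int> payload; // stands in for desc/kernel/palette
};

static std::shared_mutex g_lock;
static std::unordered_map<int64_t, cache_entry_t> g_cache;

cache_entry_t *lookup(int64_t idx) {
    thread_local std::unordered_map<int64_t, cache_entry_t> tl_cache;
    auto it = tl_cache.find(idx);
    if (it != tl_cache.end()) return &it->second; // fast path: no lock
    std::shared_lock<std::shared_mutex> guard(g_lock); // slow path: read lock
    auto git = g_cache.find(idx);
    if (git == g_cache.end()) return nullptr;
    // Copy the shared_ptr-backed entry into the thread-local cache.
    return &tl_cache.emplace(idx, git->second).first->second;
}
```

Because `std::unordered_map` is node-based, the returned pointer stays valid across later insertions into the thread-local cache.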
struct brgemm_cache_info_t {
  std::shared_ptr<brgemm_desc_t> desc;
  std::shared_ptr<brgemm_kernel_t> kernel;
  std::shared_ptr<char> palette;
We could use shared_ptr<char[]> for palette. For desc and kernel we don't need to change anything, since they are not managed by smart pointers, and storing pointers to vector elements is dangerous as well.
fixed.
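The `shared_ptr<char[]>` suggestion can be sketched as below. This assumes C++17, where `shared_ptr` has array support and calls `delete[]` automatically; `make_palette` is an illustrative helper, and the 64-byte size is only a stand-in for the real PALETTE_SIZE.

```cpp
#include <cstddef>
#include <memory>

// Illustrative palette size; the actual constant lives in the PR's sources.
constexpr std::size_t PALETTE_SIZE = 64;

std::shared_ptr<char[]> make_palette() {
    // Value-initialized (zeroed) buffer; with shared_ptr<char[]> the array
    // deleter (delete[]) is selected automatically, no custom deleter needed.
    return std::shared_ptr<char[]>(new char[PALETTE_SIZE]());
}
```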
  std::shared_ptr<char> palette;

  brgemm_cache_info_t() = default;
  brgemm_cache_info_t(brgemm_desc_t *d, brgemm_kernel_t *k, char *p)
Ideally we need to change the unique_ptr in the global palette pool to shared_ptr as well, and pass the shared_ptr of the palette here for construction.
fixed.
struct brgemm_cache_info_t {
  std::shared_ptr<brgemm_desc_t> desc;
  std::shared_ptr<brgemm_kernel_t> kernel;
  std::shared_ptr<char> palette;
As we created brgemm_cache_info_t to store desc/kernel/palette together thread-locally, would it be better to also use brgemm_cache_info_t for global management?
I think it's a good idea; we can unify the struct used in both the thread-local and global caches.
Use brgemm_cache_info_t for both the thread-local and global cache.
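A sketch of the unified layout discussed above, with stand-in definitions for brgemm_desc_t and brgemm_kernel_t (the real ones come from oneDNN): one shared_ptr-owning record backs both caches, so copying an entry from the global map into a thread-local one just bumps reference counts.

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>

// Stand-in types; the real definitions come from the oneDNN headers.
struct brgemm_desc_t { int m = 0; };
struct brgemm_kernel_t { int id = 0; };

// One record type shared by the global and thread-local caches.
struct brgemm_cache_info_t {
    std::shared_ptr<brgemm_desc_t> desc;
    std::shared_ptr<brgemm_kernel_t> kernel;
    std::shared_ptr<char[]> palette;
};

std::unordered_map<int64_t, brgemm_cache_info_t> g_cache;              // global
thread_local std::unordered_map<int64_t, brgemm_cache_info_t> tl_cache; // per thread
```

With this shape, promoting an entry into the thread-local cache is a plain copy of shared_ptr members and never transfers or duplicates ownership.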
    return;
  }
  palette_buffer = g_brgemm_palette[kernel_idx].get();
  info = {&g_brgemm_desc_list[kernel_idx], g_brgemm_kernel_list[kernel_idx],
It is not safe to assign a raw pointer to the shared_ptr in the struct: the shared_ptr will release the pointer when its ref count reaches 0.
fixed.
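The hazard called out above can be illustrated with stand-in types (`entry_t` and `make_entry_safe` are hypothetical names, not the PR's code): wrapping a raw pointer that the global pool still owns in a new shared_ptr creates a second, independent owner and leads to a double delete. The safe pattern is to copy the existing shared_ptr.

```cpp
#include <memory>

struct entry_t { std::shared_ptr<int> desc; };

// UNSAFE (do not do this): entry_t{std::shared_ptr<int>(raw_ptr)} would make
// a second, independent owner of raw_ptr and double-delete it.
//
// SAFE: copy the pool's shared_ptr, so the refcount goes up and there is a
// single ownership chain.
entry_t make_entry_safe(const std::shared_ptr<int> &global_desc) {
    return entry_t{global_desc};
}
```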
@@ -93,33 +102,33 @@ int64_t dnnl_brgemm_dispatch(int64_t M, int64_t N, int64_t K, int64_t LDA,
  brgemm_desc_set_attr(&desc, dnnl_attrs);

  // TODO(haixin): Reuse identical palettes across kernels
  char *palette_buffer = nullptr;
  std::shared_ptr<char[]> palette_buffer(new char[PALETTE_SIZE],
                                         std::default_delete<char[]>());
We only need to new the palette buffer when desc.is_tmm is true.
fixed.
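The agreed conditional allocation can be sketched as follows; `maybe_alloc_palette` is a hypothetical helper, and the 64-byte PALETTE_SIZE is only illustrative. Only AMX ("tmm") kernels use a tile palette, so every other kernel skips the heap allocation.

```cpp
#include <cstddef>
#include <memory>

constexpr std::size_t PALETTE_SIZE = 64; // illustrative size

std::shared_ptr<char[]> maybe_alloc_palette(bool is_tmm) {
    if (!is_tmm) return nullptr; // non-AMX kernel: no palette buffer needed
    // Zero-initialized buffer, freed with delete[] by shared_ptr<char[]>.
    return std::shared_ptr<char[]>(new char[PALETTE_SIZE]());
}
```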
// TODO(haixin): use syscall to determine page size?
static constexpr size_t SCRATCH_SIZE = 2 * 4096;
// TODO(haixin): need to use custom thread management for scratch in the future?
static thread_local char scratch[SCRATCH_SIZE] = {0};

static std::unordered_map<int64_t, brgemm_cache_info_t> &get_tl_cache() {
  thread_local std::unordered_map<int64_t, brgemm_cache_info_t> tl_cache;
Sorry I am late for the party. Can we use std::vector for better performance?
The "key" here might not be contiguous?
Haixin originally used a vector to hold the kernels. I think he tried to make them contiguous. Need to double check that.
In the global cache it's contiguous, but in the thread-local cache it might not be. I think we can still use a vector for the thread-local cache, with empty "holes" inside it. Using unordered_map would indeed bring some extra cost.
The previous design was able to use a vector for access because there was only a single global cache storing the BRGEMM information. This PR introduces a new thread-local cache, and the indices in this cache may not necessarily align with those in the global cache.
I think it is still profitable to use a vector. It is contiguous in memory, and most of the time it should be dense (will it really be common for one thread to call brgemm A while another calls brgemm B?). Please note that std::unordered_map is slow and space-consuming: it stores a key-value pair for each entry, and the pairs live in linked buckets, which takes at least 3x the space of a vector.
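The vector-with-holes alternative floated in this thread can be sketched as below. `entry_t` and `tl_vector_cache_t` are stand-in names for brgemm_cache_info_t and the thread-local container; an empty payload marks a hole, i.e. a kernel this thread never dispatched.

```cpp
#include <cstdint>
#include <memory>
#include <vector>

struct entry_t { std::shared_ptr<int> payload; };

struct tl_vector_cache_t {
    std::vector<entry_t> slots; // indexed directly by kernel_idx

    entry_t *find(int64_t idx) {
        if (idx < 0 || static_cast<std::size_t>(idx) >= slots.size())
            return nullptr;
        return slots[idx].payload ? &slots[idx] : nullptr; // hole == miss
    }

    void insert(int64_t idx, entry_t e) {
        if (static_cast<std::size_t>(idx) >= slots.size())
            slots.resize(idx + 1); // grows, leaving default-constructed holes
        slots[idx] = std::move(e);
    }
};
```

Lookup is a bounds check plus one indexed load, with no hashing; the cost is the default-constructed holes, which stay cheap as long as global kernel indices remain reasonably dense.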
Tracking Issue #323