[WIP] Refactoring of KernelSum and KernelProduct #73

Conversation
This seems reasonable to me @theogf -- it's what Stheno has always done for precisely the same reasons. I wonder whether we can do better though. Another option is to allow for one of a couple of containers, rather than requiring only two elements -- specifically either a `Tuple` of kernels or an `AbstractVector` of kernels.
(The above also applies to `KernelProduct`.) Not requiring the container to be a tuple type allows for large combinations of kernels. Say you want to implement an additive kernel built from many components; a tuple of that length would be unwieldy. A rough sketch of the idea follows below.
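As a rough illustration of that idea (a hypothetical sketch, not KernelFunctions.jl code; `Kernel` below is a local stand-in for the package's abstract type):

```julia
# Local stand-in for the package's abstract kernel type.
abstract type Kernel end

# Components stored either in a Tuple (type-stable, good for a handful of
# kernels) or in an AbstractVector (convenient for large combinations).
struct KernelSum{Tks<:Union{Tuple,AbstractVector}} <: Kernel
    kernels::Tks
end

# Vararg convenience constructor: KernelSum(k1, k2, k3) uses a Tuple container.
KernelSum(ks::Kernel...) = KernelSum(ks)
```

With this, `KernelSum(k1, k2, k3)` stores a `Tuple`, while `KernelSum([k1, k2, ...])` stores a `Vector` when the number of components is large.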
My intuition has always been that this optimisation is likely not worth it. Do you have any particular kernels in mind for which this is going to be helpful?
I suggested this since I was wondering if it could be helpful, e.g., for the computation of the kernel matrix. In the case of two components or a tuple of components, the compiler could figure out whether such an optimization is possible if the kernels share the same metric and transform.
Okay, my question was more of the form: can you give me some kernels for which this optimisation would be helpful, which I can't trivially re-write as a more efficient kernel, and which I actually care about? For example,
Yeah, that's a fair point. Theoretically it is possible, and it would lead to a performance improvement, but in practice such a case would never be found! To stretch it to a maximum, in the case of a sum of many kernels sharing the same metric and transform, the pairwise distances would only need to be computed once.
Yeah, this is the only case that I can see this helping in. It might actually come up fairly often in 1 / low dimensions if you've got a sum of isotropic kernels with different length scales or something. No idea whether it's ever a bottleneck though. In any case, I'm certainly not opposed to someone figuring out whether this is really worth it / how to make it work in cases that we care about, but it doesn't need to (and probably shouldn't) happen in this PR.
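To make that case concrete, here is a rough, hypothetical sketch of the optimisation (the kernel type, helper names, and the use of Distances.jl are all assumptions for illustration, not part of this PR):

```julia
using Distances: SqEuclidean, pairwise

# Squared-exponential kernels differing only in their lengthscale ℓ
# (a stand-in type, not the package's SqExponentialKernel).
struct SqExp
    ℓ::Float64
end

# Map a squared Euclidean distance to a kernel value.
apply(k::SqExp, d2) = exp(-d2 / (2 * k.ℓ^2))

# Compute the pairwise squared distances once (observations are columns),
# then reuse them for every component: a single distance pass followed by
# one cheap elementwise map per kernel.
function summed_kernelmatrix(ks, X::AbstractMatrix, Y::AbstractMatrix)
    D2 = pairwise(SqEuclidean(), X, Y; dims=2)
    return sum(apply.(Ref(k), D2) for k in ks)
end

# Example: a sum of isotropic kernels with different lengthscales in 1 dimension.
# summed_kernelmatrix([SqExp(0.5), SqExp(1.0), SqExp(2.0)], randn(1, 100), randn(1, 80))
```

Whether this actually beats calling each component's own optimised kernel matrix routine depends on how expensive the distance computation is relative to the elementwise maps.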
Regarding what you said in
Not entirely sure. There is a number somewhere in wherever
I'm not sure that I follow. For example, the following works for any container of kernels:

```julia
struct KernelSum{Tks} <: Kernel
    ks::Tks
end

function kernelmatrix(
    κ::KernelSum,
    X::AbstractMatrix,
    Y::AbstractMatrix;
    obsdim::Int = defaultobs,
)
    # Compute each component's matrix and sum them; `map` handles both
    # Tuple and AbstractVector containers of kernels.
    return sum(map(k -> _kernelmatrix(k, X, Y, obsdim), κ.ks))
end
```

Have I missed something?
The whole thing of having an "arbitrary" threshold of kernels seems a bit weird. And I thought having two different parametrizations would be complicated, but you just showed that the common interface is actually pretty simple.
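For reference, here is a self-contained toy version of that common interface (every name here, such as `Toy`, `_kernelmatrix`, and `toy_kernelmatrix`, is assumed for illustration and is not the package's API):

```julia
# Toy stand-ins; none of these names come from KernelFunctions.jl.
abstract type ToyKernel end

struct Toy <: ToyKernel
    scale::Float64
end

# Placeholder for a real per-component kernel matrix computation.
_kernelmatrix(k::Toy, X, Y) = k.scale .* (X' * Y)

# Same `sum(map(...))` body as in the snippet above.
toy_kernelmatrix(ks, X, Y) = sum(map(k -> _kernelmatrix(k, X, Y), ks))

X, Y = randn(2, 5), randn(2, 4)
# Tuple and Vector containers give the same result.
toy_kernelmatrix((Toy(1.0), Toy(2.0)), X, Y) ≈ toy_kernelmatrix([Toy(1.0), Toy(2.0)], X, Y)
```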
Closed in favor of #146
Following #68, we decided to refactor KernelSum and KernelProduct to contain only two kernels, in order to avoid abstract types and unnecessary dispatch. There is still work to do to see whether, for kernels with the same metric AND transform, one can save some computation time.
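For context, a minimal sketch of what the two-kernel parametrization could look like (field and constructor names here are assumptions, not necessarily the final package code):

```julia
# Local stand-in for the package's abstract kernel type.
abstract type Kernel end

# Exactly two components, each with a concrete type parameter, so the struct
# has no abstract fields and dispatch stays static.
struct KernelSum{K1<:Kernel,K2<:Kernel} <: Kernel
    kernel1::K1
    kernel2::K2
end

# Summing more than two kernels nests binary sums:
# k1 + k2 + k3 becomes KernelSum(KernelSum(k1, k2), k3).
Base.:+(k1::Kernel, k2::Kernel) = KernelSum(k1, k2)
```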