Multifidelity Bayesian Optimization with KG #1320
-
Hi, I have a question about multi-fidelity BO with KG. It seems that the algorithm only finds the global minimum/maximum over all fidelities instead of the minimum/maximum of the high fidelity. If, for example, the low fidelity has lower/higher values than the high fidelity, BoTorch finds the minimum/maximum of the low fidelity even though we want the high-fidelity minimum/maximum. Am I correct, or have I missed something? Please let me know. I appreciate your time and help on this. Thanks.
-
Hi @S3anaaaaz.
`qMultiFidelityKnowledgeGradient` supports multiple use cases beyond the multi-fidelity one included in its name. For multi-fidelity use cases, you should be using the `project` operator (https://github.com/pytorch/botorch/blob/main/botorch/acquisition/knowledge_gradient.py#L373-L376) to map to the target fidelity. You can find example usage of this in the multi-fidelity BO tutorials.
Optimizing `qMultiFidelityKnowledgeGradient` will give you the point that offers the most information / improvement (as defined by the acquisition function) across the whole design space. In the …
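For reference, a minimal sketch of how the `project` argument is typically wired up, in the spirit of the multi-fidelity KG tutorial. The toy data, the choice of column 1 as the fidelity parameter, and all hyperparameter values are placeholder assumptions, not taken from this thread; `data_fidelity` was the constructor argument at the time of this discussion (newer releases use `data_fidelities=[1]`).

```python
import torch
from botorch.models import SingleTaskMultiFidelityGP
from botorch.fit import fit_gpytorch_model
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition import qMultiFidelityKnowledgeGradient
from botorch.acquisition.utils import project_to_target_fidelity
from botorch.optim import optimize_acqf

# Toy 2-column design: column 0 is the design variable, column 1 the fidelity.
train_X = torch.rand(20, 2, dtype=torch.double)
train_X[:, 1] = torch.tensor([0.5, 1.0], dtype=torch.double)[torch.randint(0, 2, (20,))]
# The low fidelity is deliberately biased downward relative to the high fidelity.
train_Y = torch.sin(6.0 * train_X[:, :1]) - (1.0 - train_X[:, 1:])

model = SingleTaskMultiFidelityGP(train_X, train_Y, data_fidelity=1)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_model(mll)

# Map fantasy points to the highest fidelity before evaluating the KG value.
target_fidelities = {1: 1.0}

def project(X):
    return project_to_target_fidelity(X=X, target_fidelities=target_fidelities)

mfkg = qMultiFidelityKnowledgeGradient(
    model=model,
    num_fantasies=16,
    project=project,  # without this, the value is assessed over all fidelities
)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, acq_value = optimize_acqf(
    acq_function=mfkg,
    bounds=bounds,
    q=1,
    num_restarts=4,
    raw_samples=32,
)
print(candidate)
```

With `project` in place, the fantasized improvement is measured at the target fidelity, so a biased low fidelity no longer pulls the recommendation toward the low-fidelity optimum.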
-
Thank you for your answer. The dimension of the example I am using is 1. The minimum of the high fidelity is at x = -1.5 with y = -5.72, but the function finds y = -7.2, which is the minimum of the low fidelity. Thanks a lot.
-
Thank you so much for your answer.
-
Hi, sorry, I have another problem with BoTorch. I tried different initializations, but it didn't help. What should I do?
-
Thank you so much for your answer. I attached the code; I would appreciate it if you could take the time to help me solve the problem.
On Tue, Aug 2, 2022 at 7:40 PM Max Balandat wrote:
These are numerical issues that happen if the covariances involved in the GP computations are very ill-conditioned. This often happens in cases where there are a lot of repeated points. It can happen in different places in the code path though, so to understand what is going on we'd need a full stack trace or repro.
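Since the quoted comment points at ill-conditioned covariances, often caused by many repeated or near-duplicate training points, a minimal sketch of common mitigations is shown below, assuming a standard SingleTaskGP setup. The jitter value, transforms, and toy data are illustrative assumptions, not a fix confirmed in this thread.

```python
import torch
import gpytorch
from botorch.models import SingleTaskGP
from botorch.models.transforms import Normalize, Standardize
from botorch.fit import fit_gpytorch_model
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy data with near-duplicate points, which tends to hurt conditioning.
train_X = torch.rand(15, 2, dtype=torch.double).repeat(2, 1)
train_X[15:] += 1e-6 * torch.randn_like(train_X[15:])
train_Y = train_X.sum(dim=-1, keepdim=True) + 0.01 * torch.randn(30, 1, dtype=torch.double)

# Normalizing inputs and standardizing outcomes usually improves conditioning.
model = SingleTaskGP(
    train_X,
    train_Y,
    input_transform=Normalize(d=train_X.shape[-1]),
    outcome_transform=Standardize(m=1),
)
mll = ExactMarginalLogLikelihood(model.likelihood, model)

# If Cholesky factorizations still fail, a larger jitter trades a little
# accuracy for numerical stability (double-precision setting shown here).
with gpytorch.settings.cholesky_jitter(double=1e-4):
    fit_gpytorch_model(mll)
    posterior = model.posterior(torch.rand(5, 2, dtype=torch.double))
    print(posterior.mean)
```

If the error persists, posting the full stack trace or a small repro, as requested above, is the quickest way to a diagnosis.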