examples/train-kernel-parameters/script.jl (+24 −24 lines)
# # Train Kernel Parameters
#
# Here we show a few ways to train (optimize) the kernel (hyper)parameters, using kernel-based regression with KernelFunctions.jl as the example.
# All options are functionally identical, but differ a little in readability, dependencies, and computational cost.
#
# We load KernelFunctions and some other packages. Note that while we use `Zygote` for automatic differentiation and `Flux.optimise` for optimization, you should be able to replace them with your favourite autodiff framework or optimizer.
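# As a minimal sketch of what such a setup can look like (the data `x_train`, `y_train` and the concrete kernel below are illustrative assumptions, not necessarily the ones used later in this script):

using KernelFunctions
using LinearAlgebra
using Zygote

# Toy 1-D regression data (purely illustrative).
x_train = range(-3.0, 3.0; length=50)
y_train = sin.(x_train) .+ 0.1 .* randn(length(x_train))

# Build a kernel from unconstrained parameters θ; `exp` keeps variance and inverse lengthscale positive.
kernel_from_params(θ) = exp(θ[1]) * (SqExponentialKernel() ∘ ScaleTransform(exp(θ[2])))

# Kernel ridge regression loss, with observation-noise variance exp(θ[3]).
function loss(θ)
    k = kernel_from_params(θ)
    K = kernelmatrix(k, x_train)
    ŷ = K * ((K + exp(θ[3]) * I) \ y_train)
    return sum(abs2, y_train - ŷ)
end

# One plain gradient-descent step; an optimizer from `Flux.optimise` (or any other package) could take its place.
θ = [0.0, 0.0, log(0.1)]
θ .-= 0.05 .* only(Zygote.gradient(loss, θ))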
# If we don't want to write an explicit function to construct the kernel, we can alternatively use the `Flux.destructure` function.
# Again, we need to ensure that the parameters are positive. Note that the `exp` function is now part of the loss function, instead of part of the kernel construction.
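# A rough sketch of this variant, again with illustrative data and an illustrative kernel rather than the exact ones used in this script:

using KernelFunctions, LinearAlgebra, Zygote, Flux

x_train = range(-3.0, 3.0; length=50)
y_train = sin.(x_train) .+ 0.1 .* randn(length(x_train))

# Build the kernel once with initial (positive) parameter values ...
k_init = 1.0 * (SqExponentialKernel() ∘ ScaleTransform(1.0))

# ... and let `Flux.destructure` pull out a flat parameter vector together with a rebuilding function.
θ_init, rebuild = Flux.destructure(k_init)

# The `exp` now sits inside the loss: we optimize unconstrained values and rebuild the kernel from their (positive) exponentials.
function loss(θ)
    k = rebuild(exp.(θ))
    K = kernelmatrix(k, x_train)
    ŷ = K * ((K + 0.1 * I) \ y_train)
    return sum(abs2, y_train - ŷ)
end

θ = log.(θ_init)
∇θ = only(Zygote.gradient(loss, θ))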
# We could also use ParameterHandling.jl here.
# To do so, one would remove the `exp`s from the loss function below and call `loss ∘ unflatten` as above.
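# A sketch of what the ParameterHandling.jl version could look like; the parameter names and data here are illustrative assumptions:

using KernelFunctions, LinearAlgebra, Zygote
using ParameterHandling

x_train = range(-3.0, 3.0; length=50)
y_train = sin.(x_train) .+ 0.1 .* randn(length(x_train))

# `positive` encodes the positivity constraint, so the loss itself needs no `exp`s.
raw_params = (variance=positive(1.0), lengthscale=positive(1.0), noise=positive(0.1))
flat_params, unflatten = ParameterHandling.value_flatten(raw_params)

function loss(p)
    k = p.variance * with_lengthscale(SqExponentialKernel(), p.lengthscale)
    K = kernelmatrix(k, x_train)
    ŷ = K * ((K + p.noise * I) \ y_train)
    return sum(abs2, y_train - ŷ)
end

# Differentiate (and optimize) the composition `loss ∘ unflatten` over the unconstrained flat vector.
∇flat = only(Zygote.gradient(loss ∘ unflatten, flat_params))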