Update docstrings of linear and polynomial kernel and fix their constraints #228
Conversation
These changes broadly look good, but I'm also a little concerned about the default values not being differentiable. There's presently some debate around how to treat …
Sure, it might be better to keep the default values for now. I only realized that there could potentially be a problem after I had changed it. It is a bit unfortunate since:

```julia
julia> f(x, d) = x^d
julia> @code_typed f(3.0, 2)
CodeInfo(
1 ── %1 = (d === -1)::Bool
└─── goto #3 if not %1
2 ── %3 = Base.div_float(1.0, x)::Float64
└─── goto #12
3 ── %5 = (d === 0)::Bool
└─── goto #5 if not %5
4 ── goto #12
5 ── %8 = (d === 1)::Bool
└─── goto #7 if not %8
6 ── goto #12
7 ── %11 = (d === 2)::Bool
└─── goto #9 if not %11
8 ── %13 = Base.mul_float(x, x)::Float64
└─── goto #12
9 ── %15 = (d === 3)::Bool
└─── goto #11 if not %15
10 ─ %17 = Base.mul_float(x, x)::Float64
│ %18 = Base.mul_float(%17, x)::Float64
└─── goto #12
11 ─ %20 = Base.sitofp(Float64, d)::Float64
│ %21 = $(Expr(:foreigncall, "llvm.pow.f64", Float64, svec(Float64, Float64), 0, :(:llvmcall), :(x), :(%20), :(%20), :(x)))::Float64
└─── goto #12
12 ┄ %23 = φ (#2 => %3, #4 => 1.0, #6 => _2, #8 => %13, #10 => %18, #11 => %21)::Float64
└─── return %23
) => Float64
julia> @code_typed f(3.0, 2.0)
CodeInfo(
1 ─ %1 = $(Expr(:foreigncall, "llvm.pow.f64", Float64, svec(Float64, Float64), 0, :(:llvmcall), :(x), :(d), :(d), :(x)))::Float64
│ %2 = Base.ne_float(%1, %1)::Bool
│ %3 = Base.add_float(x, d)::Float64
│ %4 = Base.ne_float(%3, %3)::Bool
│ %5 = Base.not_int(%4)::Bool
│ %6 = Base.and_int(%2, %5)::Bool
└── goto #3 if not %6
2 ─ invoke Base.Math.throw_exp_domainerror(_2::Float64)::Union{}
└── $(Expr(:unreachable))::Union{}
3 ┄ goto #4
4 ─ return %1
) => Float64
```
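For context, here is a minimal sketch of the promotion issue this PR tries to avoid, written in plain Julia without assuming any KernelFunctions.jl API (the names `x`, `c`, and `d` are just illustrative stand-ins for an input, the offset, and the degree):

```julia
# Float64 parameters silently promote Float32 inputs; integer parameters do not.
x = 1.0f0                    # Float32 input
c, d = 0.0, 2.0              # Float64 defaults
typeof((x + c)^d)            # Float64 -- the input has been promoted

c_int, d_int = 0, 2          # integer defaults
typeof((x + c_int)^d_int)    # Float32 -- the input type is preserved
```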
Alternatively, maybe we could just tell users to ensure that all parameters are of the correct type before training (this would also be convenient if one wants to use e.g. Flux):

```julia
using Flux

k = ....                   # some kernel
k_float = fmap(float, k)   # `fmap` is reexported by Flux
ps = Flux.params(k_float)
...                        # training loop
```
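To illustrate what `fmap(float, k)` does here, a small hedged sketch with a toy struct (`ToyPolyKernel` below is made up for illustration and is not part of KernelFunctions.jl):

```julia
# `fmap(float, k)` applies `float` to every leaf of a functor struct,
# so integer parameters become floating-point ones.
using Functors

struct ToyPolyKernel{Td,Tc}
    degree::Td
    offset::Tc
end
Functors.@functor ToyPolyKernel

k = ToyPolyKernel(2, 0)     # integer parameters
k_float = fmap(float, k)    # ToyPolyKernel{Float64,Float64}(2.0, 0.0)
```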
I am convinced now that we should only allow polynomials whose degree is a natural number.
Moreover, it seems that at least for odd degrees …
BTW the restriction …
Similarly, we should ensure that the constant `c` is non-negative.
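A rough sketch of the two constraints being discussed (natural-number degree, non-negative offset); the `ToyPolynomialKernel` constructor below is illustrative only and not the actual KernelFunctions.jl implementation:

```julia
# Constructor-level checks for the constraints discussed above;
# the real package may enforce them differently.
struct ToyPolynomialKernel{Tc<:Real}
    degree::Int
    c::Tc
    function ToyPolynomialKernel(degree::Int, c::Tc) where {Tc<:Real}
        degree >= 1 || throw(ArgumentError("degree must be a natural number (>= 1)"))
        c >= zero(c) || throw(ArgumentError("offset c must be non-negative"))
        return new{Tc}(degree, c)
    end
end
```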
LGTM
This PR updates the docstrings of the linear and polynomial kernel, consistent with the recently updated documentation of `PiecewisePolynomialKernel`. Moreover, the default parameters are changed to `0` and `2` instead of `0.0` and `2.0` to avoid promotions, e.g., of inputs of type `Float32`.

Two things:
- Should we add constraints on `offset` and `degree`, similar to recent changes in `PiecewisePolynomialKernel`?
- Maybe default parameters of type `Int` are problematic for parameter optimization? Since `Functors.fmap` reconstructs kernels for the updated parameter types and not necessarily of the same type as the original kernel, it might still be fine, but I haven't checked it. Even if it does not work automatically, it might not be too bad if the parameters have to be specified explicitly in such a case. At least users would get an error message if it fails, whereas currently promotion happens silently.

Edit: As discussed below, the constant offset is constrained to be non-negative and the degree of the polynomial is constrained to be a natural number (one could also allow zero, but it does not seem particularly useful; it would just be a `ConstantKernel`). I renamed `d` to `degree` and reverted the default values of `c` to floating point numbers.
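For reference, a self-contained sketch of the polynomial kernel formula with the renamed parameters (`degree`, `c`); it deliberately bypasses the KernelFunctions.jl constructors so no API signature is assumed:

```julia
# k(x, y) = (x'y + c)^degree with an integer degree and a floating-point
# offset, matching the parameter types chosen in this PR.
using LinearAlgebra

polykernel(x, y; degree::Int=2, c::Real=0.0) = (dot(x, y) + c)^degree

x, y = rand(3), rand(3)
polykernel(x, y; degree=3, c=1.0)
```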