params(), Flux/Zygote style #41
Conversation
Hmm, that was quite some time ago, so I don't remember anything about …

BTW, it's quite disappointing that all parameters have to be vectors. I guess there's no way around that currently?
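A brief illustration of the constraint under discussion: Zygote/Flux-style parameter collection tracks arrays by identity and updates them in place, so an immutable scalar field cannot be updated, whereas a one-element vector can. The struct names here are hypothetical, purely for the sketch:

```julia
# Hypothetical kernel with a scalar hyperparameter: the field is immutable,
# so there is nothing an optimiser could mutate in place.
struct ScalarKernel
    lengthscale::Float64
end

# Wrapping the scalar in a one-element vector gives a mutable container
# that an optimiser can update through the (otherwise immutable) struct.
struct VectorKernel
    lengthscale::Vector{Float64}
end

k = VectorKernel([2.0])
k.lengthscale .= 0.5   # in-place broadcast assignment works
# k.lengthscale now holds [0.5]
```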
src/generic.jl (Outdated)

```julia
params!(ps,x::AbstractArray) = push!(ps,x)

trainable(x) = ()
```
```julia
trainable(x) = ()
```
IMO the implementation of `trainable` should be removed, since it is already defined in https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/functor.jl#L40.
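A self-contained sketch of the pattern the linked functor.jl provides (this is my own minimal reimplementation for illustration, not Flux's actual source, and `MyKernel` is a hypothetical type): the library supplies a generic fallback, and each type only opts its trainable fields in by overloading `trainable`:

```julia
# Generic fallback: by default a value has no trainable children.
trainable(x) = ()

# Hypothetical kernel type with one numeric field and one non-trainable field.
struct MyKernel
    lengthscale::Vector{Float64}
    name::String
end

# Opt in only the numeric field; `name` is skipped by parameter collection.
trainable(k::MyKernel) = (k.lengthscale,)

trainable(MyKernel([1.5], "rbf"))   # -> ([1.5],)
```

Because the fallback already exists upstream, redefining it locally only risks method ambiguity or shadowing.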
src/generic.jl (Outdated)

```julia
function params(k::Kernel)
    ps = []
    params!(ps,k)
    return ps
end

function params!(ps,k::Kernel)
    for child in trainable(k)
        params!(ps,k)
    end
end

params!(ps,x::AbstractArray) = push!(ps,x)
```
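An aside on the quoted diff above: as written, `params!(ps, k::Kernel)` calls `params!(ps, k)` inside its own loop, which would recurse forever. A runnable sketch of the presumably intended traversal, with a hypothetical kernel type standing in for the package's real ones:

```julia
abstract type Kernel end

# Hypothetical squared-exponential kernel for the sketch.
struct SqExpKernel <: Kernel
    lengthscale::Vector{Float64}
end

trainable(k::SqExpKernel) = (k.lengthscale,)

# Leaf case: an array is itself a parameter container.
params!(ps, x::AbstractArray) = push!(ps, x)

# Recursive case: descend into each trainable child, not into k itself.
function params!(ps, k::Kernel)
    for child in trainable(k)
        params!(ps, child)
    end
    return ps
end

function params(k::Kernel)
    ps = Any[]
    params!(ps, k)
    return ps
end

params(SqExpKernel([0.5]))   # -> Any[[0.5]]
```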
Isn't that already defined in https://github.com/FluxML/Flux.jl/blob/a874bef6f9d8eb1ab2fe376cb8ad068eb36baf33/src/functor.jl#L76-L88? It should be sufficient to define `trainable` for kernels that have trainable parameters.
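A concrete illustration of that suggestion, self-contained rather than using Flux itself, with hypothetical kernel types: once each type declares its `trainable` children, a single generic traversal collects every parameter array, including through composite kernels, with no per-type collection code:

```julia
abstract type Kernel end

# Hypothetical leaf kernel with one parameter array.
struct RBFKernel <: Kernel
    lengthscale::Vector{Float64}
end

# Hypothetical composite kernel: its children are kernels, not arrays.
struct SumKernel <: Kernel
    left::Kernel
    right::Kernel
end

# Each type only declares its trainable children.
trainable(k::RBFKernel) = (k.lengthscale,)
trainable(k::SumKernel) = (k.left, k.right)

# One generic traversal, in the spirit of Flux's params.
collect_params!(ps, x::AbstractArray) = push!(ps, x)
collect_params!(ps, k::Kernel) = foreach(c -> collect_params!(ps, c), trainable(k))

ps = Any[]
collect_params!(ps, SumKernel(RBFKernel([1.0]), RBFKernel([2.0])))
# ps now holds the two lengthscale arrays
```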
I don't think so, …

I reimplemented …

Ah, I see. Are …

Not necessarily. The problem with …

Ah, one easy option is to design an internal function …

If the whole purpose of …

Ok, then that's what I did. I just need to write tests now.
I replaced all scalars by vectors so they can be mutable, and copied the Flux approach of using `trainable` and `params` to get out an array of arrays of parameters to be optimized. Left to be done is to figure out how to fix parameters. @devmotion I saw you helped in making a `FixedKernel` in GaussianProcesses.jl; do you think this is the right solution?
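One way the "fixed parameters" idea could work, sketched under my own naming rather than GaussianProcesses.jl's actual `FixedKernel` implementation: a wrapper kernel that reports no trainable children, so parameter collection skips everything beneath it while evaluation can still delegate to the wrapped kernel:

```julia
abstract type Kernel end

# Hypothetical leaf kernel with one parameter array.
struct RBFKernel <: Kernel
    lengthscale::Vector{Float64}
end

# Hypothetical wrapper: hides its child's parameters from the optimiser.
struct FixedKernel{K<:Kernel} <: Kernel
    kernel::K
end

trainable(k::RBFKernel) = (k.lengthscale,)
trainable(::FixedKernel) = ()   # nothing below this point is trainable

collect_params!(ps, x::AbstractArray) = push!(ps, x)
collect_params!(ps, k::Kernel) = foreach(c -> collect_params!(ps, c), trainable(k))

ps = Any[]
collect_params!(ps, FixedKernel(RBFKernel([1.0])))
# ps stays empty: the wrapped kernel's parameters are not collected
```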