Introduce JuliaFormatter with style=blue #220

Merged · 17 commits · Jan 11, 2021

Changes from 9 commits
1 change: 1 addition & 0 deletions .JuliaFormatter.toml
@@ -0,0 +1 @@
style = "blue"
26 changes: 26 additions & 0 deletions .github/workflows/ci.yml
@@ -47,3 +47,29 @@ jobs:
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
path-to-lcov: ./lcov.info

format:
Member Author:

Separate from the others, as format-testing should be independent of the host/version matrix, and it seems more sensible to only run it once (it is then also easier to distinguish format failures from the "regular" tests).

runs-on: ubuntu-latest
steps:
- uses: julia-actions/setup-julia@latest
with:
version: 1

- uses: actions/checkout@v1
- name: Install JuliaFormatter and format
# The version of JuliaFormatter used here just shows how to specify a version. The latest
# version would be preferable.
run: |
julia -e 'using Pkg; Pkg.add(PackageSpec(name="JuliaFormatter", version="0.12.2"))'
Member:

I don't like that the version number has to be updated manually whenever there is a new version of JuliaFormatter. We could just install the latest version, but then we would end up with breaking changes without realizing it. Ideally, we would add a Project.toml file in some directory and be notified about breaking releases by CompatHelper.

Member Author:

I agree, that'd be nice. I'm not sure how you'd go about sorting it out with the Project.toml (feel free to make those changes on this branch or in a separate PR). I do think it's overall better to pin the version of the formatter, as otherwise it can be hard to figure out how to get the local formatter to be in sync with the cloud format-checker.
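
For concreteness, a minimal sketch of keeping a local run in sync with CI by pinning the same version the workflow above installs (the 0.12.2 pin is taken from that step; everything else is an assumption):

```julia
using Pkg

# Pin the same JuliaFormatter version as the CI job so local output
# matches the cloud format-checker.
Pkg.add(PackageSpec(; name="JuliaFormatter", version="0.12.2"))

using JuliaFormatter

# format(".") picks up style = "blue" from the .JuliaFormatter.toml
# added in this PR, so no explicit style argument is needed.
format("."; verbose=true)
```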

Member:

> I'm not sure how you'd go about sorting it out with the Project.toml

Similar to the package, tests, and documentation, we can specify the SemVer-compatible versions in a separate project environment. CompatHelper will automatically create a PR on GitHub if a new breaking release is available (we are not notified about non-breaking releases).
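
A minimal sketch of such an environment (the `format/` directory name and the compat bound are assumptions, not part of this PR; the UUID is JuliaFormatter's registered UUID):

```toml
# format/Project.toml — hypothetical environment for CompatHelper to track
[deps]
JuliaFormatter = "98e50ef6-434e-11e9-1051-2b60c6c9e899"

[compat]
JuliaFormatter = "0.12"
```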

> it can be hard to figure out how to get the local formatter to be in sync with the cloud format-checker

I think specifying the version that we use in a project environment is the best we can do. We can't control which version of JuliaFormatter developers use in their editors or install globally, but if they load our project environment it will always be compatible.
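
What loading that shared environment might look like locally, as a sketch (the `format/` path refers to the hypothetical directory above):

```julia
using Pkg

# Activate the shared environment so the resolved JuliaFormatter version
# stays within its [compat] bound.
Pkg.activate("format")  # hypothetical path; see the Project.toml sketch above
Pkg.instantiate()

using JuliaFormatter
format("."; verbose=true)
```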

julia -e 'using JuliaFormatter; format(".", verbose=true)'
- name: Format check
run: |
julia -e '
out = Cmd(`git diff --name-only`) |> read |> String
if out == ""
exit(0)
else
@error "Some files have not been formatted !!!"
write(stdout, out)
exit(1)
end'
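
As an aside, a sketch of an alternative check step (not what this PR uses): git's `--exit-code` flag makes `git diff` exit non-zero when files changed, so the Julia wrapper reduces to:

```julia
# Alternative sketch, not this PR's script: `git diff --exit-code` exits
# non-zero when the working tree differs, i.e. when the formatter changed
# any file; --name-only still prints the offending file names to the log.
if success(`git diff --exit-code --name-only`)
    exit(0)
else
    @error "Some files have not been formatted!"
    exit(1)
end
```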
10 changes: 6 additions & 4 deletions benchmark/MLKernels.jl
@@ -3,16 +3,18 @@ using MLKernels
SUITE["MLKernels"] = BenchmarkGroup()

mlkernelnames = ["SquaredExponentialKernel"]
kernels=Dict{String,MLKernels.Kernel}()
kernels = Dict{String,MLKernels.Kernel}()
for k in mlkernelnames
SUITE["MLKernels"][k] = BenchmarkGroup()
kernels[k] = eval(Meta.parse("MLKernels."*k*"(alpha)"))
kernels[k] = eval(Meta.parse("MLKernels." * k * "(alpha)"))
end

for k in mlkernelnames
SUITE["MLKernels"][k]["k(X,Y)"] = @benchmarkable MLKernels.kernelmatrix($(kernels[k]),$X,$Y)
SUITE["MLKernels"][k]["k(X,Y)"] = @benchmarkable MLKernels.kernelmatrix(
$(kernels[k]), $X, $Y
)
# SUITE["MLKernels"][k][kt]["k!(X,Y)"] = @benchmarkable MLKernels.kernelmatrix!(KXY,$(kernels[k][kt]),$X,$Y) setup=(KXY=copy($KXY))
SUITE["MLKernels"][k]["k(X)"] = @benchmarkable MLKernels.kernelmatrix($(kernels[k]),$X)
SUITE["MLKernels"][k]["k(X)"] = @benchmarkable MLKernels.kernelmatrix($(kernels[k]), $X)
# SUITE["MLKernels"][k][kt]["k!(X)"] = @benchmarkable MLKernels.kernelmatrix!(KX,$(kernels[k][kt]),$X) setup=(KX=copy($KX))
# SUITE["MLKernels"][k][kt]["kdiag(X)"] = @benchmarkable MLKernels.kerneldiagmatrix($(kernels[k][kt]),$X)
# SUITE["MLKernels"][k][kt]["kdiag!(X)"] = @benchmarkable MLKernels.kerneldiagmatrix!(kX,$(kernels[k][kt]),$X) setup=(kX=copy($kX))
15 changes: 8 additions & 7 deletions benchmark/benchmarks.jl
@@ -7,16 +7,17 @@ const SUITE = BenchmarkGroup()
Random.seed!(1234)

dim = 50
N1 = 1000; N2 = 500;
N1 = 1000;
N2 = 500;
alpha = 2.0

X = rand(Float64,N1,dim)
Y = rand(Float64,N2,dim)
X = rand(Float64, N1, dim)
Y = rand(Float64, N2, dim)

KXY = rand(Float64,N1,N2)
KX = rand(Float64,N1,N1)
sKX = Symmetric(rand(Float64,N1,N1))
kX = rand(Float64,N1)
KXY = rand(Float64, N1, N2)
KX = rand(Float64, N1, N1)
sKX = Symmetric(rand(Float64, N1, N1))
kX = rand(Float64, N1)

include("kernelmatrix.jl")
include("MLKernels.jl")
22 changes: 17 additions & 5 deletions benchmark/kernelmatrix.jl
@@ -3,22 +3,34 @@ using KernelFunctions
SUITE["KernelFunctions"] = BenchmarkGroup()

kernelnames = ["SqExponentialKernel"]
kerneltypes = ["ARD","ISO"]
kernels=Dict{String,Dict{String,KernelFunctions.Kernel}}()
kerneltypes = ["ARD", "ISO"]
kernels = Dict{String,Dict{String,KernelFunctions.Kernel}}()
for k in kernelnames
kernels[k] = Dict{String,KernelFunctions.Kernel}()
SUITE["KernelFunctions"][k] = BenchmarkGroup()
for kt in kerneltypes
SUITE["KernelFunctions"][k][kt] = BenchmarkGroup()
kernels[k][kt] = eval(Meta.parse("KernelFunctions."*k*"("*(kt == "ARD" ? "alpha*ones(Float64,dim)" : "alpha" )*")"))
kernels[k][kt] = eval(
Meta.parse(
"KernelFunctions." *
k *
"(" *
(kt == "ARD" ? "alpha*ones(Float64,dim)" : "alpha") *
")",
),
)
end
end

for k in kernelnames
for kt in kerneltypes
SUITE["KernelFunctions"][k][kt]["k(X,Y)"] = @benchmarkable KernelFunctions.kernelmatrix($(kernels[k][kt]),$X,$Y,obsdim=1)
SUITE["KernelFunctions"][k][kt]["k(X,Y)"] = @benchmarkable KernelFunctions.kernelmatrix(
$(kernels[k][kt]), $X, $Y; obsdim=1
)
# SUITE["KernelFunctions"][k][kt]["k!(X,Y)"] = @benchmarkable KernelFunctions.kernelmatrix!(KXY,$(kernels[k][kt]),$X,$Y) setup=(KXY=copy($KXY))
SUITE["KernelFunctions"][k][kt]["k(X)"] = @benchmarkable KernelFunctions.kernelmatrix($(kernels[k][kt]),$X,obsdim=1)
SUITE["KernelFunctions"][k][kt]["k(X)"] = @benchmarkable KernelFunctions.kernelmatrix(
$(kernels[k][kt]), $X; obsdim=1
)
# SUITE["KernelFunctions"][k][kt]["k!(X)"] = @benchmarkable KernelFunctions.kernelmatrix!(KX,$(kernels[k][kt]),$X) setup=(KX=copy($KX))
# SUITE["KernelFunctions"][k][kt]["kdiag(X)"] = @benchmarkable KernelFunctions.kerneldiagmatrix($(kernels[k][kt]),$X)
# SUITE["KernelFunctions"][k][kt]["kdiag!(X)"] = @benchmarkable KernelFunctions.kerneldiagmatrix!(kX,$(kernels[k][kt]),$X) setup=(kX=copy($kX))
137 changes: 83 additions & 54 deletions docs/create_kernel_plots.jl
@@ -1,77 +1,106 @@
using Plots; pyplot();
using Plots;
pyplot();
using Distributions
using LinearAlgebra
using KernelFunctions
# Translational invariants kernels

default(lw=3.0,titlefontsize=28,tickfontsize=18)
default(; lw=3.0, titlefontsize=28, tickfontsize=18)

x₀ = 0.0; l = 0.1
x₀ = 0.0;
l = 0.1;
n_grid = 101
fill(x₀,n_grid,1)
xrange = reshape(collect(range(-3,3,length=n_grid)),:,1)
fill(x₀, n_grid, 1)
xrange = reshape(collect(range(-3, 3; length=n_grid)), :, 1)

k = transform(SqExponentialKernel(),1.0)
K1 = kernelmatrix(k,xrange,obsdim=1)
p = heatmap(K1,yflip=true,colorbar=false,framestyle=:none,background_color=RGBA(0.0,0.0,0.0,0.0))
savefig(joinpath(@__DIR__,"src","assets","heatmap_sqexp.png"))
k = transform(SqExponentialKernel(), 1.0)
K1 = kernelmatrix(k, xrange; obsdim=1)
p = heatmap(
K1;
yflip=true,
colorbar=false,
framestyle=:none,
background_color=RGBA(0.0, 0.0, 0.0, 0.0),
)
savefig(joinpath(@__DIR__, "src", "assets", "heatmap_sqexp.png"))

k = @kernel Matern32Kernel FunctionTransform(x -> (sin.(x)) .^ 2)
K2 = kernelmatrix(k, xrange; obsdim=1)
p = heatmap(
K2;
yflip=true,
colorbar=false,
framestyle=:none,
background_color=RGBA(0.0, 0.0, 0.0, 0.0),
)
savefig(joinpath(@__DIR__, "src", "assets", "heatmap_matern.png"))

k = @kernel Matern32Kernel FunctionTransform(x->(sin.(x)).^2)
K2 = kernelmatrix(k,xrange,obsdim=1)
p = heatmap(K2,yflip=true,colorbar=false,framestyle=:none,background_color=RGBA(0.0,0.0,0.0,0.0))
savefig(joinpath(@__DIR__,"src","assets","heatmap_matern.png"))
k = transform(PolynomialKernel(; c=0.0, d=2.0), LinearTransform(randn(3, 1)))
K3 = kernelmatrix(k, xrange; obsdim=1)
p = heatmap(
K3;
yflip=true,
colorbar=false,
framestyle=:none,
background_color=RGBA(0.0, 0.0, 0.0, 0.0),
)
savefig(joinpath(@__DIR__, "src", "assets", "heatmap_poly.png"))

k =
0.5 * SqExponentialKernel() * transform(LinearKernel(), 0.5) +
0.4 * (@kernel Matern32Kernel() FunctionTransform(x -> sin.(x)))
K4 = kernelmatrix(k, xrange; obsdim=1)
p = heatmap(
K4;
yflip=true,
colorbar=false,
framestyle=:none,
background_color=RGBA(0.0, 0.0, 0.0, 0.0),
)
savefig(joinpath(@__DIR__, "src", "assets", "heatmap_prodsum.png"))

k = transform(PolynomialKernel(c=0.0,d=2.0), LinearTransform(randn(3,1)))
K3 = kernelmatrix(k,xrange,obsdim=1)
p = heatmap(K3,yflip=true,colorbar=false,framestyle=:none,background_color=RGBA(0.0,0.0,0.0,0.0))
savefig(joinpath(@__DIR__,"src","assets","heatmap_poly.png"))

k = 0.5*SqExponentialKernel()*transform(LinearKernel(),0.5) + 0.4*(@kernel Matern32Kernel() FunctionTransform(x->sin.(x)))
K4 = kernelmatrix(k,xrange,obsdim=1)
p = heatmap(K4,yflip=true,colorbar=false,framestyle=:none,background_color=RGBA(0.0,0.0,0.0,0.0))
savefig(joinpath(@__DIR__,"src","assets","heatmap_prodsum.png"))

plot(heatmap.([K1,K2,K3,K4],yflip=true,colorbar=false)...,layout=(2,2))
savefig(joinpath(@__DIR__,"src","assets","heatmap_combination.png"))
plot(heatmap.([K1, K2, K3, K4], yflip=true, colorbar=false)...; layout=(2, 2))
savefig(joinpath(@__DIR__, "src", "assets", "heatmap_combination.png"))

##

for k in [SqExponentialKernel,ExponentialKernel]
K = kernelmatrix(k(),xrange,obsdim=1)
v = rand(MvNormal(K+1e-7I))
plot(xrange,v,lab="",title="f(x)",framestyle=:none) |> display
savefig(joinpath(@__DIR__,"src","assets","GP_sample_$(k).png"))
plot(xrange,kernel.(k(),x₀,xrange),lab="",ylims=(0,1.1),title="k(0,x)") |> display
savefig(joinpath(@__DIR__,"src","assets","kappa_function_$(k).png"))
for k in [SqExponentialKernel, ExponentialKernel]
K = kernelmatrix(k(), xrange; obsdim=1)
v = rand(MvNormal(K + 1e-7I))
display(plot(xrange, v; lab="", title="f(x)", framestyle=:none))
savefig(joinpath(@__DIR__, "src", "assets", "GP_sample_$(k).png"))
display(plot(xrange, kernel.(k(), x₀, xrange); lab="", ylims=(0, 1.1), title="k(0,x)"))
savefig(joinpath(@__DIR__, "src", "assets", "kappa_function_$(k).png"))
end

for k in [GammaExponentialKernel(1.0,1.5)]
sparse =1
while !isposdef(kernelmatrix(k,xrange*sparse,obsdim=1) + 1e-5I); sparse += 1; end
v = rand(MvNormal(kernelmatrix(k,xrange*sparse,obsdim=1)+1e-7I))
plot(xrange,v,lab="",title="f(x)",framestyle=:none) |> display
savefig(joinpath(@__DIR__,"src","assets","GP_sample_GammaExponentialKernel.png"))
plot(xrange,kernel.(k,x₀,xrange),lab="",ylims=(0,1.1),title="k(0,x)") |> display
savefig(joinpath(@__DIR__,"src","assets","kappa_function_GammaExponentialKernel.png"))
for k in [GammaExponentialKernel(1.0, 1.5)]
sparse = 1
while !isposdef(kernelmatrix(k, xrange * sparse; obsdim=1) + 1e-5I)
sparse += 1
end
v = rand(MvNormal(kernelmatrix(k, xrange * sparse; obsdim=1) + 1e-7I))
display(plot(xrange, v; lab="", title="f(x)", framestyle=:none))
savefig(joinpath(@__DIR__, "src", "assets", "GP_sample_GammaExponentialKernel.png"))
display(plot(xrange, kernel.(k, x₀, xrange); lab="", ylims=(0, 1.1), title="k(0,x)"))
savefig(
joinpath(@__DIR__, "src", "assets", "kappa_function_GammaExponentialKernel.png")
)
end

for k in [MaternKernel,Matern32Kernel,Matern52Kernel]
K = kernelmatrix(k(),xrange,obsdim=1)
v = rand(MvNormal(K+1e-7I))
plot(xrange,v,lab="",title="f(x)",framestyle=:none) |> display
savefig(joinpath(@__DIR__,"src","assets","GP_sample_$(k).png"))
plot(xrange,kernel.(k(),x₀,xrange),lab="",ylims=(0,1.1),title="k(0,x)") |> display
savefig(joinpath(@__DIR__,"src","assets","kappa_function_$(k).png"))
for k in [MaternKernel, Matern32Kernel, Matern52Kernel]
K = kernelmatrix(k(), xrange; obsdim=1)
v = rand(MvNormal(K + 1e-7I))
display(plot(xrange, v; lab="", title="f(x)", framestyle=:none))
savefig(joinpath(@__DIR__, "src", "assets", "GP_sample_$(k).png"))
display(plot(xrange, kernel.(k(), x₀, xrange); lab="", ylims=(0, 1.1), title="k(0,x)"))
savefig(joinpath(@__DIR__, "src", "assets", "kappa_function_$(k).png"))
end


for k in [RationalQuadraticKernel]
K = kernelmatrix(k(),xrange,obsdim=1)
v = rand(MvNormal(K+1e-7I))
plot(xrange,v,lab="",title="f(x)",framestyle=:none) |> display
savefig(joinpath(@__DIR__,"src","assets","GP_sample_$(k).png"))
plot(xrange,kernel.(k(),x₀,xrange),lab="",ylims=(0,1.1),title="k(0,x)") |> display
savefig(joinpath(@__DIR__,"src","assets","kappa_function_$(k).png"))
K = kernelmatrix(k(), xrange; obsdim=1)
v = rand(MvNormal(K + 1e-7I))
display(plot(xrange, v; lab="", title="f(x)", framestyle=:none))
savefig(joinpath(@__DIR__, "src", "assets", "GP_sample_$(k).png"))
display(plot(xrange, kernel.(k(), x₀, xrange); lab="", ylims=(0, 1.1), title="k(0,x)"))
savefig(joinpath(@__DIR__, "src", "assets", "kappa_function_$(k).png"))
end
31 changes: 16 additions & 15 deletions docs/make.jl
@@ -8,22 +8,23 @@ DocMeta.setdocmeta!(
recursive=true,
)

makedocs(
sitename = "KernelFunctions",
format = Documenter.HTML(),
modules = [KernelFunctions],
pages = ["Home"=>"index.md",
"User Guide" => "userguide.md",
"Examples"=>"example.md",
"Kernel Functions"=>"kernels.md",
"Input Transforms"=>"transform.md",
"Metrics"=>"metrics.md",
"Theory"=>"theory.md",
"Custom Kernels"=>"create_kernel.md",
"API"=>"api.md"]
makedocs(;
sitename="KernelFunctions",
format=Documenter.HTML(),
modules=[KernelFunctions],
pages=[
"Home" => "index.md",
"User Guide" => "userguide.md",
"Examples" => "example.md",
"Kernel Functions" => "kernels.md",
"Input Transforms" => "transform.md",
"Metrics" => "metrics.md",
"Theory" => "theory.md",
"Custom Kernels" => "create_kernel.md",
"API" => "api.md",
],
)

deploydocs(;
repo = "github.com/JuliaGaussianProcesses/KernelFunctions.jl.git",
push_preview = true,
repo="github.com/JuliaGaussianProcesses/KernelFunctions.jl.git", push_preview=true
)
33 changes: 19 additions & 14 deletions examples/deepkernellearning.jl
@@ -10,26 +10,31 @@ Flux.@functor KernelSum
Flux.@functor Matern32Kernel
Flux.@functor FunctionTransform

neuralnet = Chain(Dense(1,3),Dense(3,2))
neuralnet = Chain(Dense(1, 3), Dense(3, 2))
k = SqExponentialKernel(FunctionTransform(neuralnet))
xmin = -3; xmax = 3
x = range(xmin,xmax,length=100)
x_test = rand(Uniform(xmin,xmax),200)
x,y = noisy_function(sinc,x;noise=0.1)
X = reshape(x,:,1)
xmin = -3;
xmax = 3;
x = range(xmin, xmax; length=100)
x_test = rand(Uniform(xmin, xmax), 200)
x, y = noisy_function(sinc, x; noise=0.1)
X = reshape(x, :, 1)
λ = [0.1]
f(x,k,λ) = kernelmatrix(k,X,x,obsdim=1)*inv(kernelmatrix(k,X,obsdim=1)+exp(λ[1])*I)*y
f(X,k,1.0)
loss(k,λ) = f(X,k,λ) |> ŷ ->sum(y-ŷ)/length(y)+exp(λ[1])*norm(ŷ)
loss(k,λ)
function f(x, k, λ)
return kernelmatrix(k, X, x; obsdim=1) *
inv(kernelmatrix(k, X; obsdim=1) + exp(λ[1]) * I) *
y
end
f(X, k, 1.0)
loss(k, λ) = (ŷ -> sum(y - ŷ) / length(y) + exp(λ[1]) * norm(ŷ))(f(X, k, λ))
loss(k, λ)
ps = Flux.params(k)
# push!(ps,λ)
opt = Flux.Momentum(1.0)
##
for i in 1:10
grads = Zygote.gradient(()->loss(k,λ),ps)
Flux.Optimise.update!(opt,ps,grads)
p = Plots.scatter(x,y,lab="data",title="Loss = $(loss(k,λ))")
Plots.plot!(x,f(X,k,λ),lab="Prediction",lw=3.0)
grads = Zygote.gradient(() -> loss(k, λ), ps)
Flux.Optimise.update!(opt, ps, grads)
p = Plots.scatter(x, y; lab="data", title="Loss = $(loss(k,λ))")
Plots.plot!(x, f(X, k, λ); lab="Prediction", lw=3.0)
display(p)
end