
SGConv #154


Merged
merged 11 commits into from Apr 15, 2022

Conversation

rbSparky
Contributor

@rbSparky rbSparky commented Apr 9, 2022

@codecov

codecov bot commented Apr 9, 2022

Codecov Report

Merging #154 (97bb11c) into master (104b6b9) will decrease coverage by 0.07%.
The diff coverage is 65.00%.

❗ Current head 97bb11c differs from pull request most recent head 28ee1de. Consider uploading reports for the commit 28ee1de to get more accurate results

@@            Coverage Diff             @@
##           master     #154      +/-   ##
==========================================
- Coverage   85.52%   85.45%   -0.08%     
==========================================
  Files          15       14       -1     
  Lines        1285     1299      +14     
==========================================
+ Hits         1099     1110      +11     
- Misses        186      189       +3     
Impacted Files | Coverage Δ
src/layers/conv.jl | 77.23% <65.00%> (-1.47%) ⬇️
src/GNNGraphs/gnngraph.jl | 76.00% <0.00%> (-4.00%) ⬇️
src/GNNGraphs/sampling.jl | 100.00% <0.00%> (ø)
src/deprecations.jl |
src/GNNGraphs/transform.jl | 96.74% <0.00%> (+0.03%) ⬆️
src/GNNGraphs/query.jl | 93.02% <0.00%> (+0.04%) ⬆️
src/GNNGraphs/convert.jl | 90.40% <0.00%> (+0.31%) ⬆️
src/GNNGraphs/utils.jl | 84.72% <0.00%> (+10.23%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2801b51...28ee1de. Read the comment docs.

@rbSparky
Contributor Author

What I understand from the paper is that we create a layer with the same normalized-adjacency feature-propagation step as GCNConv, but for the k hops we propagate k times (which can be done with a loop?), and at the end we return the result without applying the activation function.

Can you confirm if I have the right idea? I will try to test it and then make the next commit.
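A minimal dense-matrix sketch of that idea in plain Julia, not the PR's actual code (the helper names `normalized_adjacency` and `sgconv` are hypothetical here, and it uses the paper's nodes × features convention):

```julia
using LinearAlgebra

# S = D̂^(-1/2) Â D̂^(-1/2), where Â = A + I adds self-loops, as in GCNConv.
function normalized_adjacency(A::AbstractMatrix)
    Â = A + I
    dinv = Diagonal(inv.(sqrt.(vec(sum(Â, dims = 2)))))
    return dinv * Â * dinv
end

# Propagate k hops, then apply a single linear map with no activation.
# X is nodes × features, Θ is in_features × out_features.
function sgconv(A, X, Θ; k = 2)
    S = normalized_adjacency(A)
    H = X
    for _ in 1:k
        H = S * H        # one propagation step per hop
    end
    return H * Θ         # single weight, no nonlinearity
end
```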

@CarloLucibello
Member

You put the wrong reference in the opening post; the correct one is https://arxiv.org/pdf/1902.07153.pdf

@CarloLucibello
Member

> What I understand from the paper is that we create a layer with the same normalized-adjacency feature-propagation step as GCNConv, but for the k hops we propagate k times (which can be done with a loop?), and at the end we return the result without applying the activation function.

Yes, that's correct. Instead of copying over the implementation of the GCNConv forward pass, we could actually store a GCNConv with no activation inside the SGConv and apply it K times in the forward pass.

@rbSparky
Contributor Author

rbSparky commented Apr 11, 2022

> Yes, that's correct. Instead of copying over the implementation of the GCNConv forward pass, we could actually store a GCNConv with no activation inside the SGConv and apply it K times in the forward pass.

Makes sense, I'll do that.

EDIT: Previous comment (which I removed since I wasn't quite sure about it):
For SGConv, after removing the nonlinearity, we have the equation $\hat{Y} = S \cdots S X \Theta^{(1)} \Theta^{(2)} \cdots \Theta^{(K)} = S^K X \Theta^{(1)} \cdots \Theta^{(K)}$, but we can then further collapse the linear transformations into a single one, $\Theta = \Theta^{(1)} \Theta^{(2)} \cdots \Theta^{(K)}$, so that $\hat{Y} = S^K X \Theta$. Wouldn't using GCNConv k times implement the first equation?

> You put the wrong reference in the opening post; the correct one is https://arxiv.org/pdf/1902.07153.pdf

Oh yes, my apologies, I've fixed that now.

@CarloLucibello
Member

Ouch, sorry, I was mistaken: applying the GCNConv multiple times is not equivalent to SGConv, since we want to multiply by the weights only once, not at every hop. Sorry for derailing you!
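A quick dense-matrix check of the point being made, with hypothetical shapes rather than library code: stacking k activation-free GCNConv layers multiplies by a weight matrix at every hop, which is not the single-weight form SGConv wants.

```julia
using LinearAlgebra

n, d, k = 5, 3, 2
S  = Matrix(Symmetric(rand(n, n)))   # stand-in for D̂^(-1/2) Â D̂^(-1/2)
X  = rand(n, d)
Θs = [rand(d, d) for _ in 1:k]       # one weight matrix per stacked layer

# k activation-free GCNConv layers multiply by a weight at every hop ...
stacked = foldl((H, Θ) -> S * H * Θ, Θs; init = X)

# ... which collapses to S^k X (Θ₁⋯Θₖ), not SGConv's single-Θ form S^k X Θ.
@assert stacked ≈ S^k * X * prod(Θs)
```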

@rbSparky
Contributor Author

rbSparky commented Apr 12, 2022

I think I should now move on to testing this? If there are any changes to be made, please let me know. Should I write tests similar to the ones for the other layers? I'm not quite sure how the specifics of this would be tested (the final result, for example, or whether we even need to test that). What do you suggest?

@CarloLucibello
Member

The implementation looks good. For testing you can follow the example of the other layers: we test the output size, equality between CPU and GPU outputs, and differentiability.
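For reference, a hedged sketch of what such a test might look like; the `SGConv` constructor signature is assumed from this PR, the real suite factors these checks into shared helpers, and the CPU/GPU comparison is omitted here:

```julia
using GraphNeuralNetworks, Flux, Test

g = rand_graph(10, 30)                  # 10 nodes, 30 edges
x = rand(Float32, 3, 10)                # 3 features per node
l = SGConv(3 => 5, 2)                   # 2-hop SGConv (signature assumed)

y = l(g, x)
@test size(y) == (5, 10)                # output size

gs = gradient(x -> sum(l(g, x)), x)[1]  # differentiability w.r.t. the input
@test gs !== nothing
```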

@rbSparky
Contributor Author

I've added the tests, is there anything else that needs to be done before merging?

@rbSparky rbSparky marked this pull request as ready for review April 14, 2022 13:01
@CarloLucibello CarloLucibello changed the title SGConv [WIP] SGConv Apr 15, 2022
@CarloLucibello CarloLucibello merged commit eb43bc9 into JuliaGraphs:master Apr 15, 2022
@CarloLucibello
Member

thanks for this!

@rbSparky rbSparky deleted the SGConv branch January 1, 2024 20:53