SGConv #154
Conversation
Codecov Report

@@             Coverage Diff              @@
##           master     #154      +/-    ##
===========================================
- Coverage   85.52%   85.45%   -0.08%
===========================================
  Files          15       14       -1
  Lines        1285     1299      +14
===========================================
+ Hits         1099     1110      +11
- Misses        186      189       +3

Continue to review the full report at Codecov.
What I understand from the paper is that we create a layer which applies the same feature-propagation step with the normalised adjacency matrix as in GCNConv. Can you confirm that I have the right idea? I will try to test it and then make the next commit.
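For reference, the layer described above computes, in the notation of Wu et al. 2019 ("Simplifying Graph Convolutional Networks"):

$$\hat{S} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}, \qquad \tilde{A} = A + I,$$

$$\hat{Y} = \mathrm{softmax}\!\left(\hat{S}^{K} X \Theta\right),$$

so the normalised adjacency propagation is applied $K$ times, while the learnable weight $\Theta$ (and the nonlinearity) appears only once.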
You put the wrong reference in the opening post; the correct one is https://arxiv.org/pdf/1902.07153.pdf
Yes, that's correct. Instead of copying over the implementation of the GCNConv forward pass, we could actually store a GCNConv with no activation inside the SGConv and use it K times in the forward pass.
Makes sense, I'll do that. EDIT: I removed my previous comment since I wasn't quite sure about it.
Oh yes, my apologies, I've fixed that now.
Ouch, sorry, I was mistaken: applying the GCNConv multiple times is not equivalent to SGConv, since we don't want to multiply by the weights multiple times but only once. Sorry for derailing you!
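To make the distinction concrete, here is a minimal, self-contained sketch in plain Julia (not the library's actual SGConv code), using a dense adjacency matrix and node features stored as columns. The propagation with the normalised adjacency runs K times, while the single weight matrix is applied once at the end; stacking K GCNConv layers would instead insert a different weight at every step.

```julia
using LinearAlgebra

# Symmetrically normalized adjacency with self-loops:
# Ŝ = D̃^(-1/2) (A + I) D̃^(-1/2)
function normalized_adjacency(A::AbstractMatrix)
    Ã = A + I
    d = vec(sum(Ã, dims=2))
    Dinv = Diagonal(1 ./ sqrt.(d))
    return Dinv * Ã * Dinv
end

# SGConv-style forward pass: X ↦ W * (X * Ŝ^K), features as columns (in × nodes).
# The propagation runs K times, but the weight W multiplies the result only once.
function sgconv(W::AbstractMatrix, A::AbstractMatrix, X::AbstractMatrix, K::Integer)
    Ŝ = normalized_adjacency(A)
    H = X
    for _ in 1:K
        H = H * Ŝ
    end
    return W * H
end

# Toy usage: 4-node path graph, 3 input features, 2 output features, K = 2.
A = Float32[0 1 0 0; 1 0 1 0; 0 1 0 1; 0 0 1 0]
X = rand(Float32, 3, 4)
W = rand(Float32, 2, 3)
Y = sgconv(W, A, X, 2)   # size(Y) == (2, 4)
```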
I think I should now move on to testing this; if there are any changes to be made, please let me know. Should I write tests similar to the ones for the other layers? I'm not very sure how the specifics of this would be tested (for example, the final result, or whether we even need to test that). What do you suggest?
The implementation looks good. For testing you can follow the example of the other layers: we test the output size, equality between CPU and GPU outputs, and differentiability.
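A sketch of what such tests could look like, reusing the hypothetical `sgconv` helper from the earlier sketch rather than the library's actual SGConv layer (the real tests would construct the layer the same way the other layer tests do):

```julia
using Test, Flux

# `sgconv` is the helper defined in the sketch above, not the library layer.
@testset "SGConv sketch" begin
    A = Float32[0 1 0 0; 1 0 1 0; 0 1 0 1; 0 0 1 0]
    X = rand(Float32, 3, 4)
    W = rand(Float32, 2, 3)

    # Output size: 2 output features for each of the 4 nodes.
    Y = sgconv(W, A, X, 2)
    @test size(Y) == (2, 4)

    # Differentiability: gradients w.r.t. the weight exist and are finite.
    g = Flux.gradient(w -> sum(sgconv(w, A, X, 2)), W)[1]
    @test size(g) == size(W)
    @test all(isfinite, g)

    # CPU/GPU equality would additionally move W, A, X to the GPU (e.g. with
    # CUDA.jl's `cu`) and compare against the CPU result with `≈`; omitted here.
end
```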
I've added the tests; is there anything else that needs to be done before merging?
Thanks for this!
Adding SGConv from the paper Simplifying Graph Convolutional Networks (https://arxiv.org/abs/1902.07153).