
Spec 10 15 #172

Merged
merged 7 commits into from
Nov 30, 2023

Conversation

Ying-1106
Contributor

add specformer.py, spec_trainer.py and readme.md into GammaGL

from tensorlayerx.model import WithLoss


def feature_normalize(x):
Contributor

Did you use this function?

    rowsum = x.sum(axis=1, keepdims=True)
    rowsum = np.clip(rowsum, 1, 1e10)  # clip so empty rows do not cause division by zero
    return x / rowsum
def normalize_graph(g):  # input: g is the adjacency matrix; returns the graph's Laplacian matrix
Contributor

I think these functions do not belong in this file. You could move them into example/specformer/utils, which means creating a new folder and putting these functions there.
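
As a rough illustration of the suggested refactor, the two helpers could live in a module like the one below. The path, the docstrings, and the body of normalize_graph shown here are assumptions sketched from the comments in this file, not the PR's actual code.

# example/specformer/utils.py  (suggested location; file name is an assumption)
import numpy as np


def feature_normalize(x):
    """Row-normalize a dense feature matrix so each row sums to 1."""
    rowsum = np.clip(x.sum(axis=1, keepdims=True), 1, 1e10)  # avoid division by zero
    return x / rowsum


def normalize_graph(g):
    """Return a standard normalized Laplacian L = I - D^{-1/2} A D^{-1/2} of adjacency matrix g."""
    deg = np.clip(g.sum(axis=1), 1, None)
    d_inv_sqrt = np.power(deg, -0.5)
    return np.eye(g.shape[0]) - d_inv_sqrt[:, None] * g * d_inv_sqrt[None, :]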


    return train_index, valid_index, test_index
# utils
class DotProductAttention_tlx(tlx.nn.Module):
Contributor

I think this class should be put into another file. You could move it into gammagl/models/attention.py, since other users may want to use this class. And remember to add a docstring explaining the function and its parameters.

        # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
        scores = tlx.ops.bmm(querys, tlx.nn.Transpose(perm=[0, 2, 1])(keys)) / math.sqrt(d)
        self.attn_weights = tlx.nn.Softmax(axis=-1)(scores)
        return tlx.ops.bmm(self.attn_weights, values)
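
For reference, a minimal sketch of what the relocated class in gammagl/models/attention.py could look like with a docstring; the constructor signature and the use of tlx.get_tensor_shape are assumptions, while the forward body follows the fragment above.

import math
import tensorlayerx as tlx


class DotProductAttention_tlx(tlx.nn.Module):
    """Scaled dot-product attention.

    Computes softmax(Q K^T / sqrt(d)) V over batched inputs of shape
    (batch, seq_len, d) and stores the attention weights in self.attn_weights.
    """

    def __init__(self):
        super().__init__()
        self.attn_weights = None

    def forward(self, querys, keys, values):
        d = tlx.get_tensor_shape(querys)[-1]
        scores = tlx.ops.bmm(querys, tlx.nn.Transpose(perm=[0, 2, 1])(keys)) / math.sqrt(d)
        self.attn_weights = tlx.nn.Softmax(axis=-1)(scores)
        return tlx.ops.bmm(self.attn_weights, values)
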
class MultiHeadAttention_tlx(tlx.nn.Module):
Contributor

Ditto.

        eeig = tlx.cast(eeig, dtype=tlx.float32)
        return self.eig_w(eeig)
class FeedForwardNetwork_tlx(tlx.nn.Module):
Contributor

What is this doing? Why not directly use two linear layers in the following class?
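
If the intent is simply a position-wise feed-forward block, the reviewer's suggestion amounts to something like the sketch below; the class name, layer widths, and the ReLU activation are assumptions (the original may use a different activation).

import tensorlayerx as tlx


class FeedForward_tlx(tlx.nn.Module):
    """Two-layer feed-forward block: Linear -> ReLU -> Linear."""

    def __init__(self, in_features, hidden_features, out_features):
        super().__init__()
        self.lin1 = tlx.nn.Linear(in_features=in_features, out_features=hidden_features)
        self.act = tlx.nn.ReLU()
        self.lin2 = tlx.nn.Linear(in_features=hidden_features, out_features=out_features)

    def forward(self, x):
        return self.lin2(self.act(self.lin1(x)))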

@dddg617 dddg617 merged commit 7b5f388 into BUPT-GAMMA:main Nov 30, 2023
dddg617 added a commit that referenced this pull request Jan 12, 2024
* add specformer.py and spec_trainer.py

* add readme.md

* integrate 3 directories into 1 python file.

* [Model] Update models

* nightly update

* nightly update

---------

Co-authored-by: dddg617 <[email protected]>