Hi,
I am looking into your code, but it seems that in models.py, `self.multi_head_att_layers` (self-attention) and `self.relation_attention_gcns` (cross-KG attention) use the same adjacency matrix, rather than a different adjacency matrix for each channel. Is my understanding correct?
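To make the question concrete, here is a minimal, self-contained sketch of the distinction being asked about. The `att_layer` function and both adjacency matrices are hypothetical illustrations, not the repository's actual code: it contrasts feeding the same adjacency matrix to both attention channels (what the question observes in models.py) with giving each channel its own mask (what the question expected).

```python
import numpy as np

def att_layer(features, adj):
    # Toy graph-attention step: mask pairwise scores with the
    # adjacency matrix, softmax over neighbors, then aggregate.
    scores = features @ features.T
    scores = np.where(adj > 0, scores, -1e9)  # block non-edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ features

# Hypothetical masks: intra-graph edges vs. cross-KG alignment edges.
adj_intra = np.array([[1, 1, 0],
                      [1, 1, 1],
                      [0, 1, 1]], dtype=float)
adj_cross = np.array([[1, 0, 1],
                      [0, 1, 0],
                      [1, 0, 1]], dtype=float)

x = np.eye(3)  # toy node features

# Shared adjacency (what the question observes): both channels
# aggregate over the same neighborhood structure.
h_self  = att_layer(x, adj_intra)
h_cross_shared = att_layer(x, adj_intra)

# Per-channel adjacency (what the question expected): the cross-KG
# channel uses its own edge set, so its output differs.
h_cross_separate = att_layer(x, adj_cross)
```

Running this, `h_self` and `h_cross_shared` are identical by construction, while `h_cross_separate` differs, which is exactly the behavioral gap the question is asking about.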