Since attention_mask is already passed as an input to the model (StructureTokenEncoder), and affine_mask is also computed inside the model's forward pass, why do we additionally need to pass sequence_id and chain_id to make sure the model masks out irrelevant positions when computing attention? I am also confused about the expected format of sequence_id and chain_id: does every residue in the same chain/sequence share the same id, and are positions that should not be attended to set to 0?
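For concreteness, here is a minimal sketch of how I currently imagine these inputs would be constructed for a single two-chain complex with padding. The tensor shapes, the id values, and the convention of using 0 at padded positions are my assumptions about the format, not something I found documented in the code:

```python
import torch

# Hypothetical batch: one complex with chain A (3 residues) and
# chain B (2 residues), padded to length 7. All conventions below
# are assumptions about the expected input format.

# 1 for real residues, 0 for padding positions.
attention_mask = torch.tensor([[1, 1, 1, 1, 1, 0, 0]], dtype=torch.bool)

# chain_id: every residue within the same chain shares one integer id
# (chain A -> 0, chain B -> 1); padded positions set to 0 (assumption).
chain_id = torch.tensor([[0, 0, 0, 1, 1, 0, 0]], dtype=torch.long)

# sequence_id: every residue belonging to the same sequence/complex in a
# packed batch shares one id; again 0 at padded positions (assumption).
sequence_id = torch.tensor([[0, 0, 0, 0, 0, 0, 0]], dtype=torch.long)

assert attention_mask.shape == chain_id.shape == sequence_id.shape
```

Is this roughly the intended layout, or do sequence_id and chain_id encode something different from what attention_mask already provides?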