Tom_Drummond on Nostr:
nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqc9m22hkc5h6zgrwkz48crhcpw6vch2rf6j97746ugl3neys86jeqcr59k6 As I understand it, masking is used to obtain causality, so that each token only attends to the tokens preceding it (and itself).
Each token emits a query and a key, and the code computes all pairwise dot products between queries and keys (using a matmul) to create the attention matrix. This matrix is then 'masked' by subtracting a large number from all the anti-causal logits (everything above the leading diagonal, if rows correspond to queries).
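For concreteness, here is a minimal sketch of that masking step, assuming a PyTorch-style single-head layout (the shapes and variable names are illustrative, not from any particular codebase; masked_fill with -inf plays the role of "subtracting a large number"):

    import torch
    import torch.nn.functional as F

    T, d = 5, 8                      # sequence length, head dimension (illustrative)
    q = torch.randn(T, d)            # one query per token
    k = torch.randn(T, d)            # one key per token
    v = torch.randn(T, d)            # one value per token

    # All pairwise query-key dot products: rows = queries, columns = keys.
    logits = (q @ k.T) / d ** 0.5

    # Causal mask: entries above the leading diagonal are anti-causal
    # (a query looking at a later key), so push them towards -infinity.
    mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    logits = logits.masked_fill(mask, float("-inf"))

    weights = F.softmax(logits, dim=-1)   # each row now sums over past tokens only
    out = weights @ v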
So the first token attends only to itself, the second attends to the first token and itself, and so on. Overall only about half of the attention matrix is used, so the rows for early tokens are very wasteful, those in the middle are somewhat wasteful, and those near the end are barely wasteful at all.
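A tiny back-of-the-envelope illustration of that waste (purely illustrative, same assumptions as the sketch above): token i uses i+1 of the T entries in its row, so the whole T-by-T matrix uses T(T+1)/2 of its T*T entries.

    T = 5
    for i in range(T):
        print(f"token {i}: uses {i + 1}/{T} of its row")
    print(f"total: {T * (T + 1) // 2}/{T * T} entries used")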
But maybe I have misunderstood your question...?
Published at 2024-11-18 01:54:07