Event JSON
{
  "id": "26aca91e93563f1c83cf12e848cbc0869d3916210614aee9f0b569156962219c",
  "pubkey": "2daab2afd9f0fe7fae140cc88cdc4a112b2d5be3780d32b9d6a6cb097c11bde2",
  "created_at": 1705954483,
  "kind": 1,
  "tags": [
    [
      "p",
      "db6c47ce5071184d6606b9755e84c5ceb7b72cfddd5dc05a2d85035313bd629a",
      "wss://relay.mostr.pub"
    ],
    [
      "p",
      "faaf81ec8b66e5875dcb43e3b02a509ff4a36869da2ba49ed4bab1a1c5b0b328",
      "wss://relay.mostr.pub"
    ],
    [
      "e",
      "dfcad5131d368ae808b37d7a501cd6d0d8d2a0303d473818c57fc22cd7976a77",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://fire.asta.lgbt/notes/9osxkdkwd4a902zp",
      "activitypub"
    ]
  ],
  "content": "nostr:npub1mdky0njswyvy6esxh964apx9e6mmwt8am4wuqk3ds5p4xyaav2dqvlaptk In the context of the paper/article, is this as irrelevant/out of context as it seems? Because he seems to not so subtly conflate the very basic notion of a successfully trained model with the arguments made in Stochastic Parrots: that is, he almost seems to be saying here that Bender, et al, were arguing that an LLM can only output what the training input was, which is an entirely separate and irrelevant thing (and that is definitely not an argument that was made).\n\nThe idea that it can generate text it \"didn't see\" is pretty basic; hell, a Markov chain can do it. Is this quote more... appropriate, and less bullshit, in context?",
  "sig": "a49f39513ebd1ee9bb236c615ed12ca4e7a0ddb1984b60ed939e542651f68f1dfadb73734f0dc77936863221d3d844b06cca617c639c026a730d793937bfe600"
}
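For reference, the "id" field above is not arbitrary: per NIP-01 it is the SHA-256 hash of the canonical serialization [0, pubkey, created_at, kind, tags, content], and "sig" is a BIP-340 Schnorr signature by "pubkey" over that same 32-byte hash. A minimal Python sketch of the id computation follows; it assumes json.dumps's default escaping, which matches NIP-01's serialization rules for the common cases seen in this event but not necessarily for every control character.

import hashlib
import json

def event_id(event: dict) -> str:
    # Canonical NIP-01 serialization: a JSON array with no extra
    # whitespace, UTF-8 encoded, then hashed with SHA-256.
    payload = [0, event["pubkey"], event["created_at"],
               event["kind"], event["tags"], event["content"]]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# event_id(event) should reproduce the "id" field above; verifying
# "sig" additionally requires a secp256k1 Schnorr (BIP-340) check
# against "pubkey", which is omitted in this sketch.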