⚗️alchemist⚗️ on Nostr
nemesis :blackhole: (npub10zr…yg26) The question on my mind is: how do we know that "all of math" won't turn out to be a narrow problem in the same sense? Naively, the main question seems to be whether the higher-level abstractions that guide mathematical thought can be implicitly encoded in some general-purpose neural architecture, or whether the ML system needs to encode them explicitly. Maybe every neural architecture has a finite capacity for storing higher-level abstractions.
(To be explicit, the higher-level abstractions in geometry would be concepts like the orthocenter or the nine-point circle, which let you reason about the problem in a compressed form. From their paper, it sounds like some of these abstractions were hard-coded (like "construct an excircle"), but I can believe that the LLM was also learning its own abstractions in some unintelligible form.)
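One concrete illustration of that kind of compression (a standard fact, offered as an aside rather than anything from the paper): if the circumcenter O of triangle ABC is placed at the origin, the orthocenter is simply H = A + B + C, so a single named point packages three altitude constructions, and the proof that they concur, into one symbol that later reasoning steps can reuse.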
Published at 2024-01-19 18:16:36
Event JSON
{
"id": "75fa88451064f5a04e523b2c09b6addfa9cad89a9517cd202f1607f114dfc12c",
"pubkey": "0af5f8f4be4b08e199bf1fa4f01e4ab7dd35cd11a62afc72f251b7036c5a2eb8",
"created_at": 1705688196,
"kind": 1,
"tags": [
[
"p",
"7887899ffc4f349a118163fff0128319d56429df6d62a97665488f11c266989d",
"wss://relay.mostr.pub"
],
[
"e",
"43c865adb6a440bb1c4ae0cd643284084e18d9ea0576c451cd592c769beb5962",
"wss://relay.mostr.pub",
"reply"
],
[
"proxy",
"https://cawfee.club/objects/3c4f3dc5-2ef3-4758-9b79-50e12336691a",
"activitypub"
]
],
"content": "nostr:npub10zrcn8lufu6f5yvpv0llqy5rr82kg2wld432jan9fz83rsnxnzws2xyg26 The question on my mind is how do we know that \"all of math\" won't turn out to be a narrow problem in the same sense? Naively, it seems like the main question is whether the higher-level abstractions which guide mathematical thought can be implicitly encoded in some general-purpose neural architecture or whether the ML system needs to be able to encode them explicitly. Maybe every neural architecture has a finite capacity to store higher-level abstractions. \n\n(To be explicit, the higher-level abstractions in geometry would be concepts like orthocenter or nine-point circle, etc. which allow you to reason about the problem in a compressed state. From their paper, it sounds like some of these abstractions were hard-coded (like \"construct an excircle\"), but I can believe that the LLM was also learning its own abstractions in some unintelligible form.)",
"sig": "4b5416e46e51f6fca53176bb4b427f4f731b4c0022373425a8a7701669db015c6ef1eb56fd78dc1ba5bb7b84cf58da5e01e9da79d71fb58ea20b78f1a44b90b1"
}
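For reference, the "id" field above is not arbitrary: under Nostr's NIP-01, it is the SHA-256 hash of a canonical serialization of the event, and "sig" is a BIP-340 Schnorr signature over that hash. A minimal sketch in Python, assuming the standard NIP-01 field order [0, pubkey, created_at, kind, tags, content]:

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # NIP-01 canonical form: a JSON array in this exact field order,
    # serialized with no extra whitespace and raw UTF-8 (no \uXXXX escapes).
    payload = [0, event["pubkey"], event["created_at"],
               event["kind"], event["tags"], event["content"]]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

Running this on the event above should reproduce its "id" verbatim; verifying "sig" against "pubkey" over that id then authenticates the post.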