2025-03-10 16:40:39

Terence Tao on Nostr: Today I was trying to reconstruct the proof of a standard (to experts) result in ...

Today I was trying to reconstruct the proof of a standard (to experts) result in graph theory, namely that the "triangle removal lemma" of Ruzsa and Szemeredi implies that a graph on n vertices composed of n induced matchings contains only o(n^2) edges. My main three options here were: try to work it out by pen and paper; do a traditional web search; or ask a large language model. In this case, I chose the third option, and in a few seconds I received a perfectly good answer that explained the implication correctly: https://chatgpt.com/share/67cf13cf-53dc-800e-a382-e4ece8341a6d
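[Editorial note: for readers unfamiliar with the result, here is a sketch of the standard derivation — my reconstruction, not text from the original post or the linked chat.]

```latex
% Sketch: triangle removal lemma => induced matchings bound (Ruzsa--Szemeredi).
% Setup: $G$ on vertex set $[n]$, with $E(G)$ partitioned into induced
% matchings $M_1,\dots,M_n$. Build a tripartite graph $H$ on parts
% $A=[n]$, $B=[n]$ (two copies of the vertices) and $C=[n]$ (matching indices):
%   $a\sim c$ iff $a$ is matched in $M_c$;  $b\sim c$ likewise;
%   $a\sim b$ iff $ab\in E(G)$.
% Each edge $uv \in M_i$ yields the triangle $\{u,v,i\}$ in $H$. These $e(G)$
% triangles are pairwise edge-disjoint: the edge $ui$ determines $v$ (the
% $M_i$-partner of $u$), and $uv$ determines $i$ (the matchings partition
% $E(G)$). Inducedness forces these to be the only triangles: if $\{u,v,i\}$
% is a triangle then $u,v$ are both matched in $M_i$ and $uv \in E(G)$, so
% $uv \in M_i$ since $M_i$ is induced, i.e. $v$ is $u$'s partner. Hence $H$
% has at most $n^2 = o(n^3)$ triangles, so by the removal lemma $o(n^2)$ edge
% deletions make $H$ triangle-free; each deletion destroys at most one of the
% $e(G)$ edge-disjoint triangles, giving $e(G) = o(n^2)$.
```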

This satisfied my immediate request - and indicated a good use case for LLMs for quickly providing details of some arguments standard in one's field, that one can then verify for correctness - but then I got curious and asked the model to also explain another standard consequence of the triangle removal lemma, namely the (6,3) theorem of Ruzsa and Szemeredi concerning the size of a 3-uniform hypergraph with certain forbidden configurations. Here the results were decidedly more mixed. The initial answer had the right general strategy - use the hypergraph to encode a graph - but lacked all the key details. When I pressed it further, it did not mention the two most essential ideas - the use of the (6,3) condition to limit unwanted triangles in the encoded graph, or the initial reduction to linear hypergraphs - but with additional prompting, it was able to reconstruct these components and eventually provide an essentially correct proof of the derivation. But I had to guide it with rather explicit hints, which I could only do because I had looked up the proof first via a traditional web search. (1/2)
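[Editorial note: the two ideas mentioned above fit together roughly as follows — again my reconstruction of the standard argument, not text from the post.]

```latex
% Sketch: triangle removal lemma => (6,3) theorem.
% (6,3) condition: no 6 vertices of the 3-uniform hypergraph $\mathcal{H}$
% span 3 or more hyperedges.
% Step 1 (reduction to linear hypergraphs): under (6,3), any pair of vertices
% lies in at most 2 hyperedges (three hyperedges through a common pair span
% at most 5 vertices). So in the auxiliary graph on hyperedges, joining two
% hyperedges when they share two vertices, degrees are at most 3, and a
% greedy independent set gives a linear subhypergraph (any two hyperedges
% share at most one vertex) with at least $|E(\mathcal{H})|/4$ hyperedges.
% Step 2: for linear $\mathcal{H}$, let $G$ be the shadow graph on the same
% $n$ vertices ($uv \in E(G)$ iff some hyperedge contains both). Each
% hyperedge $\{x,y,z\}$ gives the triangle $xyz$; linearity makes these
% $|E(\mathcal{H})|$ triangles edge-disjoint. The (6,3) condition rules out
% any other triangle: a triangle $uvw$ not contained in one hyperedge needs
% three distinct hyperedges covering the pairs $uv$, $vw$, $uw$, and these
% span at most $3 + 3 = 6$ vertices while carrying 3 hyperedges. So $G$ has
% only $|E(\mathcal{H})| \le \binom{n}{2} = o(n^3)$ triangles; the removal
% lemma deletes them all with $o(n^2)$ edge removals, and edge-disjointness
% gives $|E(\mathcal{H})| = o(n^2)$.
```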
Author Public Key
npub1hsf727dlfy55vvm5wuqwyh457uwsc24pxn5f7vxnd4lpvv8phw3sjm7r3k