Mark Pesce on Nostr:
"We explore the novel application of LLMs to code optimization. We present a 7B-parameter transformer model trained from scratch to optimize LLVM assembly for code size... Our approach achieves a 3.0% improvement in reducing instruction counts over the compiler, outperforming two state-of-the-art baselines that require thousands of compilations. Furthermore, the model shows surprisingly strong code reasoning abilities, generating compilable code 91% of the time..."
https://arxiv.org/abs/2309.07062

Published at 2023-09-18 08:08:49

Event JSON
{
  "id": "1ec308e9272449164baf85d9b71a63caee00fab289ed7424d2c5f9e06744d627",
  "pubkey": "8b0ef49d11c147634fe81e5df544407f35ac0ca01d288082da576d0404fbbd8f",
  "created_at": 1695024529,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://arvr.social/users/mpesce/statuses/111085127584312552",
      "activitypub"
    ]
  ],
  "content": "\"We explore the novel application of LLMs to code optimization. We present a 7B-parameter transformer model trained from scratch to optimize LLVM assembly for code size...Our approach achieves a 3.0% improvement in reducing instruction counts over the compiler, outperforming two state-of-the-art baselines that require thousands of compilations. Furthermore, the model shows surprisingly strong code reasoning abilities, generating compilable code 91% of the time... \n\nhttps://arxiv.org/abs/2309.07062",
  "sig": "7b6a95e90db1306035503fe3ed3e7dd7dae344ec03cd23709a666ac6a72998fc83c752255518e2542266483f502d77e415985c4d19bc9a627b2fbc1663e5eb95"
}
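For context on the JSON above: under Nostr's NIP-01, the `id` field is the SHA-256 hash of a canonical serialization of the event array `[0, pubkey, created_at, kind, tags, content]`. The sketch below illustrates that computation in Python; it is a simplified illustration (real NIP-01 verification has exact escaping rules that `json.dumps` only approximates), and the sample values are drawn from the event above but the `content` here is a hypothetical stand-in, so the resulting digest will not match the `id` shown.

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    # NIP-01: the event id is the SHA-256 of the UTF-8 encoding of the
    # JSON array [0, pubkey, created_at, kind, tags, content],
    # serialized with no extra whitespace.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Sample call using fields from the event above, with a placeholder
# content string (so this digest is illustrative, not the real id):
event_id = nostr_event_id(
    pubkey="8b0ef49d11c147634fe81e5df544407f35ac0ca01d288082da576d0404fbbd8f",
    created_at=1695024529,
    kind=1,
    tags=[[
        "proxy",
        "https://arvr.social/users/mpesce/statuses/111085127584312552",
        "activitypub",
    ]],
    content="example note",
)
print(event_id)  # 64-character lowercase hex digest
```

The `sig` field is then a Schnorr signature over this id, made with the key behind `pubkey`, which is how relays and clients verify the note without trusting the server that delivered it.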