Blake Houghton on Nostr:
Finally? I guess it's good that they finally entered the industry, but it's looking like AMD will always play the catch-up game. How long can they keep that up, I wonder? 🤔
_____________________________________________
AMD unveils its first 1B-parameter large language model, AMD OLMo, with strong reasoning capabilities
ā¢ AMD has introduced AMD OLMo, its first series of fully open-source 1-billion-parameter large language models (LLMs), trained on the company's Instinct MI250 GPUs. These LLMs offer strong reasoning, instruction-following, and chat capabilities, aiming to enhance AMD's position in the AI industry and enable clients to deploy them with AMD hardware.
ā¢ The AMD OLMo models were trained on a vast dataset of 1.3 trillion tokens, utilizing 16 nodes with four AMD Instinct MI250 GPUs each. The training process involved three steps: pre-training on a subset of Dolma v1.7, supervised fine-tuning on various datasets, and alignment to human preferences using Direct Preference Optimization.
ā¢ In testing, AMD OLMo models demonstrated impressive performance against similar open-source models in general reasoning and multi-task understanding benchmarks. The two-phase supervised fine-tuning approach significantly improved accuracy, and the final AMD OLMo 1B SFT DPO model outperformed other chat models by at least 2.60% on average.
ā¢ AMD also evaluated the models on responsible AI benchmarks, such as toxicity, bias, and truthfulness, and found that they were comparable to similar models in handling ethical and responsible AI tasks.
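The third training step mentioned above, Direct Preference Optimization (DPO), aligns a fine-tuned model to human preferences without a separate reward model. As a rough illustration of the idea (not AMD's actual training code), the standard DPO loss for a single preference pair can be sketched from summed log-probabilities; the function name and inputs here are hypothetical:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) response pair.

    Inputs are summed log-probabilities of each response under the
    policy being trained and under a frozen reference model (the SFT
    checkpoint); beta scales the implicit reward.
    """
    # Implicit rewards: how much more likely each response is under
    # the policy than under the reference model, scaled by beta.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # -log(sigmoid(margin)): loss shrinks as the policy learns to
    # prefer the chosen response over the rejected one.
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

If the policy and reference assign identical probabilities, the margin is zero and the loss is log 2; as the policy shifts probability mass toward the chosen response, the loss falls toward zero.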
https://www.tomshardware.com/tech-industry/artificial-intelligence/amd-unveils-amd-olmo-its-first-1b-parameter-llm-with-strong-reasoning

Published at 2024-11-06 16:03:45