michabbb on Nostr:
#LightningAI Research Engineer Sebastian #Raschka delivers a comprehensive workshop on building #LLMs from the ground up
• Technical specs: 12 transformer layers, 768-dimensional embeddings, and a 50,257-token vocabulary using #OpenAI's BPE tokenizer (a config sketch follows this list)
• Hands-on coding focus: data preparation, architecture construction, pre-training implementation, and fine-tuning processes
• Core components covered: input data structuring, tokenization methods, model checkpoint management, and text generation (a tokenizer demo follows this list)
• Practical approach emphasizes understanding through building rather than using pre-made #ML libraries
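The specs above correspond to the smallest GPT-2 configuration (roughly 124M parameters). A minimal sketch of such a config in Python, assuming illustrative key names and filling in the values the post does not state (context length, head count, dropout) from the standard GPT-2 small setup; the workshop's actual code may differ:

    # Hyperparameters quoted in the post, plus assumed GPT-2-small defaults
    GPT_CONFIG_124M = {
        "vocab_size": 50257,     # BPE vocabulary size (OpenAI's GPT-2 tokenizer)
        "context_length": 1024,  # assumed: standard GPT-2 context window
        "emb_dim": 768,          # embedding / hidden dimension
        "n_layers": 12,          # transformer blocks
        "n_heads": 12,           # assumed: 12 heads of 64 dims each (768 / 12)
        "drop_rate": 0.1,        # assumed dropout rate
    }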
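The BPE tokenizer referenced in the specs ships as OpenAI's open-source tiktoken package; a quick sketch verifying the stated 50,257-token vocabulary (the sample sentence is arbitrary):

    import tiktoken

    # Load OpenAI's GPT-2 BPE tokenizer
    enc = tiktoken.get_encoding("gpt2")
    print(enc.n_vocab)  # 50257, matching the vocabulary size in the specs

    # Round-trip an arbitrary string through the tokenizer
    ids = enc.encode("Building an LLM from scratch")
    print(ids)              # list of token IDs
    print(enc.decode(ids))  # "Building an LLM from scratch"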
Source: https://youtu.be/quh7z1q7-uc?si=TUD43-uVtGLjCuq1
Published at: 2024-10-29 23:31:40
Event JSON:
{
  "id": "7b1f31787f6ddeef2b5d8bdac160fbb00e15f2440948abe78fae0bd74b583783",
  "pubkey": "129f83898c7008d335771fe681ecf979e7767ad958c552ff85de962ba2f775be",
  "created_at": 1730244700,
  "kind": 1,
  "tags": [
    ["t", "lightningai"],
    ["t", "raschka"],
    ["t", "llms"],
    ["t", "openai"],
    ["t", "ml"],
    ["proxy", "https://social.vivaldi.net/users/michabbb/statuses/113393316687038261", "activitypub"]
  ],
  "content": "#LightningAI Research Engineer Sebastian #Raschka delivers comprehensive workshop on building #LLMs from ground up\n\n• Technical specs: 12 transformer layers, 768-dimensional embeddings, 50,257 vocabulary size using #OpenAI's BPE tokenizer\n\n• Hands-on coding focus: Data preparation, architecture construction, pre-training implementation, and fine-tuning processes\n\n• Core components covered: Input data structuring, tokenization methods, model checkpoint management, and text generation\n\n• Practical approach emphasizes understanding through building rather than using pre-made #ML libraries\n\nSource: https://youtu.be/quh7z1q7-uc?si=TUD43-uVtGLjCuq1",
  "sig": "888199a021c185f5a7247f759f3fab8f72d0ad93fdb714c22a051f763f09e3a1ee91b4443de883dd4c795951401c2da8d213af490e13ceccf98144033c4cd65e"
}