GΛËLDUVΛL on Nostr:
Imagine a #LLM running locally on a smartphone...
It could power an offline assistant able to add an appointment to your calendar, open an app, and more, without #privacy issues.
This has become a reality with a proof of concept using the Phi-2 2.8B transformer model running on /e/OS.
It is slow, so not very practical until SoCs ship dedicated AI accelerators, but it works.
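One way such an assistant could work: the on-device model is prompted to answer with a small structured reply, and the app maps that reply to a system action. A minimal sketch of the dispatch step; the JSON reply schema and the action names here are my assumption for illustration, not taken from the /e/OS demo:

```python
import json

def dispatch(model_reply: str) -> str:
    """Map a structured LLM reply to a device action.

    The {"action": ...} schema is hypothetical: in a real assistant the
    model would be prompted to emit exactly this shape, and each branch
    would trigger a calendar insert or an app launch instead of
    returning a string.
    """
    try:
        cmd = json.loads(model_reply)
    except json.JSONDecodeError:
        # Local models sometimes emit free text; fail closed.
        return "unrecognized"
    if cmd.get("action") == "add_event":
        return f"calendar: {cmd.get('title', '?')} at {cmd.get('time', '?')}"
    if cmd.get("action") == "open_app":
        return f"launch: {cmd.get('package', '?')}"
    return "unrecognized"

# Example reply from a model instructed to answer with JSON only:
reply = '{"action": "add_event", "title": "Dentist", "time": "2024-03-15T10:00"}'
print(dispatch(reply))  # calendar: Dentist at 2024-03-15T10:00
```

Keeping the model's output constrained to a tiny schema like this is what makes a slow local model usable: the generation can be short, and everything privacy-sensitive stays on the device.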
{
  "id": "60e19b01e19564a360912bed867f352dba286c85147e3919a6ae51fd1b8775b0",
  "pubkey": "43a1ad89dd46954c41b25a1d939ac3dae203a9228fe29b194c6406aa414e319f",
  "created_at": 1710263591,
  "kind": 1,
  "tags": [
    ["t", "llm"],
    ["t", "privacy"],
    ["t", "ai"],
    ["t", "mobile"],
    ["proxy", "https://mastodon.social/users/gael/statuses/112083834721975376", "activitypub"]
  ],
  "content": "Imagine a #LLM running locally on a smartphone...\n\nIt could be used to fuel an offline assistant that would be able to easily add an appointment to your calendar, open an app, etc. without #privacy issues.\n\nThis has come to reality with this proof of concept using the Phi-2 2.8B transformer model running on /e/OS.\n\nIt is slow, so not very usable until we have dedicated chips on SoCs, but works.\n\n🙏 Stypox \n\n#AI on #mobile 👇 👇 👇\n\nhttps://files.mastodon.social/media_attachments/files/112/083/822/794/671/982/original/1ccef2a7bd2ac7b0.mp4",
  "sig": "cfe34b45f0e21cf2ae1687da2e7f327cc22ae411e1ee21de64b930875b1a4db37517f406fd0b152ef938b9955a8fc75b0a4dbd062f412dca8e39a4df14812dfe"
}
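The raw event above is a standard Nostr kind-1 note. Per NIP-01, its `id` field is the lowercase-hex SHA-256 of the canonical JSON serialization of the array `[0, pubkey, created_at, kind, tags, content]` (no whitespace, UTF-8, no extra escaping). A minimal sketch of recomputing an id with Python's standard library; the sample call below uses shortened placeholder values, not the full event:

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    """Compute a Nostr event id per NIP-01: SHA-256 of the canonical
    serialization [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no whitespace between tokens
        ensure_ascii=False,     # keep UTF-8 characters as-is
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Illustrative call with placeholder tag/content values:
event_id = nostr_event_id(
    pubkey="43a1ad89dd46954c41b25a1d939ac3dae203a9228fe29b194c6406aa414e319f",
    created_at=1710263591,
    kind=1,
    tags=[["t", "llm"], ["t", "privacy"]],
    content="Imagine a #LLM running locally on a smartphone...",
)
print(event_id)  # a 64-character lowercase hex string
```

Any change to the serialized fields changes the id, and the `sig` is a Schnorr signature over that id, which is how relays and clients verify that the note really came from this pubkey.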