Danie on Nostr:
This Raspberry Pi project uses AI to tell visually impaired people what's around them
The idea behind this "third eye" project is that the user wears glasses with a small camera on them. The camera feeds images to a Seeed Studio XIAO ESP32S3 Sense and a Raspberry Pi 1 Model B+, which uses object recognition to work out what's in front of the person. The boards then generate a text-based description of the scene and relay it to the wearer as speech, via text-to-speech through headphones.
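The XDA article doesn't reproduce the project's code, but the pipeline it describes is straightforward. Below is a minimal Python sketch of the Raspberry Pi side, assuming an OpenCV-readable camera feed, a MobileNet-SSD Caffe model for object detection, and pyttsx3 for offline text-to-speech; the model files, labels, and confidence threshold are illustrative assumptions, not taken from the project.

    import cv2
    import pyttsx3

    # Labels for the 20-class PASCAL VOC MobileNet-SSD model (illustrative choice).
    LABELS = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
              "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
              "horse", "motorbike", "person", "pottedplant", "sheep",
              "sofa", "train", "tvmonitor"]

    # Hypothetical model file names; any OpenCV-supported detector would do.
    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                                   "MobileNetSSD_deploy.caffemodel")
    engine = pyttsx3.init()    # offline TTS; audio goes out to the headphones
    cap = cv2.VideoCapture(0)  # local camera here; the real build gets frames
                               # from the XIAO ESP32S3 Sense

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MobileNet-SSD expects 300x300 input with these scale/mean values.
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()  # shape: (1, 1, N, 7)

        seen = []
        for i in range(detections.shape[2]):
            confidence = detections[0, 0, i, 2]
            if confidence > 0.5:    # illustrative threshold
                class_id = int(detections[0, 0, i, 1])
                seen.append(LABELS[class_id])

        if seen:
            # Turn the detections into a short spoken description.
            sentence = "I can see " + ", ".join(sorted(set(seen)))
            engine.say(sentence)
            engine.runAndWait()

Since cv2.VideoCapture also accepts a URL string, pointing it at a stream served by the ESP32S3 board instead of a local camera would be the natural change for the setup the article describes.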
It would be interesting to hear how useful the spoken descriptions actually are to the listener in practice. But I'd imagine, too, that anything like this can be further trained and improved.
See https://www.xda-developers.com/raspberry-pi-ai-visually-impaired/
#technology #opensource #AI #vision
Published at 2024-08-26 09:13:54
Event JSON:
{
  "id": "ebb75c5a27d15a19da6f79b825eb60bafacb5254d7356a0cffe90b2a47858c28",
  "pubkey": "42a41978c51cb00695a18de6c9754b90e208dd31d2851e7c69104899c1aea03e",
  "created_at": 1724663634,
  "kind": 1,
  "tags": [
    ["t", "technology"],
    ["t", "opensource"],
    ["t", "AI"],
    ["t", "vision"],
    [
      "imeta",
      "url https://image.nostr.build/c286c501664dc6a38615b4103b47fe123b55d2e9ee46d14b0514bef8fb766e60.jpg",
      "ox c286c501664dc6a38615b4103b47fe123b55d2e9ee46d14b0514bef8fb766e60",
      "x 08b357823da6566f678622dbdbe2e71d4c70da903c5326eff5f9336f5614973e",
      "m image/jpeg",
      "dim 745x414",
      "bh LI8Zc5.T%#yCs;RPo#bbWBaKt8kC",
      "blurhash LI8Zc5.T%#yCs;RPo#bbWBaKt8kC",
      "thumb https://image.nostr.build/thumb/c286c501664dc6a38615b4103b47fe123b55d2e9ee46d14b0514bef8fb766e60.jpg"
    ]
  ],
  "content": "This Raspberry Pi project uses AI to tell visually impaired people what's around them\n\nThe idea behind this \"third eye\" project involves the patient wearing glasses with a little camera on them. This camera feeds information to a Seeed Studio XIAO ESP32S3 Sense and a Raspberry Pi 1 Model B+, which uses object recognition to work out what's in front of the person. The boards then convert a text-based description of what's going on in front of the wearer and relay it via text-to-speech through the headphones to the wearer.\n\nIt would be interesting to hear how the nature of what is described, audibly, is useful to the listener. But I'd imagine too that anything like this can be further trained and improved.\n\nSee https://www.xda-developers.com/raspberry-pi-ai-visually-impaired/\n\n#technology #opensource #AI #vision\nhttps://image.nostr.build/c286c501664dc6a38615b4103b47fe123b55d2e9ee46d14b0514bef8fb766e60.jpg",
  "sig": "0459c76f83c6137c7f43b61beadea740237deb41150d3952922bad2967194518aced5b0d2fdcfb9784b4f03c09132b0b102c7bf52fd487a5133100f9f5e52ba1"
}
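A note on the raw event above: under Nostr's NIP-01, the "id" field is the SHA-256 hash of a canonical JSON serialization of the event, and "sig" is a Schnorr signature over that id from "pubkey". Here is a minimal Python sketch of the id check (the signature check would additionally need a secp256k1 library, so it's omitted):

    import hashlib
    import json

    def event_id(event: dict) -> str:
        # NIP-01 canonical form: the JSON array
        # [0, pubkey, created_at, kind, tags, content]
        # serialized with no whitespace and non-ASCII left unescaped.
        serialized = json.dumps(
            [0, event["pubkey"], event["created_at"], event["kind"],
             event["tags"], event["content"]],
            separators=(",", ":"),
            ensure_ascii=False,
        )
        return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

    # For a well-formed event, event_id(event) should reproduce its "id",
    # here ebb75c5a27d15a19da6f79b825eb60bafacb5254d7356a0cffe90b2a47858c28.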