
Synchron, a company specializing in brain-computer interface (BCI) technologies, has announced the development of 'Chiral', the first cognitive AI model built on neural data recorded directly from the human brain, marking a significant step toward an era of 'cognitive AI'. The announcement was made at the NVIDIA GTC 2025 conference, where the company demonstrated its technology working with NVIDIA Holoscan and Apple Vision Pro, allowing users of its Stentrode implant to control digital and physical environments directly through thought.

In a live video, a patient with amyotrophic lateral sclerosis (ALS) controlled their smart home without hands, voice, or touch, relying solely on mental commands. Using a combination of eye tracking and direct thought, the user adjusted the lighting, music, temperature, and household appliances.

Synchron's founder and CEO, Dr. Tom Oxley, explained that the new model uses learning techniques driven by neural data to develop cognitive capabilities that can improve users' lives. 'Our implantable device technology lets us collect vast amounts of data, which could make this technology a future standard, much like pacemakers,' he added.

The company outlined a three-stage roadmap toward cognitive AI, developed in collaboration with NVIDIA: first, achieving high-resolution decoding of neural signals using Holoscan; second, integrating environmental awareness through the Omniverse and Cosmos models; and third, training the 'Chiral' model on data collected from clinical trials.

Synchron emphasized that the technology will significantly affect how human intentions are understood and interpreted, noting its potential to collect data at population scale and thereby accelerate the development of intelligent models that learn and improve continuously.

David Niu, NVIDIA's head of business development, said: 'We are entering a new era of computing in which technology not only responds to humans but also grants them new, unprecedented abilities, especially for people with disabilities, enhancing their independence and their ability to communicate with the world.'

Synchron expects this work to lay the foundation for a large-scale brain model, analogous to today's models trained on text and images, with the next step being to train AI on human brain signals.
https://www.aletihad.ae/news/%D8%AB%D9%82%D8%A7%D9%81%D8%A9/4561214/%D8%B9%D8%B5%D8%B1-%D8%AC%D8%AF%D9%8A%D8%AF-%D9%8A%D8%A8%D8%AF%D8%A3----%D9%83%D9%8A%D8%B1%D8%A7%D9%84--%D8%A3%D9%88%D9%84-%D9%86%D9%85%D9%88%D8%B0%D8%AC-%D9%84%D9%84%D8%B0%D9%83%D8%A7%D8%A1-%D8%A7%D9%84%D8%A7%D8%B5%D8%B7%D9%86%D8%A7%D8%B9%D9%8A-%D9%8A%D9%82%D8%B1%D8%A3-%D8%A7%D9%84%D8%B9%D9%82