Researchers at the University of Texas at Austin have developed an AI-based decoder that can translate brain activity into text without invasive procedures, opening up possibilities for restoring speech in people with conditions such as stroke or motor neurone disease. Unlike previous language-decoding systems, which required surgical implants, this decoder works from fMRI scan data and can accurately reconstruct speech while individuals listen to or imagine a story. The lead neuroscientist behind the project expressed surprise and excitement at its success after 15 years of work. The approach works around fMRI's limited temporal resolution: rather than tracking activity word by word in real time, the decoder employs large language models, similar to those behind OpenAI's ChatGPT, to interpret the semantic meaning of speech from patterns of neuronal activity.
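To illustrate the idea at a very high level, the sketch below shows one way such a decoder can work: a language model proposes candidate phrases, an encoding model predicts the brain response each phrase should evoke, and the phrase whose predicted response best matches the measured scan is kept. Everything here (the toy embed_text features, the random linear encoding model, the fixed candidate list) is a hypothetical stand-in for illustration, not the published decoder.

```python
# Toy sketch of semantic decoding: score candidate phrases by how well an
# encoding model's predicted fMRI response matches the measured response.
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 500   # voxels in a language-responsive region (illustrative)
EMBED_DIM = 64

def embed_text(text: str) -> np.ndarray:
    """Stand-in for language-model features of a candidate phrase."""
    local = np.random.default_rng(abs(hash(text)) % (2**32))
    return local.standard_normal(EMBED_DIM)

# Encoding model: a linear map from text features to predicted voxel responses.
# In the real system this is fit on many hours of a subject's story-listening scans.
W = rng.standard_normal((N_VOXELS, EMBED_DIM)) * 0.1

def predicted_response(text: str) -> np.ndarray:
    return W @ embed_text(text)

def decode(measured: np.ndarray, candidates: list[str]) -> str:
    """Pick the candidate whose predicted response best matches the scan."""
    scores = {c: -np.linalg.norm(predicted_response(c) - measured) for c in candidates}
    return max(scores, key=scores.get)

# Simulate a scan evoked by the "true" phrase, then decode it from a candidate list.
true_phrase = "she opened the door and stepped outside"
measured = predicted_response(true_phrase) + 0.05 * rng.standard_normal(N_VOXELS)
candidates = [
    "she opened the door and stepped outside",
    "he closed the window before the storm",
    "the dog ran across the empty field",
]
print(decode(measured, candidates))
```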
In a groundbreaking study published in Nature Chemical Biology, scientists from McMaster University and the Massachusetts Institute of Technology harnessed artificial intelligence (AI) to discover a new antibiotic named abaucin. The researchers trained an AI model on molecules with known antibacterial activity and used it to screen a vast library of candidate compounds, swiftly narrowing the field to several hundred promising structures, among which abaucin stood out. Laboratory tests confirmed abaucin's efficacy against Acinetobacter baumannii, a superbug ranked as a critical threat by the World Health Organization, and in a mouse wound-infection model abaucin effectively suppressed the infection. This AI-driven breakthrough showcases how AI can accelerate the discovery of novel treatments and help combat antibiotic-resistant pathogens.
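As a rough illustration of that kind of virtual screen, the sketch below fits a classifier on compounds with known activity and uses it to rank an unseen library. It assumes RDKit and scikit-learn are available, and the SMILES strings and labels are placeholders, not data from the study.

```python
# Sketch of AI-guided screening: fit a model on compounds with known
# antibacterial activity, then rank a library of unseen molecules by
# predicted activity so only the top candidates go to the lab.
# All SMILES strings and labels below are placeholders, not study data.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles: str) -> np.ndarray:
    """Encode a molecule as a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    bits = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    arr = np.zeros((2048,))
    DataStructs.ConvertToNumpyArray(bits, arr)
    return arr

# Training set: compounds already tested in the lab (1 = inhibits growth, 0 = inactive).
train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCCCCCCCCC(=O)O"]
train_labels = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(np.stack([fingerprint(s) for s in train_smiles]), train_labels)

# Virtual screen: score an unseen chemical library and surface the best candidates.
library = ["CCN", "c1ccc2[nH]ccc2c1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
scores = model.predict_proba(np.stack([fingerprint(s) for s in library]))[:, 1]
for smiles, score in sorted(zip(library, scores), key=lambda pair: -pair[1]):
    print(f"predicted activity {score:.2f}  {smiles}")
```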
Nvidia's valuation soared to $1tn in May, making it the fifth most valuable American company and a notable beneficiary of the AI frenzy. Founded in 1993, Nvidia first gained prominence by creating graphics processing units (GPUs) for video games, and over its 30 years in the industry it has expanded by repurposing those chips to power automated driving systems, such as those in Tesla vehicles, and data centres. Its prominence has reached new heights with the current surge of interest in generative AI: while major competitors fought for market share in established sectors such as data centre operations, Nvidia invested in developing chips tailored for diverse AI applications, a strategy that has now granted it a significant advantage in the AI landscape.
An AI system called BacterAI is transforming scientific experimentation by enabling robots to autonomously conduct up to 10,000 experiments a day. While focused on bacteria research, the approach holds immense potential for accelerating discoveries in fields such as medicine, agriculture, and environmental science: conventional methods have left roughly 90% of bacteria unexplored, largely because of how time- and resource-intensive the research is, and automated experimentation offers a way around that bottleneck. With support from the National Institutes of Health and NVIDIA, the research team ran the BacterAI platform in a closed loop, adjusting each morning's experiments based on the previous day's results. In just nine days, it achieved a remarkable 90% accuracy in its predictions.
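That daily plan-run-learn cycle can be sketched roughly as below. The simulated_lab() function, the batch size, and the uncertainty-sampling heuristic are all illustrative stand-ins, not the BacterAI implementation.

```python
# Sketch of a closed-loop experimentation cycle: each "day" a model proposes
# a batch of growth-medium recipes, a (here simulated) robotic lab runs them,
# and the results retrain the model before the next day's plan is drawn up.
import random
from sklearn.ensemble import RandomForestClassifier

N_INGREDIENTS = 20      # e.g. amino acids that may be present in the medium
BATCH_PER_DAY = 1000    # experiments the robots run per day (illustrative)

def simulated_lab(recipe: list[int]) -> int:
    """Stand-in for the robotic lab: 1 if the bacterium grows on this medium."""
    essential = {0, 3, 7}                     # hidden ground truth to discover
    return int(all(recipe[i] for i in essential))

def random_recipe() -> list[int]:
    return [random.randint(0, 1) for _ in range(N_INGREDIENTS)]

model = RandomForestClassifier(n_estimators=100, random_state=0)
history_X: list[list[int]] = []
history_y: list[int] = []

for day in range(1, 10):
    # Plan: sample candidate recipes; once a model exists, prefer the ones it
    # is least certain about (uncertainty sampling).
    candidates = [random_recipe() for _ in range(10 * BATCH_PER_DAY)]
    if history_X:
        probs = model.predict_proba(candidates)[:, 1]
        order = sorted(range(len(candidates)), key=lambda i: abs(probs[i] - 0.5))
        candidates = [candidates[i] for i in order]
    batch = candidates[:BATCH_PER_DAY]

    # Run: the robots execute the day's experiments.
    results = [simulated_lab(r) for r in batch]

    # Learn: fold the new results into the model before planning tomorrow.
    history_X.extend(batch)
    history_y.extend(results)
    model.fit(history_X, history_y)

    # Track predictive accuracy on fresh, unseen recipes.
    test = [random_recipe() for _ in range(1000)]
    acc = model.score(test, [simulated_lab(r) for r in test])
    print(f"day {day}: {len(history_y)} experiments total, held-out accuracy {acc:.2f}")
```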
Advancements in machine learning and speech recognition technology have improved access to information, especially for people who rely on voice technology. However, the scarcity of labelled data for many languages has been a major hurdle to building high-quality machine-learning models. Addressing this challenge, the Meta-led Massively Multilingual Speech (MMS) project has made significant progress in broadening language coverage and improving speech recognition performance. By combining self-supervised learning with a diverse dataset of religious readings that have been translated and recorded in many languages, the MMS project has expanded language support from the roughly 100 languages covered by existing speech recognition models to an impressive 1,100 languages.
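As a rough usage sketch, the MMS recognition models have been released publicly (for example as "facebook/mms-1b-all" on Hugging Face), and something like the following should transcribe 16 kHz audio in a chosen language via the transformers library. The random waveform stands in for a real recording, and the exact adapter-loading calls are best checked against the model card.

```python
# Sketch of multilingual transcription with a published MMS checkpoint via the
# Hugging Face transformers library. The checkpoint name, language code and
# adapter-loading calls follow the public model card; the random waveform
# below is a placeholder for a real 16 kHz mono recording.
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

MODEL_ID = "facebook/mms-1b-all"
LANG = "fra"  # ISO code of the target language, e.g. French

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# MMS keeps one shared backbone and swaps in small per-language adapter weights.
processor.tokenizer.set_target_lang(LANG)
model.load_adapter(LANG)

audio = np.random.randn(16_000).astype(np.float32)  # placeholder: 1 s of "speech"
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))
```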