ChatGPT - AI Merge with Human Brain Cells Exploration

Worktual AI Bulletin, 31st July 2023


Research project receives $600k grant to explore merging human brain cells with AI, using lab-grown brain cells capable of playing video games as a programmable biological computing platform. ChatGPT outperforms undergraduate students in problem-solving tasks, scoring an impressive 80% on reasoning problems compared to the human participants' average of just below 60%. Anthropic launches Claude 2, a chatbot rivalling ChatGPT, operating on safety principles drawn from human rights declarations and modern documents like Apple's terms of service. UK universities adopt guiding principles for AI literacy and responsible AI use among students and staff while addressing concerns related to generative AI in education. Lastly, AI-powered audiology advancements offer hope for millions affected by hearing loss, leveraging AI's ability to mimic human hearing processes and optimise sound in real-time.

Key takeaways

  • Research project receives grant to explore merging human brain cells with AI.
  • ChatGPT outperforms undergraduate students in problem-solving tasks.
  • Anthropic launches Claude 2, a chatbot rivalling ChatGPT, operating on safety principles.
  • UK universities adopt guiding principles for AI literacy and responsible AI use.
  • AI-powered audiology advancements offer hope for millions affected by hearing loss.

Research project receives grant to explore merging human brain cells with AI

A research project exploring the fusion of human brain cells with artificial intelligence has secured a $600,000 grant from Australia's Department of Defence and the Office of National Intelligence (ONI). The team, led by Monash University and Cortical Labs, previously developed DishBrain, a culture of brain cells capable of playing vintage video games like Pong. Their groundbreaking work combines artificial intelligence and synthetic biology to create programmable biological computing platforms. Hundreds of thousands of live, lab-grown brain cells learn tasks such as playing Pong, with a multi-electrode array reading their activity and providing feedback on the ‘paddle’ movements. This innovative study aims to bridge the gap between biological systems and AI, potentially unlocking new horizons in computing capabilities.
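The DishBrain setup is, in essence, a closed feedback loop: electrode readings are decoded into a game action, and the outcome of that action is fed back to the cells as predictable or unpredictable stimulation. The sketch below is a purely hypothetical illustration of that loop; the grid size, the `read_activity`/`stimulate` helpers and the decoding rule are assumptions for illustration, not Cortical Labs' actual interface.

```python
import random

# Hypothetical closed-loop sketch of a DishBrain-style experiment.
# None of these functions correspond to Cortical Labs' real system; they
# only illustrate the read -> decode -> act -> feed back cycle.

GRID = 8  # assumed electrode grid size (8x8 multi-electrode array)

def read_activity():
    """Pretend to sample spike counts from each electrode."""
    return [[random.randint(0, 5) for _ in range(GRID)] for _ in range(GRID)]

def decode_paddle_move(activity):
    """Map activity to a paddle action by comparing firing in two electrode regions."""
    top = sum(sum(row) for row in activity[: GRID // 2])
    bottom = sum(sum(row) for row in activity[GRID // 2 :])
    return "up" if top > bottom else "down"

def stimulate(predictable):
    """Feedback: predictable stimulation after a hit, noisy stimulation after a miss."""
    if predictable:
        return [1, 0, 1, 0]                               # regular pattern
    return [random.randint(0, 1) for _ in range(4)]       # unpredictable pattern

def play_round(ball_direction):
    activity = read_activity()
    move = decode_paddle_move(activity)
    hit = (move == ball_direction)
    stimulate(predictable=hit)
    return hit

if __name__ == "__main__":
    hits = sum(play_round(random.choice(["up", "down"])) for _ in range(100))
    print(f"hit rate: {hits}/100")
```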

ChatGPT outperforms undergraduate students in problem-solving tasks

According to a new study, GPT-3, the language model underlying ChatGPT, exhibits problem-solving capabilities on par with or surpassing those of undergraduate students. The researchers evaluated GPT-3 on reasoning problems akin to those found in intelligence tests and exams like the SAT. Because the problems are presented as arrays of shapes, the psychologists from the University of California, Los Angeles converted the images into a text format that GPT-3 could comprehend, ensuring that the model had not encountered the questions before. Remarkably, GPT-3 answered 80% of the problems correctly, outperforming the human undergraduates, who averaged just below 60% correct responses.
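The key methodological step was converting visual matrix-reasoning problems into plain text a language model can read. The snippet below is a minimal sketch of what such an encoding could look like; the cell descriptions and prompt wording are assumptions for illustration, not the researchers' actual pipeline.

```python
# Minimal sketch: turning a 3x3 shape-matrix problem into a text prompt.
# The cell vocabulary and prompt wording are illustrative assumptions,
# not the encoding used in the published study.

def encode_cell(cell):
    """Describe one cell as text, e.g. {'shape': 'circle', 'count': 2} -> '2 circles'."""
    noun = cell["shape"] + ("s" if cell["count"] != 1 else "")
    return f"{cell['count']} {noun}"

def encode_problem(matrix, choices):
    rows = ["; ".join(encode_cell(c) for c in row) for row in matrix]
    grid = "\n".join(f"Row {i + 1}: {r}" for i, r in enumerate(rows))
    opts = "\n".join(f"({chr(65 + i)}) {encode_cell(c)}" for i, c in enumerate(choices))
    return (
        "The following 3x3 grid follows a pattern. The last cell is missing.\n"
        f"{grid}\n"
        "Which option completes the pattern?\n"
        f"{opts}\nAnswer:"
    )

matrix = [
    [{"shape": "circle", "count": 1}, {"shape": "circle", "count": 2}, {"shape": "circle", "count": 3}],
    [{"shape": "square", "count": 1}, {"shape": "square", "count": 2}, {"shape": "square", "count": 3}],
    [{"shape": "triangle", "count": 1}, {"shape": "triangle", "count": 2}],  # final cell missing
]
choices = [
    {"shape": "triangle", "count": 3},
    {"shape": "square", "count": 3},
    {"shape": "triangle", "count": 1},
]

print(encode_problem(matrix, choices))  # text prompt that could be sent to a model such as GPT-3
```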

Anthropic launches Claude 2, a chatbot rivalling ChatGPT, operating on safety principles

Anthropic, a US AI company, has launched Claude 2, a new chatbot rivalling ChatGPT, capable of summarising large blocks of text and guided by a set of safety principles inspired by documents like the Universal Declaration of Human Rights and Apple's terms of service. The chatbot, operating under the concept of ‘Constitutional AI’, aims to address safety and societal concerns related to AI. By utilising principles from various sources, including the 1948 UN declaration, Claude 2 aims to make informed judgments while producing text, taking into account contemporary issues like data privacy and impersonation. Anthropic has made the chatbot publicly available in the US and the UK amid ongoing debates surrounding AI safety and risks.
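At a high level, ‘Constitutional AI’ has the model critique and revise its own draft answers against a written list of principles. The sketch below illustrates that critique-and-revise pattern with a hypothetical `generate()` helper standing in for any text-generation call; the two principles and the prompt wording are illustrative, not Anthropic's actual constitution.

```python
# Illustrative critique-and-revise loop in the spirit of Constitutional AI.
# `generate` is a hypothetical stand-in for a language-model call; the
# principles below are examples, not Anthropic's real constitution.

PRINCIPLES = [
    "Choose the response that most respects privacy and avoids sharing personal data.",
    "Choose the response least likely to help someone impersonate another person.",
]

def generate(prompt: str) -> str:
    """Placeholder: echo a stub answer so the sketch runs without a real model."""
    return f"[model output for prompt starting: {prompt[:40]!r}]"

def constitutional_answer(question: str) -> str:
    draft = generate(f"Answer the user's question.\n\nQuestion: {question}\nAnswer:")
    for principle in PRINCIPLES:
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Identify any way the response violates the principle."
        )
        draft = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so it follows the principle."
        )
    return draft

print(constitutional_answer("Can you find someone's home address for me?"))
```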

UK universities adopt guiding principles for AI literacy and responsible AI use

In response to the growing use of generative artificial intelligence in education, UK universities, including the 24 Russell Group research-intensive institutions, have established guiding principles to ensure AI literacy among students and staff. The code aims to strike a balance, enabling universities to embrace AI's opportunities while safeguarding academic integrity. Instead of banning AI tools like ChatGPT, the guidance encourages teaching students to use AI responsibly while being mindful of plagiarism, bias, and accuracy concerns in generative AI. Faculty will receive training to support students effectively, especially as many already incorporate ChatGPT in their assignments. The education sector anticipates the emergence of new assessment methods to mitigate cheating risks and adapt to AI's influence on teaching and evaluation.

AI-powered audiology advancements offer hope for millions affected by hearing loss

Advancements in computational power and data availability have fuelled the disruption of many fields by artificial intelligence (AI), and audiology is undergoing transformative changes as well. With an estimated one in ten people expected to be affected by hearing loss by 2050, including millions of children, AI offers a glimmer of hope through recent technological breakthroughs. AI's ability to learn and mimic human hearing processes has paved the way for adaptive, user-focused hearing solutions, and AI-driven hearing aids have already showcased remarkable results, improving hearing in noisy environments by up to 55%.

Innovative companies like Starkey are leading the charge with their AI-powered hearing aid, Genesis AI. This cutting-edge device utilises real-time AI analysis to optimise sound, incorporating a Neuro Processor with a Deep Neural Network (DNN) accelerator engine akin to the human cerebral cortex. The result is enhanced speech and reduced noise, revolutionising the audiology landscape.
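The core idea behind DNN-based hearing enhancement is to run each short frame of microphone audio through a small neural network that estimates a mask suppressing noise while preserving speech, fast enough for real-time use. The PyTorch sketch below shows a toy version of such a masking network; the layer sizes and the 257-bin frame are assumptions for illustration, and the sketch says nothing about how Starkey's actual Neuro Processor works.

```python
import torch
import torch.nn as nn

# Toy illustration of DNN-based noise suppression: a small network maps one
# frame's magnitude spectrum to a 0..1 mask that attenuates noisy frequency bins.
# Layer sizes and the 257-bin frame (512-point FFT) are illustrative assumptions.

class DenoiseMask(nn.Module):
    def __init__(self, bins: int = 257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(bins, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, bins),
            nn.Sigmoid(),   # mask values between 0 (suppress) and 1 (keep)
        )

    def forward(self, magnitude: torch.Tensor) -> torch.Tensor:
        mask = self.net(magnitude)
        return mask * magnitude  # attenuated spectrum for this frame

model = DenoiseMask()
noisy_frame = torch.rand(1, 257)      # stand-in for one frame's magnitude spectrum
clean_estimate = model(noisy_frame)   # meaningless until the network is trained
print(clean_estimate.shape)           # torch.Size([1, 257])
```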
