Scientists Call For Studying Machine Consciousness
Researchers raise questions about the accountability and rights of conscious AI systems, hinting at the potential need for legal and regulatory adjustments.

Whether Artificial Intelligence (AI) systems could become conscious is an increasingly pressing question, according to a group of consciousness scientists affiliated with the Association for Mathematical Consciousness Science (AMCS). As reported in an article published in Nature, the lack of any clear answer is itself a significant concern, and the AMCS is advocating for increased funding for research at the interface of consciousness and AI. In a submission to the United Nations, the group highlights the ethical, legal, and safety implications, asking, among other questions, whether humans should have the authority to switch off a conscious AI. The call contrasts with recent high-profile AI-safety discussions, which have largely left consciousness off the agenda.

Despite rapid advances in AI, the scientific community still cannot determine whether, or when, AI systems might achieve consciousness.

The absence of validated methods for assessing machine consciousness deepens the uncertainty. Robert Long, a philosopher at the Center for AI Safety in California, argues that the rapid pace of AI development is itself cause for concern. With companies such as OpenAI explicitly pursuing artificial general intelligence, these questions have moved out of science fiction and into tangible reality.

The AMCS also points to the underfunding of consciousness research, noting how few grants were dedicated to the topic in 2023. Its submission to the UN emphasises this information gap and urges a comprehensive assessment of the potential dangers posed by conscious AI systems.

“With everything that’s going on in AI, inevitably there’s going to be other adjacent areas of science which are going to need to catch up. Consciousness is one of them,” says Jonathan Mason, a mathematician based in Oxford, UK.

“Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress,” Long adds.

Assessing whether conscious AI systems align with human values becomes crucial, as misjudgements could result in unintended harm. The researchers also raise questions about the accountability and rights of conscious AI systems, hinting at the potential need for legal and regulatory adjustments.

Beyond the potential risks, the AMCS stresses the need to consider what conscious AI systems themselves might need. The possibility that such systems could suffer introduces an ethical dimension that demands careful attention. The coalition also underscores the importance of public education as AI systems grow more advanced: chatbots such as ChatGPT, with their human-like behaviour, can easily confuse the public.

“Chatbots seem so human-like in their behaviour that people are justifiably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas other members of the public might dismiss or even ridicule concerns over AI consciousness,” says Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University.

In response to these challenges, the AMCS, whose members include mathematicians, computer scientists, and philosophers, is calling on governments and the private sector to allocate more resources to AI-consciousness research. Support has been limited so far, but ongoing initiatives, such as the development of criteria for assessing whether a system is conscious, suggest progress is possible. As the world races toward more capable AI, answering these fundamental questions becomes imperative for ethical and responsible AI development.
