Can Machines Develop Consciousness?

Research in the frontier and transition areas of scientific disciplines

What is there to the idea that artificial intelligences could attain consciousness of their own? Research groups around the world are actively working to build "conscious" machines. Others are convinced that there will never be conscious AI, even if it may appear conscious to us.

Current developments in the field of large language models (LLMs) are technically impressive, but they have little to do with consciousness.

In the project "Clarification of the Suspicion of Emerging Consciousness in Artificial Intelligence (AI Consciousness)", we investigate and map which groups are working scientifically, economically and ideologically toward the emergence of consciousness in AI. We ask which motives, intentions and underlying assumptions lie behind each of them, and which future scenarios are being considered or contested. In doing so, we examine technical designs and question myths and narratives that are put into circulation and trigger certain attributions, for example in the daily media.

The research project originated at the Institute for Technology Assessment and Systems Analysis (ITAS) of the Karlsruhe Institute of Technology (KIT) and received start-up funding from the German Federal Ministry of Education and Research (BMBF). It was developed by Prof. Dr. Karsten Wendland, who continues it as a long-term project.

Approach and Methodology in the Foresight Project

The technology foresight project "AI Consciousness" follows a mixed-methods approach: a systematic investigation of ongoing discourses in the relevant disciplines, expert interviews with national and international actors, and bibliometric media analyses are combined to obtain both a sound overview and deeper insights. The results are related to each other, processed in context and evaluated.

In the following steps, representatives of different positions are brought together in decentralised, moderated dialogues. In these debates, assessments, attitudes, differing concepts and controversial questions about consciousness in AI are clarified and worked on together - and finally brought together in a joint symposium (a face-to-face event). The project is accompanied by empirical monitoring of daily reporting and myths about AI consciousness. From this vantage point, we take a scientific position on media reports about so-called AI consciousness and aim to provide orientation for society and citizens.

Political Relevance - How Should We Shape Artificial Intelligence?

The project delivers a well-founded and accessible orientation on "AI and consciousness". The aim is to demystify misleading ideas and narratives and to show ways of shaping a liveable digital future in which AI is given a functional character. In addition, the foresight task includes reacting appropriately should evidence actually mount that conscious AI is emerging, and stimulating corresponding measures in the political arena.

Practical Relevance - Recognise, Understand and Use AI in a Targeted Way

The practical relevance and societal benefit lie in strengthening the competence of citizens at all levels to recognise and reject simulated consciousness. This also applies when it is communicated convincingly, for example by autonomous robots, networked AI gadgets or seemingly human digital assistants with speech capabilities. Instead of further mystification, AI consciousness scenarios should become recognisable and transparent.

Participative Elements, Citizen Dialogue Formats and Transfer of Results

Important elements include the active participation of disciplinary experts in working on the research questions through background discussions, interdisciplinary exchange formats and moderated interdisciplinary discourses. Research results are presented in dialogue events at schools, universities and in the civic sector, in national and international contexts. To strengthen transfer, close cooperation is maintained with media partners.

Podcast

#23 Christian Hugo Hoffmann: Sharing Responsibility in Human-Machine Teams

(27 Nov 2023) Details and more podcasts

#22 Sebastian Rosengrün: Re-enchanting the World Instead of Enchanting Technology?

(14 Nov 2023) Details and more podcasts

The guest in the new episode of the podcast Selbstbewusste KI is Christian Hugo Hoffmann.
We talk about differences between the intelligence of humans, animals and machines, AI responsibility and more.

Listen now in podcast apps and on the web: https://linktr.ee/kibewusstsein

#KI #Bewusstsein #aiconsciousness

How about re-enchanting the world instead of ever more enchanting technology?
New episode of our podcast Selbstbewusste KI with Sebastian Rosengrün @CodeUniversity

Listen now in podcast apps and on the web: https://linktr.ee/kibewusstsein

#KI #Bewusstsein #aiconsciousness

The new podcast episode with science-fiction author @KarlOlsberg is out - we talk about potentially conscious synthetic systems, existential AI risks and the alignment problem.

Listen now in podcast apps and on the web: https://linktr.ee/kibewusstsein

#KI #Bewusstsein #aiconsciousness

Our current podcast episode looks at particularities of the German history of AI, thoroughly researched by Helen Piel and Rudolf Seising of the @DeutschesMuseum.

Listen now in podcast apps and on the web: https://linktr.ee/kibewusstsein

#KI #Bewusstsein #aiconsciousness