Podcast

"Self-Conscious AI" – Science Podcast at the Boundary between Man and Machine

The discussions in our science podcast have been conducted with the following specialist experts: Andreas Bischof, Antonio Chella, Andreas Eschbach, Sascha Benjamin Fink, Thomas Fuchs, Christian Hugo Hoffmann, Joachim Keppler, Christof Koch, Janina Loh, Thomas Metzinger, Klaus Mainzer, Vincent C. Müller, Karl Olsberg, Ralf Otte, Helen Piel, Sebastian Rosengrün, Frauke Rostalski, Rudolf Seising, Junichi Takeno, Christian Vater and Joachim Weinhardt.

We received great support in the episodes from our dedicated questioners: Urs Andelfinger, Jascha Bareis, Gerard Blommestijn, Murad Futehally, Armin Grunwald, Jennifer Heier, Hyeongjoo Kim, Jürgen Manner, Michael Mörike, Munish Sharma and Hinrich Thölken.

Can artificial intelligence develop consciousness? How could this work at all, and what would it mean for us? 


#23 Christian Hugo Hoffmann: Sharing responsibility in human-machine teams (German)

(27.11.2023)

The entrepreneur, philosopher, economist and publisher Christian Hugo Hoffmann sees blind spots in the development of AI. Together we talk about the differences in the intelligence of humans, animals and machines and how these differences motivated him to write his new book. We also discuss the extent to which intelligence research is on the wrong track, the unique characteristics of human intelligence, and simulation theory. For the future, Christian Hugo Hoffmann sees the attribution of responsibility not only to humans or machines, but also to human-machine teams.

Author: Karsten Wendland
Editing, recording and production: Karsten Wendland
Editorial assistance: Robin Herrmann

Licence: CC-BY

References mentioned in the podcast:

Website of Christian Hugo Hoffmann: https://www.christian-hugo-hoffmann.com

The Quest for a Universal Theory of Intelligence: https://www.degruyter.com/document/doi/10.1515/9783110756166/html

Human Intelligence and Exceptionalism Revisited by a Philosopher: 100 Years After 'Intelligence and its Measurement': https://www.ingentaconnect.com/content/imp/jcs/2022/00000029/f0020011/art00003

Judea Pearl: http://bayes.cs.ucla.edu/jp_home.html

Causality by Judea Pearl: http://bayes.cs.ucla.edu/BOOK-2K/

The Book of Why by Judea Pearl: http://bayes.cs.ucla.edu/WHY/

Yuval Noah Harari: https://www.ynharari.com/de/

Reality Plus by David Chalmers: https://www.suhrkamp.de/buch/david-j-chalmers-realitaet-t-9783518588000

AlphaGo: https://deepmind.google/technologies/alphago/

Technological Brave New World? Eschatological Narratives on Digitization and Their Flaws by Christian Hugo Hoffmann: https://scholarlypublishingcollective.org/psup/posthuman-studies/article-abstract/6/1/53/343530/Technological-Brave-New-World-Eschatological?redirectedFrom=fulltext

#22 Sebastian Rosengrün: Re-enchanting the world instead of magical technology? (German)

(14.11.2023)

Philosopher and linguist Sebastian Rosengrün works on the opportunities and dangers of artificial intelligence in interaction with society. He warns against recognizing chatbots as moral authorities and insists on showing society the possible consequences of such attributions. Instead of indulging in magical science fiction stories, we should confront ourselves with real undesirable developments - in order to avert them. We talk about essays written by chatbots, the fallacies that can be projected into AI development, and why you should listen to the podcast Selbstbewusste KI.

Author: Karsten Wendland
Editing, recording and production: Karsten Wendland
Editorial assistance: Robin Herrmann

Licence: CC-BY

#21 Karl Olsberg: I expect uncontrolled malicious AIs rather than the solution to the alignment problem. (German)

(8.11.2023)

In his novels, science fiction author Karl von Wendt (pseudonym Karl Olsberg) describes future worlds in which humans are confronted with modern technology. He shows how control over this technology is slipping away from them - to the point where our species is in danger of complete annihilation. Unlike in his novels, however, it is not possible to write a hero into our real world who can save humanity from extinction. Karl von Wendt is concerned about future AI developments, especially the challenges of aligning AI with different value concepts of different groups of people. His novels are intended to entertain readers, but also to warn them. He also explains why he co-signed the Future of Life Institute's 6-month moratorium on AI development.

Author: Karsten Wendland
Editing, recording and production: Karsten Wendland
Editorial assistance: Robin Herrmann

Licence: CC-BY

References mentioned in the podcast:

Current book by Karl Olsberg: Virtua

Website of Karl Olsberg (aka Karl von Wendt).

The Future of Life Institute's call for a 6-month moratorium on AI development.

Canadian computer scientist and AI researcher Yoshua Bengio.

8 writing rules from US science fiction writer Kurt Vonnegut.

The essay "What Is It Like to Be a Bat?" by US philosopher Thomas Nagel.

The movie The Matrix in the Internet Movie Database (IMDb).

The young adult book Infernia by Karl Olsberg, in which the characters in the video game have a consciousness.

OpenAI founder Sam Altman.

The AI system AlphaFold, which can predict the folding structure of proteins.

 

#20 Helen Piel & Rudolf Seising: German AI history was much about "mind" and little about consciousness. (German)

(1.11.2023)

Helen Piel and Rudolf Seising are historians of science at the Deutsches Museum in Munich. Together we talk about how research in the field of artificial intelligence has changed over the past decades: the emergence of the term Artificial Intelligence at the Dartmouth Conference in 1956, the AI winter of the early 1970s, and attempts to establish "Intellectics" as the connecting discipline between computer science and cognitive science. We also discuss blind spots in science, why research has often been, and still is, pursued in the wrong directions for decades, and how old ideas can be brought back into focus.

Author: Karsten Wendland
Editing, recording and production: Karsten Wendland
Editorial assistance: Robin Herrmann

Licence: CC-BY

References mentioned in the podcast:

Registration for the final event of the IGGI research project in Munich: ⁠https://forms.gle/Lu38A9YZTTPi9xS7A⁠

Helen Piel: ⁠https://www.deutsches-museum.de/forschung/person/helen-piel-2⁠

Rudolf Seising: ⁠https://www.deutsches-museum.de/forschung/person/rudolf-seising-2⁠

IGGI - Engineering Spirit and Intellectual Engineers: A History of Artificial Intelligence in the Federal Republic of Germany: (IGGI – Ingenieursgeist und Geistesingenieure: Eine Geschichte der Künstlichen Intelligenz in der Bundesrepublik Deutschland): ⁠https://www.deutsches-museum.de/forschung/forschungsinstitut/projekte/detailseite/iggi-ingenieur-geist-und-geistes-ingenieure⁠

IGGI Subproject: Processing Images by Dinah Pfau: ⁠https://www.deutsches-museum.de/forschung/person/dinah-pfau-2⁠

Dartmouth Conference 1956: ⁠https://aitoolsexplorer.com/ai-history/the-dartmouth-conference-the-event-that-shaped-ai-research/⁠

René Descartes "Theory of the Two Worlds" (Dualism): ⁠https://www.dasgehirn.info/entdecken/meilensteine/rene-descartes-vater-der-leib-seele-theorie⁠

Wolfgang Bibel: ⁠http://www.intellektik.de/index/WolfgangBibel.htm⁠

Study Guide AI (Bibel/Eisinger/Schneeberger/Siekmann): ⁠https://link.springer.com/chapter/10.1007/978-3-642-72963-8_1⁠

Perceptron by Frank Rosenblatt: ⁠https://blog.hnf.de/frank-rosenblatt-und-das-perceptron/⁠

The Golem myth: https://www.nzz.ch/nzzas/nzz-am-sonntag/kultur-mythos-golem-ld.120179

Josef „Sepp“ Hochreiter: ⁠https://www.jku.at/institut-fuer-machine-learning/ueber-uns/team/sepp-hochreiter/⁠

H. Piel, R. Seising (2023): Perspectives on Artificial Intelligence in Europe. IEEE Annals of the History of Computing. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10197444

#19 Joachim Keppler: The brain is not the seat of consciousness - its coupling to the field of consciousness generates conscious states. (German)

(24.10.2023)

The theoretical physicist Joachim Keppler researches consciousness with a rigorous natural-science approach. In the podcast, the director of the DIWISS Institute explains how consciousness fits into physics, why consciousness is not located exclusively in the brain, and why it is worthwhile instead to regard the brain as a coupler to a physically describable field of consciousness. We discuss how consciousness understood in this way could be manipulated or even hacked, and of course to what extent artificial consciousness would be possible along these lines.

Author: Karsten Wendland
Editing, field recording and production: Karsten Wendland
Editorial assistance: Robin Herrmann

Licence: CC-BY

References mentioned in the podcast:

DIWISS Research Institute: https://www.diwiss.de

Scientific Conference "The Science of Consciousness 2023": https://tsc2023-taormina.it/

Keppler, Joachim (2023): Scrutinizing the Feasibility of Macroscopic Quantum Coherence in the Brain: A Field-Theoretical Model of Cortical Dynamics. Frontiers in Physics 11:1181416. doi: 10.3389/fphy.2023.1181416 (June 2023).
https://www.frontiersin.org/articles/10.3389/fphy.2023.1181416/full

Keppler, Joachim (2021): Building Blocks for the Development of a Self-Consistent Electromagnetic Field Theory of Consciousness. Frontiers in Human Neuroscience 15:723415. doi: 10.3389/fnhum.2021.723415 (September 2021).
https://www.frontiersin.org/articles/10.3389/fnhum.2021.723415/full

Keppler, Joachim (2020): The Common Basis of Memory and Consciousness: Understanding the Brain as a Write-Read Head Interacting With an Omnipresent Background Field. Frontiers in Psychology 10:2968. doi: 10.3389/fpsyg.2019.02968 (January 2020).
https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02968/full

Original paper on Shannon and Weaver's sender-receiver model: A Mathematical Theory of Communication, by C. E. Shannon (1948).
https://web.archive.org/web/19980715013250/http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf

Zero-Point-Field:
https://www.scientificamerican.com/article/follow-up-what-is-the-zer/#

#18 Florian Arnold: AI Blessing Robots for Lost Sheep - Delegated Enjoyment, Media Gimmick or Dehumanization? (German)

(17.10.2023)

Philosopher and design theorist Florian Arnold takes a critical stance towards modern blessing robots or AI religions like Way of the Future. Together we talk about what motivates people to accept such offerings, how a deity made of source code works, how these structures come into being, as well as the future of the church. We also talk about interpassivity, how GPT created a worship service, and why you can't just pull the plug on an AI god.

Concept development, recording and production: Annelie Hirth, Robin Herrmann
Editor: Karsten Wendland

Licence: CC-BY

#17 Sascha Benjamin Fink: Compassion could be a useful technical function for machines. (German)

(10.10.2023)

Sascha Benjamin Fink knows a lot about pain experience, consciousness and psychedelics. The professor of neurophilosophy at the University of Magdeburg does research on neurophenomenal structuralism and can understand why one can suffer from never owning a Ferrari. We talk about the possibility of suffering machines and the "hard problem" of consciousness. He also reports on his new research project PsychedELSI, which is producing its first impressive results in controlled therapy with psychedelics. Together, we consider to what extent this could, in the future, also be something for conscious AI systems - High AI.

Author: Karsten Wendland
Editing, recording and production: Karsten Wendland
Editorial assistance: Robin Herrmann

Licence: CC-BY

#16 We should create technology that is retrievable. In conversation with Vincent C. Müller. (German)

(3.10.2023)

"Humane AI" is in the mission statement of Vincent C. Müller, director of the Centre for Philosophy and AI Research (PAIR) at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). We need to design technology in such a way that we can take it back if necessary when unforeseen technology consequences occur or values and goals change so that the technology no longer fits. For the Humboldt professor, ethical reflection is part of the professional actions of everyone who designs technology and puts it into the world. AI and digital systems should be used to actively address the fundamental challenges facing humanity.

Author: Karsten Wendland
Editing, recording and production: Karsten Wendland
Editorial assistance: Robin Herrmann

Licence: CC-BY

References mentioned in the podcast:

Centre for Philosophy and AI Research (PAIR) at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU): https://www.pair.fau.eu

Alan Turing Institute: https://www.turing.ac.uk

Vincent C. Müller, Aladdin Ayesh (2012): Revisiting Turing and His Test: Comprehensiveness, Qualia, and the Real World. https://philpapers.org/archive/MLLRTA.pdf

On the Ford Pinto scandal: https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/ford-pinto-case

#15 The development of conscious systems is not particularly useful. In conversation with Klaus Mainzer. (German)

(26.09.2023)

Mathematician, physicist and philosopher Klaus Mainzer does not think it makes much sense to build machines that have an ego consciousness. For him, on the other hand, it is important that the population becomes more aware of technology, that functional principles are better understood and that limits can be better assessed. In this way, it might be possible to overcome the widespread skepticism about technology. Points of reference in the discussion include Leibniz, Hilbert, Gödel, Penrose and, of course, current technical AI developments.

Author: Karsten Wendland
Recording Manager and Production: Karsten Wendland

Licence: CC-BY

References mentioned in the podcast:

Klaus Mainzer, Reinhard Kahle: Grenzen der KI – theoretisch, praktisch, ethisch. Springer 2022.

Alan Turing: On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, 2, 42 (1937), pp. 230–265. Online version

Gottfried Wilhelm Leibniz: Theodizee (original title: Essais de Théodicée). Amsterdam 1710.

ChatGPT: https://chat.openai.com

Fundamental rights (Grundrechte): https://www.bundestag.de/gg/grundrechte

Kurt Gödel: Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. In: Monatshefte für Mathematik und Physik 38, 1931, pp. 173–198, doi:10.1007/BF01700692.

Unsolved problems in mathematics: https://mathworld.wolfram.com/topics/UnsolvedProblems.html

Pause Giant AI Experiments: An Open Letter, https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Roger Penrose: Computerdenken: Die Debatte um Künstliche Intelligenz, Bewusstsein und die Gesetze der Physik. Spektrum 2009.

#13 The big season finale. Panel discussion on conscious AI. School students in conversation with experts. (German)

(16.11.2021)

Care robots, self-driving cars and soulful beings - these are just a few of the many exciting topics that school students discussed together with experts in the last episode of the first season of the podcast "Self-Conscious AI". Some of our podcast guests from the previous episodes came together once again to enter into a dialogue with the students on an equal footing.

Discussion guests: Andreas Eschbach, Thomas Fuchs, Janina Loh, Thomas Metzinger, Ralf Otte, Christian Vater and Joachim Weinhardt

Publisher: Karsten Wendland

Editorial team: Tannaz Afshari Bakhsh, Franka Bockrath, Laura Müller, Anna Pallakst, Lea Riemann, Emma Rönnebeck, Eva Russow, Tamás Svajda and Renée Weisbach
Moderation: Lara Wolf and Luis Tillmann

Recording Manager: Karsten Wendland
Production: Tobias Windmüller

Licence: CC-BY

 

References mentioned in the podcast:

Michael Crichton: Prey, HarperCollins New York 2002. Reading sample

Fraunhofer Care-O-bot: mobile robot assistants for the active support of people in the home environment.

TED-Talk by Kate Darling (MIT): Why we have an emotional connection to robots.

TA-Swiss: Robotik in Betreuung und Gesundheitsversorgung, vdf Hochschulverlag 2013, p. 192.

Jürgen Habermas (1981): Theorie des kommunikativen Handelns. Band 2: Zur Kritik der funktionalistischen Vernunft. 8. Aufl., Frankfurt a.M. 1981.

Martin Krammer: "Unser Schreibzeug arbeitet mit an unseren Gedanken" ("Our writing tools are also working on our thoughts"). Article by the University of Vienna about Friedrich Nietzsche.

Stefan Krempl: Ethikrat: Erosion von Verantwortung bei Robotereinsatz in der Pflege verhindern (article at Heise from 11.03.2020).

Thomas Fuchs: Verteidigung des Menschen. Grundfragen einer verkörperten Anthropologie, Suhrkamp 2020.

 

#12 Robots will soon have consciousness. In conversation with Junichi Takeno. (German)

(9.11.2021)

Japanese robotics expert Prof. Junichi Takeno from Meiji University in Tokyo is convinced that conscious robots will soon be built. He himself has already constructed robots that can recognize their mirror image. In the latest official episode of our podcast Self-Conscious AI, he explains the impressive experiments he has already conducted with his robots.

Questioner in this episode:
Jürgen Manner, Karlsruhe, Germany

Author: Karsten Wendland
Assistant Editors: Robert Sinitsyn, Matthias Gerz
Production: Matthias Gerz
Recording Manager: Karsten Wendland
Voice Actor: Konstantin Kleefoot

Licence: CC-BY, DOI: https://doi.org/10.5445/IR/1000139706

References mentioned in the podcast:

Book by Junichi Takeno: Creation of a Conscious Robot. Mirror Image Cognition and Self-Awareness: https://www.taylorfrancis.com/books/mono/10.1201/b12780/creation-conscious-robot-junichi-takeno.

Summary article about Junichi Takeno's research on Meiji University's website: https://english-meiji.net/articles/132/.

Article on "Astro Boy" at SRF.

The understanding of consciousness according to philosopher and mathematician Edmund Husserl: Article by Prof. Dr. Christian Beyer, Georg-August-Universität Göttingen.

Book by physicist Michio Kaku: The Future of the Mind: https://www.researchgate.net/publication/295829741_Michio_Kaku_Future_of_the_mind.

Experiments by Junichi Takeno in the International Journal on Smart Sensing and Intelligent Systems 1(4) 2008: https://www.researchgate.net/publication/228740023_A_Robot_Succeeds_in_100_Mirror_Image_Cognition.

Homepage of Junichi Takeno:
http://www.rs.cs.meiji.ac.jp/en/member/takeno.html

#11 Most SF novels are intended as warnings, not as instructions. In conversation with Andreas Eschbach. (German)

(24.11.2020)

For the author Andreas Eschbach, human beings without technology are unthinkable. However, the freedom of the individual is both enabled and threatened by technology - a tension he takes up and works through in his stories across different genres. What we are discussing today as "Conscious AI" is a basic motif with a long tradition in his craft. He can tell us how media narratives have always humanized computers and robots, and he knows from his own experience that program code always reveals something about those who wrote it.

Questioner in this episode:
Dr. Hinrich Thölken, Ambassador and Special Representative for Digitization and Digital Transformation at the Federal Foreign Office, Germany

Author: Karsten Wendland
Assistant Editor, Recording Manager and Production: Kayla Zoller

Licence: CC-BY, DOI: https://doi.org/10.5445/IR/1000126701

References mentioned in the podcast:

Isaac Asimov: All Robot Stories, Bastei Lübbe 2007.

Article about "Astro Boy" at SRF.

Andreas Eschbach: NSA, Bastei Lübbe 2020.

Andreas Eschbach's Out trilogy: Black Out 2010, Hide Out 2011, Time Out 2012, Arena Verlag.

The computer program AlphaGo, developed by DeepMind, which plays the board game Go.

Homepage of Andreas Eschbach:
http://www.andreaseschbach.de/

#10 The greatest hope would be to prevent dystopia. In conversation with Joachim Weinhardt. (German)

(17.11.2020)

The theologian Prof. Dr. Joachim Weinhardt, PH Karlsruhe, also places questions about artificial intelligence and consciousness in scientific and ethical contexts. From a biblical perspective, he says, it is not intended that man should create other creatures. In principle, however, he considers it possible that consciousness could also be based on different material conditions than those we humans have. But how could one guarantee that an artificially created consciousness would not become an unhappy consciousness? Looking ahead, one should already prevent the preliminary stages of developments that could possibly end in a take-off.

Questioner in this episode:
Prof. Dr. Urs Andelfinger, Darmstadt University of Applied Sciences

Author: Karsten Wendland
Assistant Editor, Recording Manager and Production: Annalena Hörth

Licence: CC-BY, DOI: http://doi.org/10.5445/IR/1000126231

References mentioned in the podcast:

Joachim Weinhardt: Gott und die Welt. Schöpfungslehre und Eschatologie, Kohlhammer 2019.

Article at Zeit online about the AI church "Way of the future" founded by robotics expert Anthony Levandowski (from 18.11.2017).

Article by Alan M. Turing in Mind: "Computing Machinery and Intelligence".

Prof. Dr. Joachim Weinhardt on the website of the Karlsruhe University of Education:
https://www.ph-karlsruhe.de/personen/detail/Joachim_Weinhardt_945

#09 That robots make us believe they have emotions can be very important. In conversation with Janina Loh. (German)

(10.11.2020)

The philosopher Dr. Janina Loh at the University of Vienna examines inclusive approaches to thinking, at the end of which it no longer matters much whether someone is called a human being or a machine. The author of the suhrkamp book "Robot Ethics" traces emotional bonds between humans and robots across a wide range of fields of application, and the possible relationships that can arise between them. As a critical posthumanist, she suggests that diversity should also be taken into account in the production of robots and that varied robots should be built.

Questioner in this episode:
Jascha Bareis, M.A., Institute for Technology Assessment and Systems Analysis (ITAS) at the Karlsruhe Institute of Technology (KIT)

Author: Karsten Wendland
Assistant Editor, Recording Manager and Production: Matthias Gerz

Licence: CC-BY, DOI: https://doi.org/10.5445/IR/1000125862

References mentioned in the podcast:

Janina Loh: Roboterethik. Eine Einführung, suhrkamp taschenbuch 2019.

Article at Zeit.de about the robot researcher Kate Darling from MIT and the example of mine-clearing robots.

Thomas Nagel: essay "What is it like to be a bat?", 1974.

Article at ze.tt about a woman from Berlin who has a relationship with a model airplane.

Janina Loh at the Homepage of the University of Vienna:
https://ufind.univie.ac.at/de/person.html?id=61692

 

#08 We must build machines that have feelings. In conversation with Antonio Chella. (German)

(3.11.2020)

For the roboticist Prof. Dr. Antonio Chella, head of the Robotic Labs and the Research Center for Knowledge Technologies at the University of Palermo, consciousness goes far beyond the brain. For the editor of the Journal of Artificial Intelligence and Consciousness, only an empathic machine can also be an ethical machine. Therefore, it is desirable to build machines with emotions, in order to learn more about our own phenomenal consciousness from their behavior. In his observation, many AI researchers are not interested in philosophical debates, which is why he tries to build practical bridges between the disciplines and to support interdisciplinary discourse.

Questioner in this episode:
Prof. Dr. Armin Grunwald, Institute for Technology Assessment and Systems Analysis (ITAS) at Karlsruhe Institute of Technology (KIT)

Author: Karsten Wendland
Assistant Editor and Production: Annalena Hörth
Recording Manager: Tobias Windmüller
Voice Actor: Konstantin Kleefoot

Licence: CC-BY, DOI: http://doi.org/10.5445/IR/1000125589

References mentioned in the podcast:

Antonio Chella: Journal of Artificial Intelligence and Consciousness.

Quote from Woody Allen: "You rely too much on your brain. The brain is the most overrated organ."

The US-American science fiction film "Short Circuit", 1986.

Article at Zeit.de about the racist chatbot Tay (from 24.03.2020).

The science fiction drama Ex Machina by Alex Garland, 2015.

The science fiction film 2001: A Space Odyssey by Stanley Kubrick, 1968.

The black-and-white film Frankenstein from 1931, based on the novel by Mary Shelley.

The silent film Der Golem by Paul Wegener and Henrik Galeen, 1914.

Antonio Chella at the Homepage of the University of Palermo:
https://www.icar.cnr.it/en/associati-di-ricerca/esterno-1/

#07 Consciousness is a causal power and not a clever programming hack. In conversation with Christof Koch. (German)

(27.10.2020)

For neuroscientist Prof. Dr. Christof Koch, chief scientist at the Allen Institute for Brain Science in Seattle, there is a very close connection between the brain and consciousness. He does not believe it is possible to digitally simulate the structures of the human brain, but he does believe that it will eventually be possible to reconstruct it using technical materials. For him, consciousness is all the experiences, feelings and sensations that are ultimately always related to the brain. So we do not love with the heart, but in reality with the head.

Questioner in this episode:
Prof. Dr.-Ing. Ralf Otte, Ulm University of Applied Sciences

Author: Karsten Wendland
Assistant Editor, Recording Manager and Production: Tobias Windmüller

Licence: CC-BY, DOI: http://doi.org/10.5445/IR/1000125298

References mentioned in the podcast:

Christof Koch: The feeling of life itself. Why consciousness is widespread but can’t be computed, MIT Press 2020.

Aristoteles: Über die Seele. De Anima, Felix Meiner Verlag Hamburg 2017.

Article about flotation tanks at Faz.net (from 17.04.2017).

Information about the Integrated Information Theory.

Explanation of the terms "CPU" and "ALU" at ITWissen.info.

Homepage of Christof Koch:
https://christofkoch.com/

Christof Koch at the Homepage of the Allen Institute for Brain Science Seattle:
https://alleninstitute.org/what-we-do/brain-science/about/team/staff-profiles/christof-koch/

#06 We have to proceed with foresight and ask: What if? In conversation with Frauke Rostalski. (German)

(20.10.2020)

For the lawyer Prof. Dr. Dr. Frauke Rostalski, University of Cologne and member of the German Ethics Council, neither blame nor responsibility can be ascribed to AI robots - at least not by today's standards and technical realizations. AI systems can only represent parts of our thinking, and vice versa, humans do not function according to algorithms. However, we do not know what is yet to come, and we should think through various possible futures in good time. Future societies with e-persons, for example, are already debated scenarios.

Questioner in this episode:
Dr. Gerard Blommestijn, physicist and philosopher

Author: Karsten Wendland
Assistant Editor: Kayla Zoller
Set manager on site in beautiful Cologne: Karsten Wendland
Production: Kayla Zoller

Licence: CC-BY, DOI: http://doi.org/10.5445/IR/1000124841

References mentioned in the podcast:

Frauke Rostalski is a member of the German Ethics Council.

Article at Welt.de: Ingenieur „heiratet“ seine selbstgebaute Robo-Freundin (from 07.04.2017).

Quote from the British evolutionary biologist Clinton Richard Dawkins:
"We are survival machines – robot vehicles blindly programmed to preserve the selfish molecules known as genes. This is a truth which still fills me with astonishment." (From his book "The Selfish Gene", Oxford University Press 1976).

Article about the Libet experiment at Planet Wissen.

The Hippocratic Oath.

The Black Swan scenario by the author and former stock trader Nassim Nicholas Taleb. (Article at Risknet.de from 20.07.2020).

Prof. Dr. Dr. Frauke Rostalski at the Homepage of the University of Cologne:
https://rostalski.jura.uni-koeln.de/prof-dr-dr-frauke-rostalski

#05 On refrigerator lights, AI puberty and sneakers. In conversation with Thomas Metzinger. (German)

(13.10.2020)

For the philosopher Prof. Dr. Thomas Metzinger from the University of Mainz, it is by no means impossible that artificial intelligence could at some point become conscious. We should already start thinking about our responsibility and about ethics, for example in the form of a global charter with AI rules that also cover the topic of consciousness. AI systems learn a lot about us humans, but they do not always have to serve the common good. It is up to us to regain our digital sovereignty.

Questioner in this episode:
Dipl.-Phys. Michael Mörike, Integrata Foundation for the Humane Use of Information Technology

Author: Karsten Wendland
Assistant Editor, recording Manager and Production: Robert Sinitsyn

Licence: CC-BY, DOI: http://doi.org/10.5445/IR/1000124512

References mentioned in the podcast:

René Descartes: The Passions of the Soul (Les passions de l’âme), L. Heimann 1870.

The Minimal Phenomenal Experience (MPE) Project, led by Thomas Metzinger.

Article by Thomas Metzinger at Spektrum.de about machine consciousness, with reference to the American philosopher Hilary Putnam and his view on robot rights.

European High-Level Expert Group on Artificial Intelligence.

Thomas Metzinger: The Ego Tunnel: The Science of the Mind and the Myth of the Self, Piper Verlag 2014.

Digitization and the Green Deal of the EU Commission.

#04 The founding fathers of AI did not care about consciousness. In conversation with Christian Vater. (German)

(6.10.2020)

For the historian of technology Christian Vater, who conducts research in Heidelberg and Karlsruhe, AI research has archaeological qualities: he excavates finds that shape our current understanding of AI, even if they were meant differently in their time. He explains from which point of view human consciousness could be understood as a "Turing Onion" with an empty core, and what role female researchers played in the early days of AI.

Questioner in this episode:
Jennifer Heier, Head of UX-driven AI, SIEMENS

Author: Karsten Wendland
Assistant Editor and Set manager on site: Konstantin Kleefoot
Production: Tobias Windmüller

Licence: CC-BY, DOI: http://doi.org/10.5445/IR/1000124235

References mentioned in the podcast:

Entry about Zombies in the Stanford Encyclopedia of Philosophy.

Hubert L. Dreyfus: What Computers Still Can’t Do. A Critique of Artificial Reason, MIT Press 1992.

Nick Bostrom: Are You Living in a Computer Simulation?, Philosophical Quarterly 2003.

Article by Alan M. Turing in "Mind": Computing Machinery and Intelligence.

Alan M. Turing: On computable numbers.

The dramatized film biography about Alan M. Turing: The Imitation Game.

Article about the mathematician and cryptologist Irving John "Jack" Good and his idea of superintelligence at Spektrum.de. His essay Speculations Concerning the First Ultraintelligent Machine from 1966.

Isaac Asimov: All Robot Stories (Alle Robotergeschichten), Bastei Lübbe 2007.

Jennifer S. Light: When Computers Were Women, Technology and Culture 1999.

Article about the "Astro Boy" at SRF.

Gilles Deleuze, Félix Guattari: Rhizom, Merve Verlag 1977.

Christian Vater on the homepage of the Karlsruhe Institute of Technology: https://www.geschichte.kit.edu/mitarbeiter_1906.php

#03 There is no spirit in today's AI. In conversation with Ralf Otte. (German)

(29.09.2020)

Prof. Dr.-Ing. Ralf Otte, Ulm University of Applied Sciences, has been researching and developing AI and brain-related algorithms for decades. For the expert in neuromorphic systems, the mind cannot be described algorithmically. Even though AI is sometimes assumed to be able to "think", today's AI systems are highly trivial - and by no means contain a mind.

His book "Artificial Intelligence for Dummies" was recently published by Wiley-VCH.

Questioner in this episode:
Prof. Dr. phil. Hyeongjoo Kim, Chung-Ang University Seoul

Author: Karsten Wendland
Assistant Editor: Tobias Windmüller
Set manager on site in Ralf's Garden: Karsten Wendland
Production: Robert Sinitsyn

Licence: CC-BY, DOI: http://doi.org/10.5445/IR/1000124010

References mentioned in the podcast:

Ralf Otte: Vorschlag einer Systemtheorie des Geistes. Nicht-energetische Wellenfunktionen und Vorschlag zur Lösung des Geist-Körper-Problems, Cuvillier Verlag Göttingen 2016

At the Princeton Engineering Anomalies Research (PEAR) Institute of Princeton University, researchers studied how random or machine-controlled processes could be influenced by human consciousness. After its closure in 2007, similar research was continued in the Global Consciousness Project. The focus is on using random generators to verify the existence of a global consciousness.

The Austrian mathematician, philosopher and logician Kurt Gödel made fundamental contributions to predicate logic. Summary article on Spektrum.de (from 01.09.1999)

The US-American science fiction film "Short Circuit" from 1986 is about a military robot that suddenly becomes autonomous after being struck by lightning.

The German physicist Thomas Görnitz researches quantum theory and its philosophical interpretation.

The English mathematician and theoretical physicist Roger Penrose researches mathematical-physical problems of consciousness and AI.

Google builds quantum computers: a summary article by Zeit Online (from 28.09.2019) and Stern (from 23.10.2019).

Research on neuromorphic hardware at Fraunhofer Institute for Integrated Circuits (IIS).

Article by the British logician, mathematician, cryptanalyst and computer scientist Alan M. Turing in the British philosophical journal "Mind": "Computing Machinery and Intelligence".

Ralf Otte: Artificial Intelligence for Dummies, John Wiley & Sons 2019.

Expert Opinion of the Data Ethics Commission of the Federal Government (abridged version) of 2019, published by the Federal Ministry of the Interior, Building and Community and the Federal Ministry of Justice and Consumer Protection. Page 19 shows the pyramid addressed in the podcast, which classifies the potential dangers of algorithmic systems.

Ralf Otte on the homepage of Ulm University of Applied Sciences:
https://studium.hs-ulm.de/de/users/578627

#02 Robots get a Human Aura. In Conversation with Andreas Bischof. (German)

(22.09.2020)

Dr. Andreas Bischof, Chemnitz University of Technology, brings the technical and the social sciences together in his thinking. He explains how the fine interplay between humans and technology works and how mutual "understanding" can be made possible. His current book "Soziale Maschinen bauen" (Building Social Machines) has been published by transcript Verlag.

The opening quotation to learn by heart:
"If the reconstruction of the developmental practice of interactive technology is to become a method for that very practice, it must engage and compromise with the intricacies and complexities of developing technology for everyday social worlds." (Andreas Bishop)

Questioner in this episode: Prof. Dr. Munish Sharma,
MIT Maharashtra Institute of Technology, Aurangabad, India

Author: Karsten Wendland
Assistant Editor: Tobias Windmüller
Voice Actor: Konstantin Kleefoot
Recording Manager and Production: Johanna Müller

Licence: CC-BY, DOI: https://doi.org/10.5445/IR/1000123896

References mentioned in the podcast:

Andreas Bischof: Building social machines. Epistemic practices of social robotics, transcript Verlag 2017. https://www.transcript-verlag.de/978-3-8376-3881-3/soziale-maschinen-bauen/

"ReThiCare." Interdisciplinary research project on the challenges and opportunities of technical assistance systems in the care context. http://www.rethicare.info/

Sherry Turkle: Alone Together. Why We Expect More from Technology and Less From Each Other, Basic Books 2011. https://www.basicbooks.com/titles/sherry-turkle/alone-together/9780465093663/

Homepage of Andreas Bischof: https://andreasbischof.net/

 

#01 No Life, no Consciousness. In Conversation with Thomas Fuchs. (German)

(15.09.2020)

Consciousness in the computer is a simulation, says Prof. Dr. Dr. Thomas Fuchs, holder of the Karl-Jaspers Professorship for Philosophical Foundations of Psychiatry and Psychotherapy at the University of Heidelberg, in the first episode of our podcast Self-Conscious AI (Selbstbewusste KI). His current book "Verteidigung des Menschen" (Defense of Man) was recently published by suhrkamp.

Questioner in this episode: Murad Futehally, Mumbai/Ettlingen

Author: Karsten Wendland
Assistant Editor: Tobias Windmüller
Set Manager and Production: Konstantin Kleefoot

Licence: CC-BY, DOI: http://doi.org/10.5445/IR/1000123619

References mentioned in this episode:

The Japanese roboticist Hiroshi Ishiguro is researching human-like robots that look like him. http://www.geminoid.jp/en/index.html

"Ceci nest pas une pipe" is an oil painting by the Belgian painter René Magrid

The humanoid robot Sophia of the Hong Kong-based AI and robotics company Hanson Robotics: https://www.hansonrobotics.com/sophia/

Angela Merkel talks with Sophia (Article from the FAZ from 28.06.2018)

Sophia receives citizenship in Saudi Arabia. (Article in the WELT from 27.10.2017)

The US-American romantic science fiction film drama "Her" by Spike Jonze from 2013: https://www.warnerbros.com/movies/her#content

Thomas Fuchs on the homepage of Heidelberg University: https://www.uni-heidelberg.de/fakultaeten/philosophie/philsem/phaenomenologie/

Trailer for our podcast series Selbstbewusste KI.
Foretaste and outlook.
Production: Matthias Gerz / Tobias Windmüller
DOI: http://doi.org/10.5445/IR/1000122828