Artificial intelligence and the real existential risks: an analysis of the human limitations of control
DOI: https://doi.org/10.4013/fsu.2022.233.07

Abstract
Based on the hypothesis that artificial intelligence does not represent the end of human supremacy, since, in essence, AI only simulates and amplifies aspects of human intelligence in non-biological artifacts, this paper asks what the real risk to be faced is. Beyond the clash between technophobes and technophiles, it argues that possible malfunctions of an artificial intelligence (resulting from information overload, faulty programming, or randomness in the system) could signal the real existential risks, especially given that the biological brain, under the influence of automation bias, tends to accept uncritically whatever is determined by AI-based systems. Moreover, the argument defended here is that failures which go undetected, owing to the probable limits of human control over the growing complexity of AI systems, constitute the principal real existential risk.
Keywords: Artificial intelligence, existential risk, superintelligences, human control.
License
Copyright (c) 2022 Murilo Karasinski, Kleber Bez Birolo Candiotto

This work is licensed under a Creative Commons Attribution 4.0 International License.
I grant the Filosofia Unisinos – Unisinos Journal of Philosophy the right of first publication of my article, licensed under the Creative Commons Attribution 4.0 license (which allows sharing of the work with recognition of authorship and of its initial publication in this journal).
I confirm that my article is not under submission to another publication and has not been published in its entirety in another journal. I take full responsibility for its originality, and I also accept responsibility for any third-party claims concerning the authorship of the article.