Algorithm and Learning. Ethical and Theoretical Hypothesis on Empathic Robotics and Social Robotics

ARTICOLI / 2 / Igor Pelgreffi /


The article explores the relationship between algorithms, machine learning and artificial empathy. The pivotal problem is whether, and how, artificial agents guided by algorithms can learn emotions. The first question calls for an in-depth analysis of the concept of learning (with references to anthropotechnics, to Aristotle, to Merleau-Ponty and to Bateson), showing that even in the human sphere learning involves forms of automatic and procedural repetition, such as the body schema or the behavioural algorithm. The hybrid character (between technique and nature) of learning processes in a broad sense is thus underlined. The second issue is emotional learning by artificial agents. Drawing on an extensive specialized bibliography, especially from computer science and engineering, the possibilities and limits of such learning are examined, and the central difficulty is discussed: the machine, or the algorithm, has no living body (Leib), only a mechanical one (Körper). In the artificial domain, the repetition and learning of patterns appear very far from those of human beings. The article nevertheless puts the usual categories under pressure and formulates hypotheses, highlighting the possibility of a heterogeneous hybridization between the repetition learned by the machine and its capacity to complexify its own behaviours, making them much less dissimilar from those of natural agents. The third question concerns the different ways in which the ‘affective’ dimension of the machine can be interpreted, through a relational, systemic approach (social robotics) in which artificial empathy takes shape only within the relationship with the other. Several topics are explored, from emotional grafting (the emotional chip) to a form of autopoiesis pointing towards artificial autonomy, today present in a larval but not absent form, with references especially to Dumouchel and Guattari.
In the concluding part, some hypotheses are formulated on a conceptual frame for managing the ethical issue of the autonomy, potentially also emotional, of artificial agents guided by algorithms. The article suggests an attitude capable of thinking an ethics of hybridization and, at the same time, a hybridization of the usual ethical attitudes, one that goes beyond the opposition between roboethics and machine ethics.



Primary Bibliography

  • Anderson M., Anderson S. (eds.) 2014. Machine Ethics, Cambridge University Press, Cambridge.
  • Aristotle 2000. Nicomachean Ethics, tr. by R. Crisp, Cambridge University Press, Cambridge.
  • Bateson G. 1972. Steps to an Ecology of Mind, Ballantine Books, New York.
  • Berardi F., Sarti A. 2008. Run. Forma, vita, ricombinazione, Mimesis, Milano-Udine.
  • Bostrom N. 2014. Superintelligence. Paths, Dangers, Strategies, Oxford University Press, Oxford.
  • Canguilhem G. 1992. Machine and Organism [1952], in «Incorporations» (J. Crary, S. Kwinter eds.), Zone Books, New York, pp. 45-69.
  • Christian B. 2020. The Alignment Problem. Machine Learning and Human Values, W. W. Norton & Company, New York.
  • Norman D. A. 2005. Emotional Design, Basic Books, New York.
  • Damasio A. 2004. Looking for Spinoza. Joy, Sorrow and the Feeling Brain, HMH, Montreal.
  • Derrida J. 1994. Nietzsche and the machine, «Journal of Nietzsche Studies», 7, pp. 7-66.
  • Dumouchel P., Damiano L. 2016. Vivre avec les robots. Essai sur l’empathie artificielle, Seuil, Paris.
  • Fadini U. 1999. Principio metamorfosi. Verso un’antropologia dell’artificiale, Mimesis, Milano.
  • Guattari F. 1995. Chaosmosis. An ethico-aesthetic paradigm [1992], Indiana University Press, Indiana.
  • High-Level Expert Group on AI 2019. Ethics Guidelines for Trustworthy AI, EU Guidelines, Bruxelles.
  • Houdé O. 2017. Apprendre à résister, Le Pommier, Paris.
  • Lin P., Jenkins R., Abney K. 2017. Robot Ethics 2.0. From Autonomous Cars to Artificial Intelligence, Oxford University Press, New York.
  • Liu J. et al. (eds.) 2005. Autonomy Oriented Computing. From Problem Solving to Complex Systems Modelling, Kluwer Academic Publishers, New York.
  • Mauss M. 1973. Techniques of the Body [1934], tr. by B. Brewster, «Economy and Society», 2, pp. 70-88.
  • Merleau-Ponty M. 1963. The Structure of Behaviour [1942], Beacon, Boston.
  • OECD 2021. OECD Council Recommendation on Artificial Intelligence, OECD, Paris.
  • Paiva A. 2000. (ed.). Affective Interactions: Toward a new generation of computer interfaces, Springer, Berlin.
  • Picard R. W. 1995. Affective Computing, M.I.T. Media Laboratory Perceptual Computing Section, Technical Report n. 321, Cambridge MA.
  • Prokopenko M. (ed.) 2014. Guided Self-Organization. Inception, Springer, Heidelberg.
  • Sarti A., Citti G., Piotrowski D. 2022. Differential Heterogenesis. Mutant Forms, Sensitive Bodies, Springer, Berlin.
  • Simondon G. 2014. Sur la technique, PUF, Paris.
  • Sloterdijk P. 2009. Du mußt dein Leben ändern. Über Anthropotechnik, Frankfurt 2009.
  • Tamburrini G. 2020. Etica delle macchine, Carocci, Roma.
  • Thompson E., Varela F. J. 2001. Radical embodiment. Neural dynamics and consciousness, «Trends in Cognitive Sciences», 5, 10, pp. 418-425.
  • Tzafestas S. G. 2016. Roboethics. A Navigating Overview, Springer, Berlin.

Secondary Bibliography

  • Amigoni F., Schiaffonati V. 2018. Ethics for Robots as Experimental Technologies. Pairing Anticipation with Exploration to Evaluate the Social Impact of Robotics, «IEEE Robotics and Automation Magazine», 1, pp. 30-36.
  • Bernacer J., Lombo J. A., Murillo J. I. (eds.) 2015. Habits: plasticity, learning and freedom, «Frontiers in Human Neuroscience», Lausanne.
  • Froese T. 2007. On the role of AI in the ongoing paradigm shift within the cognitive sciences, in M. Lungarella et al. (eds.), 50 Years of Artificial Intelligence, Lecture Notes in Computer Science, vol. 4850, Springer, Berlin, pp. 63-75.
  • Goertzel B. 2014. Artificial General Intelligence: Concept, State of the Art, and Future Prospects, «Journal of Artificial General Intelligence», 5, 1, pp. 1-46.
  • Houdé O. 2017. La neuroéducation: magie ou science? Cerveau & Psycho/Pour la science, «Chronique. L’école des cerveaux», 86, pp. 80-83.
  • Jobin A., Ienca M., Vayena E. 2019. The Global Landscape of AI Ethics Guidelines, «Nature Machine Intelligence», 1, 9, pp. 389-399.
  • Ishiguro K. 2021. Klara and the Sun, Random House Large Print, London.
  • Lawless W. F. et al. (eds.) 2017. Autonomy and Artificial Intelligence. A Threat or a Savior?, Springer, Cham.
  • Li H. et al. 2011. Towards an effective design of social robots, «Int. J. of Social Robotics», 3, 4, pp. 333-335.
  • Miller T. 2019. Explanation in Artificial Intelligence. Insights from the Social Sciences, «Artificial Intelligence», 267, pp. 1-38.
  • Mori M. 1970. Bukimi no tani (The uncanny valley), «Energy», 7, 4, pp. 33-35.
  • Murray I. R., Arnott J. L. 1993. Toward the simulation of emotion in synthetic speech, «J. Acoust. Soc. Am.», vol. 93, feb., pp. 1097-1108.
  • Norman D. A. 2004. Emotional Machines, in Id., Emotional Design, Basic Books, New York.
  • OECD 2020. Principles on AI, OECD, Paris.
  • Parisi D. 2014. Future robots. Towards a robotics science of human beings, John Benjamins, London.
  • Parisi D. 2004. Internal robotics, «Connection Science», 16, 4, pp. 325-338.
  • Pelgreffi I. 2019. Bernard Stiegler e la critica della società automatica, in B. Stiegler, La società automatica, Meltemi, Milano, pp. 11-26.
  • Pelgreffi I. 2020. Ambiente digitale, automatismi e corporeità, in (F. Miano, L. Alici eds.) L’etica nel futuro. Orthotes, Napoli-Salerno, pp. 389-399.
  • Pfeifer R., Bongard J. C. 2006. How the Body Shapes the Way We Think, MIT Press, Cambridge MA.
  • Pfeifer R., Scheier C. 1999. Understanding Intelligence, MIT Press, Cambridge MA.
  • Pitt L., Valiant L. 1988. Computational limitations on learning from examples, «Journal of the ACM», 35, pp. 965-984.
  • Ramus F. 2018. Neuroéducation et neuropsychanalyse: du neuroenchantement aux neurofoutaises, «Intellectica», 1-2, 69, pp. 289-301.
  • Ryan M., Stahl B. 2021. Artificial Intelligence Guidelines for Developers and Users, «Journal of Information, Communication and Ethics in Society», 19, 1, pp. 61-86.
  • Schiaffonati V. 2021. Computer, robot ed esperimenti, «aut aut», 392, pp. 51-62.
  • Schmetkamp S. 2020. Understanding AI. Can and Should we Empathize with Robots?, «Review of Philosophy and Psychology», 11, pp. 881-887.
  • Scott R. 1982. Blade Runner, USA.
  • Sekuler R., Blake R. 1998. Star Trek on the Brain. Alien Minds, Human Minds, Freeman, New York.
  • Sharkey A., Sharkey N. 2010. The crying shame of robot nannies. An ethical appraisal, «Interaction Studies», 11, 2, pp. 161-190.
  • Sharkey A., Sharkey N. 2012. Granny and the robots. Ethical issues in robot care for the elderly, «Ethics and Information Technology», 14, 2, pp. 27-40.
  • Sparrow R., Sparrow L. 2006. In the hands of machines? The future of aged care, «Minds and Machines», 16, 2, pp. 141-161.
  • Testa I., Caruana F. (eds.) 2020. Habits. Pragmatist Approaches from Cognitive Science, Neuroscience, and Social Theory, Cambridge University Press, Cambridge.
  • Tisseron S. 2015. Le jour où mon robot m’aimera. Vers l’empathie artificielle, Albin Michel, Paris.
  • Vallor S., Bekey G. A. 2017. Artificial Intelligence and the Ethics of Self-Learning Robots, in Lin et al (eds), Robot Ethics 2.0. From Autonomous Cars to Artificial Intelligence, Oxford University Press, New York, pp. 338-353.
  • van de Poel I. 2016. An Ethical Framework for Evaluating Experimental Technology, «Science and Engineering Ethics», 22, pp. 667-686.
  • Veruggio G. 2007. EURON Roboethics Roadmap, Release 1.2, January.
  • Veruggio G., Operto F., Bekey G. 2017. Roboethics. Social and Ethical Implications, in B. Siciliano, O. Khatib (eds.), Springer Handbook of Robotics, Springer, Berlin, pp. 2135-2160.
  • Wallach W., Allen C. 2009. Moral Machines. Teaching Robots Right from Wrong, Oxford University Press, Oxford.
  • Weinberger D. 2017. Machines Now Have Knowledge We’ll Never Understand, «Wired», online, 18.4.2017.
  • Wachter S., Mittelstadt B., Floridi L. 2017. Transparent, Explainable and Accountable AI for Robotics, «Science Robotics», 6.
  • Ziemke T. 2003. What’s that thing called embodiment?, in Proceedings of the 25th Annual Meeting of the Cognitive Science Society, Lawrence Erlbaum, pp. 1305-1310.
  • Ziemke T. 2016. The body of knowledge, in L. Damiano, Y. Kuruma, P. Stano (eds.), What can synthetic biology offer to artificial intelligence (and vice versa)?, «BioSystems», 148, pp. 4-11.
  • Ziemke T., Lowe R. 2009. On the role of emotion in embodied cognitive architectures: from organisms to robots, «Cognitive Computation», 1, 1, pp. 104-107.
Lo Sguardo is a full open access project. You can download all the articles for free, but we will be glad to receive a little support through PayPal.
Support Lo Sguardo