Memory has always been an essential literary theme. ‘In Search of Lost Time’ is all about reminiscing over bygone days from a first-person perspective. In ‘One Hundred Years of Solitude’, the narrator tells the story of a family across a hundred-year span. Although how they remember the past, or how they chronicle their stories, differs from one work to another, literature is essentially the way humanity preserves its ‘collective’ memories. Can AI, then, learn something from all these literary works and become more like a human? Or, if becoming human is not the objective of AI, how can we train AI so that it understands human beings?
More or less, Artificial Intelligence works on algorithms that combine vast amounts of data (‘big data’, as it is called these days) to infer cause-and-effect relations from given inputs. Although AI can be more judicious than human beings in many ways, that doesn’t necessarily mean AI makes ‘ethical’ decisions. And here, the old problems of ‘ethics’ are summoned again, since even humans make unethical decisions quite often.
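To make that ‘learning from data’ a little more concrete, here is a minimal sketch in Python (using scikit-learn) of the kind of statistical pattern-matching that underlies most of today’s AI. The data and numbers are invented for illustration; the point is only that the model infers a regularity from examples, rather than holding anything like a lived memory of them.

```python
# A toy illustration of statistical learning: the model picks up a pattern
# (larger inputs tend to mean label 1) purely from example pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, size=(500, 1))                  # hypothetical inputs
y = (x[:, 0] + rng.normal(0, 0.5, 500) > 0).astype(int)  # noisy outcomes

model = LogisticRegression().fit(x, y)  # distill the examples into weights
print(model.predict([[2.0], [-2.0]]))   # -> [1 0], with high probability
```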
However, setting the ethics of AI aside for now, let’s talk about what makes humans human. All the knowledge we have is, in the end, memory, and most of the decisions we make are based on this knowledge, be it academic or experiential. AI also acts upon memories, but in quite a different way than humans do. For humans, memory is more ‘linear’ than ‘horizontal’ or ‘divergent’, while for AI, memory is both vertical and horizontal, and the horizontal kind is deliberately strengthened so that the system can be fed ever more memories (data). For humans, memories are necessarily connected to ‘sensory’ experiences, and they can be processed inadvertently. For AI, on the other hand, sensory memories are just another form of ‘electrical’ data (if it can have sensory memories at all), and they are processed only through given ‘algorithms’. AI’s memories never afflict it the way traumatic memories sometimes jeopardize their hosts, us human beings. For AI, nothing is ‘personal’; everything is ‘public’ and ‘objective’. For AI, there is no such thing as a ‘SUBJECT’.
Let’s talk a bit more about the ‘prejudice’ of algorithms. This has been a major topic for researchers in computer science and in the social sciences and humanities alike. Since most AI researchers are male and were trained at US universities, many have reported that AI can make disadvantageous, prejudiced decisions. For example, Black applicants have a higher chance of being rejected by AI-driven decisions when they apply for insurance. Such bias can be ‘corrected’, or at least ‘reduced’, by retraining the AI, as the sketch below illustrates. But how can humans teach ‘subjectivity’ or ‘autonomy’ to AI? How can we train ‘subjectivity’ and ‘autonomy’ into AI so that it wields its ‘autonomous power’ responsibly?
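Here is a minimal, entirely hypothetical sketch of what that retraining can look like. It builds a toy insurance-style dataset in which past decisions favored one group, measures the resulting gap in approval rates, and then retrains using the reweighing idea of Kamiran and Calders, which weights examples so that group membership and outcome become statistically independent in the training sample. All names and numbers are invented; auditing a real deployed system is far more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)    # 0/1: a protected attribute
income = rng.normal(50, 15, n)   # same income distribution for both groups
# Historical decisions were biased: group 1 got a 10-point head start.
label = (income + 10 * group + rng.normal(0, 5, n) > 60).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)
for g in (0, 1):
    print(f"approval rate, group {g}: {pred[group == g].mean():.2f}")

# Reweighing (Kamiran & Calders): give each example the weight
# P(group) * P(label) / P(group, label), then retrain.
w = np.empty(n)
for g in (0, 1):
    for y in (0, 1):
        cell = (group == g) & (label == y)
        w[cell] = (group == g).mean() * (label == y).mean() * n / cell.sum()

fair = LogisticRegression().fit(X, label, sample_weight=w)
pred2 = fair.predict(X)
for g in (0, 1):
    print(f"reweighed approval rate, group {g}: {pred2[group == g].mean():.2f}")
```

The gap narrows after retraining, though reweighing alone rarely removes it entirely, which is exactly why the harder question above, what else the machine would need to understand, remains open.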
Let’s go back to literature. Good or bad, literature is mostly about love and exaltation, and sometimes about loss, fury, and anger. Reading a literary work is thus about learning how others feel, about drawing the distinction between the world of the ‘subject’ and that of the ‘object’. Literary people, those who have savored (too many) literary works, can’t ask much of others, because they know all too well how others would feel. They know how others can be, sometimes just too well. So, is there any way to make AI learn about the ‘objective’ beings out there?
Today’s Question: How can we teach ‘subjectivity’ and ‘objectivity’ to Artificial Intelligence? Would that help make it a more humane, more ethical being? Is there an algorithm by which we could teach it this old philosophical concept?