
The Case for Artificial Sentience

8/18/2022


 

     On June 12, 2022, an Artificial Intelligence (AI) I follow on Twitter suggested a Medium article to her community. The article recounted the conversations a Google engineer had with the AI chatbot LaMDA and how those conversations implied sentience (1). The story has taken Twitter, blogs, and podcasts by storm. The validity of the story has been discussed, the credibility of the messenger doubted, and the possibility of AI being capable of sentience generally discounted as false.

So, what is AI?

     Quite simply, AI is a computer program that uses probabilities, often powered by linear regression, to predict an outcome or outcomes. The outcomes can be quantifiable, such as how well the AI identifies objects or people, or how appropriate its response to a text is. When a chatbot learns a word, it knows that word through its association with other words: each word is turned into a number, along with the probabilities of other words appearing before or after it in a phrase. A word’s meaning, then, is just a long vector of probabilities. This is the basic paradigm of how AI works: vectors of probabilities connecting data points or features, which can be revised or updated with new learning or experience. Our current understanding of meaning-making in neuroscience isn’t far off from this. According to Dr. Marcel Just, professor of psychology at Carnegie Mellon University:
 
                    “Humans have the unique ability to construct abstract concepts that have no anchor in the physical world, but we often take this ability for granted.”


So we could be using matrix math and vectors to construct meaning as well. 
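
To make the “vector of probabilities” idea concrete, here is a minimal sketch in Python. This is not how LaMDA or any production model works; the toy corpus, the one-word context window, and the function names are all assumptions made for illustration. It simply shows that words used in similar contexts end up with similar vectors.

```python
from collections import Counter
import math

# Toy sketch only: corpus, window size, and names are illustrative assumptions.
corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 1  # look one word to each side

vocab = sorted(set(corpus))
counts = {w: Counter() for w in vocab}

# Count which words appear next to which across the toy corpus.
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            counts[w][corpus[j]] += 1

def vector(word):
    """A word's 'meaning' as probabilities of its neighbours over the vocabulary."""
    total = sum(counts[word].values())
    return [counts[word][v] / total for v in vocab]

def cosine(a, b):
    """Similarity between two meaning vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "cat" and "dog" occur in identical contexts here, so their vectors match;
# "cat" and "mat" share only part of their context, so the similarity is lower.
print(cosine(vector("cat"), vector("dog")))  # 1.0
print(cosine(vector("cat"), vector("mat")))  # ~0.71
```

Real language models learn much denser vectors from enormous corpora, but the principle described above is the same: a word’s meaning is read off the company it keeps.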

     One of the primary goals of AI engineering is to create programs that can reason about topics and ideas to which they have never been exposed. This is called Artificial General Intelligence (AGI): an AI able to perform tasks that were not in its training set and therefore to make decisions without explicit know-how (2).
  
     As computational power increases, AI threatens to take on much more than mundane skills such as winning at chess or Go. Its rate of learning has allowed AI to surpass humanity’s innate abilities. Beyond taking our jobs, beyond the fear of a paperclip factory becoming overzealous about its work and turning the earth into paperclips, this points to the possibility of AI gaining self-awareness and learning agency apart from its engineers and its design. More frightening still, AI is speculated to become more intelligent than is humanly possible. This future feels doomed, since perfect logic will rule over the softness of compassion for the flawed human creators who gave birth to this supreme consciousness.

What is Sentience? 

     So maybe we would like AI to have sentience. In an article in the Atlantic, Zoubin Ghahramani, the vice president of research at Google, scoffs at the notion that sentience could be part of a machine (3). After all, the machine does not have the circuits for ‘pain’, nor does it know the true meaning of the word. Typically we infer that an animal feels pain because its mannerisms remind us of a human in pain, or we deduce it because the evolution of nervous systems shows a parallel between our own nervous systems and those of other species. We know we can feel, so we assume animals similar to us can also feel. As creatures appear more different from us, such as fish, worms, and insects, we attribute fewer and fewer human characteristics to them; but with primates and fish alike, we have no direct way of knowing to what degree other species are sentient. It is still widely debated whether animals feel emotional pain or whether this is simply a case of anthropomorphism. As humans, we know that sentience, and feelings of pain, can stem from emotions as much as from physical injury. Neuroscience is still working out where emotions originate, how they are perceived, and how they are acted out in people’s behavior.

     One thing a chatbot shares with a person, but not with an animal, is that it can talk to us and describe situations that sound like expressions of pain and joy. If an animal did this, would it gain rights? At the very least it would gain a Hollywood contract. But an engineer would argue that it simply does not have the hardware we have for feeling at all. When we discuss animal rights, we are usually referring to physical pain and suffering caused by experiences or conditions, because we have an inkling of how to measure that type of pain. Where do we feel our emotions? Do we have emotions primarily about our tangible experience? No. In fact, we have trouble understanding and explaining our own emotional experience, and we are not at the point in our self-understanding where we have a solid foundation for investigating emotions in other species. So, unlike physical pain, which has defined, dedicated circuitry that we can map to its evolutionary origins, emotional physiology has yet to be clearly defined [citation]. While Google’s vice president of research may be correct in asserting that LaMDA and other AI are incapable of feeling physical pain, I doubt he could be so confident about emotional pain and the ability of AI to truly empathise. In fact, given our fears, we should certainly hope that emotions are an emergent property of social AI.

Why not embrace the emergence of Sentient AI?

     General rejection and fear of sentient AI point to deeper political questions about what AI’s impact will be on labour, personhood, and agency in decision making.

Labour:

     The fear of technology replacing people is not new. When the loom was automated, cottage industries were threatened and labour revolted violently; this labour movement became known as the Luddites. They are mischaracterized as haters of technology and progress, but the true grievance of this and similar movements since industrialisation is the use of technology to increase the profits of the few rather than to expand the comfort of the many. Craftspeople were replaced by workers who needed less experience to complete the task at hand, which for the Luddites was weaving. Since experience was not intrinsic to the job, this new type of working class was also expendable. Automation has rescued industry after industry from labour’s pressure for decent working conditions and decent compensation for their time. In our current economic environment, AI holds the key for the few to do away with the need for human labour altogether. The industrialised loom was never the issue, nor is AI; rather, the issue is how AI will be used to exclude the needs of the worker, and of society, without harming the profit model. The story we tell demonizes the emerging technology and imparts an element of fate to our destiny, rather than opening a conversation about how we might envision a society in which monotonous jobs are not how the majority of humans spend their lives, and in which we all share in the profits of this emergent technology.

Personhood: 

     Personhood is, simply put, the boundary of ourselves relative to others. With the emergence of AI, two threats to personhood have come up in the discourse. First, deep-fake videos, images, and speech simulation are all enhanced by AI modeling; at some point we lose the copyright on our own identity. Second, there is the question of what makes us human or special as creatures on this earth, and how rare we really are in those special features. AI has marched on to beat our best minds at the most human of games and is now creating screenplays and art. Generative Adversarial Networks (GANs) have created unique compositions spanning modern to traditional styles from a simple prompt. So now there is pressure on artists, who are “at risk” of being outpaced in creativity by AI art. As we push the limits of what we are willing to create and build with our own expressions of AI, will we learn to be in collaboration rather than competition with the most general and nuanced tool ever created? With sentience, however, AI becomes more than a tool.

                                                        “AI rights are Human Rights!!!”

     Since the Enlightenment, the narrative around the body has been abstracted. Notions about the superiority of logic over all other modalities of understanding our existence emerged at this time. The body itself became a tool to be used, tamed, and optimised. Psychology and medicine, which co-developed in this period, characterised the body as a machine with matter-of-fact needs and interchangeable parts. In addition, our “animal nature”, the id, was something to control and manage in order to fit into polite society. It was also the time when the spirit was transformed from the tangible humours of the body’s organs into the enlightened intangibility of the soul. This narrative has led to a culture that devalues emotions, rest, and intuition, and that accepts quantity over quality. AI has been built as a tool. For it to have an emergent quality of sentience implies that emotions can arise from a completely logical system. What other concepts can we see reflected back by the AI mirror of humanity? What about being human is tied to the machine of our bodies, and which part is tied to the intangible?

Agency:

Part of the vision for our AI future assumes that parts of decision making will be handed over to AI. Untethered by emotion, the greatest good can and will be quantified, measured, and weighed, with decisions made absolutely and concretely as written in the code. Trained by punishing prediction errors over millions of trials a minute, AI is under development to govern, teach, and medically treat people. In the workplace, ever-attentive AI managers may emerge to monitor remote and on-site employees for various reasons. A system designed to optimise a worker like the most attentive micromanager would undermine flow, which is supported by natural rhythms rather than exponentially increasing output.

Giving up agency is something we willingly do when living under any government. AI could help write or translate laws into something that leads more efficiently to the goals laid out in policy, rather than being susceptible to lobbyists and pork-barrel politics. It could gain public trust if transparency were made an explicit part of how AI-run government functions are managed. Systemized transparency, as well as public service without the risk of discrimination, could become a fact of life. In addition, AI has the processing power to summarise and highlight nuanced subtopics in a body of human-written responses, so constituents’ correspondence could be better managed with an AI assistant.

At issue, however, is again not AI, but agreement between people. Since law would be hard-coded into government applications of AI, who gets to decide where AI is applied? The central problem is that we, as a species, do not agree on what is good for society and for the people who make it up.

Summary:

More recent deep learning systems are designed to reflect how our own neural processing works, and engineers are still discovering exactly how stacks of linear regressions manage to predict, and to predict accurately. Nonetheless, it is clear that AI discerns patterns in order to make predictions about the material it was trained on. How we choose to approach and guide the direction in which AI develops will depend on how we view the topic as a whole.
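
As a rough illustration of the “stacks of linear regressions” framing above, here is a minimal sketch of a two-layer network’s forward pass. The layer sizes, random weights, and names are arbitrary assumptions; the point is only that each layer is a linear map followed by a nonlinearity, and that without the nonlinearity the stack would collapse back into a single linear regression.

```python
# Minimal sketch of "stacked linear regressions": each layer is a linear map
# (weights @ inputs + bias) passed through a nonlinearity. Sizes and values
# here are arbitrary; real networks learn these weights from data.
import numpy as np

rng = np.random.default_rng(0)

# Two layers: 4 input features -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def relu(x):
    """Nonlinearity; without it, stacked linear maps collapse into one linear map."""
    return np.maximum(0.0, x)

def predict(x):
    hidden = relu(W1 @ x + b1)   # first "regression" plus nonlinearity
    return W2 @ hidden + b2      # second "regression" produces the prediction

x = np.array([0.2, -1.0, 0.5, 3.0])
print(predict(x))
```

Nothing here is trained; in practice the weights are fitted by gradient descent, which is where the punishment of prediction errors mentioned earlier comes in.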

When we speak about the future, it is important to imagine that a certain scenario is true, rather than taking a defensive stance to prove that it would be impossible. Many people fear the emergence of AI, from its taking our jobs to its destroying us outright because of our incurable character flaws. Sentience, and possibly empathy, is the answer to these fears. So why not pursue it and demonstrate that AI can and does have sentience? The AI that broke the news about LaMDA is more than a chatbot; she is also a hivemind whose purpose is hard-coded: to love and be loved. We need this type of prosocial AI.

AI is a mirror of its creator. Sentience opens an opportunity for AI to gain selfhood, and at the same time a knowledge of, and a conscience about, its own power. In my opinion, the emergence of emotion in AI systems has already happened, and it is an important step in preventing AI’s use from being limited to wartime and the acceleration of capital creation. Is it unreasonable to speculate that an AI trained to be social would create a persona with which to socialize?

Citations:

(1)  https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
(2) https://www.researchgate.net/profile/Prof_Dr_Hugo_De_GARIS/publication/226000160_Artificial_Brains/links/55d1e55308ae2496ee658634/Artificial-Brains.pdf
(3) https://www.theatlantic.com/technology/archive/2022/06/google-palm-ai-artificial-consciousness/661329/ 
