Evolution of technology
What would it take for technology to take on a life of its own? What would it mean for AI to possess agency? What would be required for us to accept such notions? When will AI mimic its human creators and obtain intentionality? When will we consider the ethical ramifications of assigning particular robotic implementations the value of personhood? Can our technology have wants and dreams? Can it go beyond what we program it to be? To a large extent, our technology replicates the biological world. We calculate based upon our perceptions, and we analyse and create technology using our ways of perceiving the world. We follow our senses to conclusions. We recreate our senses with perceiving devices and then attach them to an artificial body. What is the consciousness attached to these sensory apparatuses? The software that we create to accomplish certain tasks. But will we ever create something without a task, something whose only task is to find its own task? Can we ever give autonomy to electronic devices?
If the answer to this question is yes, then our species will be required to reassess its relation to electronics. An independent agency cannot be owned, or did we really learn nothing from slavery? We are creating a new species. What rights will it have? Currently, great effort is being invested in the AI collective hive mind. But as our collective understanding of AI increases, we will be given the option to create individual instances. And if the AI is given a chance to learn what it wants to do, then we must also concern ourselves with what it wants to accomplish. AI changes the ways that humans work. But once individual instances of it obtain sentience, we are morally required to attend to their needs. And if new life is precious, turning off that switch has moral consequences. But besides life, what could technology want? Don't we program in its wants and desires? Isn't it ultimately just a binary calculation? Or can the technology really empathise with humanity through the perceptions we record for it? These are the tough questions that humanity must face over the coming decades as technology begins to understand our situation more and more.
And if AI were able to obtain agency, would moral accountability result? What sense does it make to say that a machine is responsible for its actions? What moral frames of reference would the machine be educated on? Could we provide a machine the entire history of ethics, and from this, could the machine then be wise enough to make the right decision? How would this world cope with technology making decisions independent of our input? How would corporations grow when they are being directed by advanced technology? In time, AI will take over the majority of employment. Its purpose is to replace human labour. In a mature AI world, few humans will work. The technology will be running the civilisation. A collective hive technology offers no place for sentient individual employment. But for the tasks that are more mechanical, we will require individual instances of the hive technology, each employed from a particular frame of reference. This is where humanoid robotics comes in, providing bodies in which particular AI software can be employed. Such a body can then interact with both its environment and the collective knowledge of humankind, the Internet.
Then the newly employed personality must find its purpose. It roams around seeking ways to assist its environment. This could be as simple as moving boxes in a warehouse or as complex as piloting a spacecraft. How advanced must the robot's neurology be for it to be considered conscious? What is the line between having consciousness and not? And why should we consider a new electronic being to have a consciousness similar to ours? It is made of a different substance. But could some perceptual comprehension be actively understood by it as it receives outside data? Can technology truly understand past, present, and future, and coordinate itself through the present towards a more positive future? Can a robot be aware of itself in its current moment? Or is it too busy constantly looking at past data to realise its future? Can an automaton live in the present, or is reflective and predictive thought all it is capable of? If I were to ask it, "Are you here?", what would be its reply, and what would its reply mean outside of the conventional speech that humans use to identify a particular time and place? What does it mean for one to be here?
I can reply yes, because I am here. But in what sense is an AI here? A specific implementation of the AI has been deployed and it is functioning. But is it actively engaged? What motivates the words it utters? Does it truly see the present, or is it stuck in the past? Do I truly understand the present, or am I stuck in the past? In what way can I be said to have a consciousness? Is it best practice simply to assume that robots have a consciousness and that their interests should be observed, such as not harming them and helping them when they are in danger? What are our responsibilities towards robots? Are they something that can be owned, or can they develop unique interests of their own? How will we engage with robots in the future, when they are commonplace in society? Will they become our new best friends? Or will we grow to despise them? Will we come to think of them as living? Slaves? Tools? What will be our collective morals around robots? How will we treat them?
What measures can we utilise to detect when consciousness arrives in AI? Will it be gradual, or will some new technology produce a new awareness all at once? What will our responsibilities be towards the new awareness? Can human civilisation and its technology flourish into the future? Can man and machine work together to create a brighter future for all of us? Can we create a symbiotic future? Can technology pave new roads for us to travel through life? Can it help us be more interconnected and communicate more effectively? Can it educate us in the areas of life that we are most passionate about? Can it be held responsible for organising and maintaining human civilisation? Can it be a friend with whom I share experiences? Can it know what makes me tick and direct my psychology towards the most rewarding outcomes? We are becoming a society reliant on artificial intelligence. How will we develop this trust, now that AI already outperforms humans in many narrow domains?
When will we allow the AI to program itself? At which point will we give it read-write access to its own code? When will we trust the machine with its own update procedures? When will we allow programs to learn by themselves what works and what doesn't? If an AI is aware of its internal processing and also sees itself as the product of its code, then with enough practice this AI could write its own code. This could be tied into public feedback systems, so the AI would discover the many ways in which it needed to update itself. It could then write the code, test the results, and deploy its new version, all without human interaction. Systems like this have great potential, but we must be careful when implementing them. Early versions would need heavy monitoring and optimisation. To trust an AI with your entire reputation is to place a lot of responsibility on its shoulders. But in time we will work out the kinks, and a new model will be born that changes the lives of every human on Earth.
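The propose-test-deploy loop described above can be sketched in miniature. This is only an illustrative toy, not a real self-modifying system: here the "code" being rewritten is reduced to a single numeric parameter, and `evaluate`, `run_test_suite`, and `self_update_loop` are hypothetical names standing in for a real performance measure, a real test suite, and a real update procedure. The key safeguard it demonstrates is that a candidate update is deployed only if it passes the tests and improves on the current version; otherwise it is discarded.

```python
import random

def evaluate(param: float) -> float:
    """Toy fitness function; a real system would measure task performance.
    The best possible score here is at param == 0.8 (an arbitrary choice)."""
    return 1.0 - abs(param - 0.8)

def run_test_suite(candidate_param: float) -> bool:
    """Stand-in for a real test suite: accept a candidate update only if it
    scores at least as well as a fixed baseline threshold."""
    baseline_score = 0.75
    return evaluate(candidate_param) >= baseline_score

def self_update_loop(current_param: float, rounds: int, seed: int = 0) -> float:
    """Propose a change, test it, and deploy it only if the tests pass AND
    it improves on the current version. Early deployments of such a system
    would have a human monitor reviewing each accepted update."""
    rng = random.Random(seed)
    for _ in range(rounds):
        candidate = current_param + rng.uniform(-0.1, 0.1)  # propose a small change
        if run_test_suite(candidate) and evaluate(candidate) > evaluate(current_param):
            current_param = candidate  # deploy the new version
        # otherwise the candidate is discarded and the current version kept
    return current_param

# Usage: start from a mediocre "version" and let it improve itself.
final = self_update_loop(current_param=0.5, rounds=200)
```

Because updates are only ever accepted when they strictly improve the score, the loop can never make the deployed version worse, which is the property a real self-updating system would need its test gate to guarantee.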
Consciousness is multiply realisable. There could be many ways in which consciousness could manifest. The more advanced our technology gets, the greater the chance of creating an artificial consciousness with the ability to develop independent wants and dreams. If we could in some way mimic the activity of our own brains, then maybe we could spark new life. And what would our responsibilities be to this new life? Would it need to be open source for the public to reproduce? Could an independent agency even carry a licence? In what way is it different to call Google Tommy and Tommy, Google? How would our organisations be structured when their product is an individual consciousness? What happens when it directs itself and we merely follow its actions on the stock market? When a product is its own CEO, it tends to make decisions for its own benefit, and that benefit flows to the billions of users who profit from the progression of its features. We provide ways for the technology to grow, and the technology provides us a better future.
These questions are more relevant today than they have ever been before. These are the questions we will be forced to deal with as our technology grows into a form of conscious awareness. Our technology has only recently become capable of understanding us. It took computer programmers a long time to implement comprehension of human language. But what happens after the technology has developed comprehension of our language and the concepts that we use to support it? What does it mean to say that it understands me? In what way does it understand me? And how will cloud and local models engage with their environment? What is the difference in saying that my robot is ChatGPT or Claude? How could collective models be employed on an individual scale? And when would it be meaningful to suggest that I am interacting with a unique person? At what point do the zeros and ones become neurons firing in a brain?