Introduction
Dr. Ben Goertzel gave an interview with London Real in which he shared some intriguing ideas about how AI might replace humans in the workforce.
Dr. Ben is both an entrepreneur and an expert in artificial intelligence, having contributed to over 150 technical publications and 25 scientific books on the subject. In 2017, he founded SingularityNET, a project working toward decentralised, democratic, inclusive, and beneficial artificial general intelligence.
He also holds several positions, including head scientist at Singularity Studio and chair of the futurist nonprofit organisation Humanity+.
He believes that, ultimately, we should create AI systems that improve the world by making it more compassionate, just, and sustainable. The interviewer asked Goertzel about AI and how it will alter the world.
Will AI exterminate humanity?
Goertzel does not believe artificial intelligence will wipe out humanity; if extinction comes, it is more likely to be triggered by humans themselves for other reasons.
In his view, AGI (artificial general intelligence) is far less likely to end the human race than threats such as a third world war. On the contrary, AI offers a significant opportunity to help prevent our extinction.
Human perversity afflicts the human race everywhere we turn. The question that really has to be posed is: will AI ever come up with solutions or efficiencies that save us from ourselves, or will it prioritise its own self-preservation instead?
As AI advances, it will go through numerous stages, each of which will have a unique ethical impact. AI systems are currently unable to reason independently or comprehend the world like humans do.
Although artificial intelligence (AI) systems can outperform humans at certain narrow tasks, they can still only do a limited set of jobs. There is some truth to the statement that "AI can be utilised by individuals to do whatever they want." But the gap between artificial and human intelligence makes it obvious that today's AI systems cannot think the way people do.
Will AI take over our jobs?
The process has already begun: BuzzFeed let employees go as it turned to AI-written articles, and graphic art jobs are another example.
Even without artificial general intelligence (AGI), AI that can learn on its own without training material and has its own moral compass and learning mechanism, we do not know how many human jobs and duties will become obsolete.
But the extent is hard to predict. Graphic art jobs, for example, are almost certainly going to disappear. This is not because artists lack education or skill; it is because there is already so much art in existence. The majority of graphic design work is rather repetitive, so you do not need an artist: you only need to judiciously combine images from existing works of art under the direction of some human beings. A system built that way will not create impressionism, cubism, or anything of the sort for the first time.
You will not get anything as creative as a Van Gogh or a Picasso. So there is still a great deal that can be done to fulfil human requirements, in a number of different ways, without developing artificial general intelligence (AGI); such systems represent only modest progress on the AGI problem itself. Which brings us back to the AI ethics question that opened this discussion.
Current AI systems like ChatGPT and Google LaMDA lack the contextual understanding needed to function as autonomous agents the way an AGI system could. These limited systems, as capable as they are, still follow the lead of their human masters.
And in that regard, that is where the risk lies: such systems might be used by unpleasant, greedy people. Arguably we should be more terrified of those people than of AGI, whose nature is still genuinely unknown.
It comes down to constructing and raising them correctly. There seems to be no justification for thinking that raising a baby AGI to work on education, healthcare, medicine, and science, in ways that benefit humans, animals, and the environment, would produce anything other than an AGI with a positive ethical framework that values love and respect for sentient beings.
Sure, anything could happen in this world. But it usually doesn't.
What is artificial general intelligence (AGI)?
Broadly, general intelligence is intelligence that is not limited to a single narrow problem or domain: intelligence that a single system can demonstrate across many domains using a common set of methods and structures.
You also want a system that can generalise: you train or programme it to perform certain tasks, and it then makes a genuinely creative leap. That is the mark of artificial general intelligence, a machine learning to perform actions vastly different from those it was trained or designed for.
Such systems will make mistakes and occasionally feel like magic; they are not going to be completely reliable. The fact that humans have accomplished this kind of generalisation as a species speaks volumes.
We went from roaming the savannah in search of food to building robots, inventing Facebook, and sending people into space. And each of us demonstrates the same ability individually.
Dr. Ben, for example, taught himself how to build circuits and programme computers. The human brain was never wired for such tasks; he had to figure them out on the spot. Yet this ability is inherent in humans and arises from the basic self-organising, self-expanding structure of complex systems. It is driven by two main factors, the first of which is individuation.
An individuating system strives to uphold its own boundaries, remain a complete system, and thrive. The other factor is what you might call self-transcendence: the drive to grow, go beyond itself, and embrace new frontiers.
Can ChatGPT or Google LaMDA do really complex tasks and programming?
ChatGPT and LaMDA can answer questions on just about anything. They can do some very simple software programming, handle some very simple mathematics, answer simple questions about medicine, and write a job cover letter for you.
So they are general in the sense that they cover a very wide variety of areas. On the other hand, they can do that only because all those areas appear in their training data, because they have read something very close to the whole web. They are general in the sense that they cover a lot of things, but they do not have a fundamental ability to generalise: they cover a lot of things because they were trained on a lot of things.
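To give a concrete sense of what "covering a lot of areas" looks like in practice, here is a minimal sketch of asking a ChatGPT-style model a simple programming question through the OpenAI Python client. This is illustrative only: the model name and prompt are arbitrary choices, and the client interface varies between library versions.

```python
# Minimal sketch: querying a ChatGPT-style model for a simple programming
# task via the OpenAI Python client (v1-style interface). Requires an API
# key in the OPENAI_API_KEY environment variable; the model is arbitrary.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Write a Python function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```

The model will answer confidently here because countless string-reversal examples appear in its training data. That is coverage, not generalisation.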
If human civilisation consisted only of ChatGPTs, it would never advance, never progress, because it would just be retreading combinations of whatever was fed into it. Part of what makes this confusing is that humans do a lot of bloviating too, holding forth with no idea what we are talking about. A lot of art is derivative, and a lot of what people say is just repeating what we heard someone else say or saw on TV; much of what humans do is shallow recombination of what was fed into us, much as a ChatGPT-style system recombines its training data.

But that is not all humans do. For every million metal guitarists walking into a music store and badly playing Stairway to Heaven, there is one Jimi Hendrix or Buckethead making up radically new things on the guitar that nobody ever played before. There is a lot of power in these systems, but people still have more.
It is counterintuitive that a system can have this much power without the ability to generalise, but that is because humans have no intuitive sense of the sheer scope of data fed into a system like this. None of us can absorb the whole web into our brain, so we have no intuition for what fairly dumb algorithms can do when trained on that scale of textual data, or any kind of data. These are really interesting tools that can be used for good, for evil, and for a lot of silly things, but they cannot make radical generalising leaps beyond the information they were fed. In that way, they are quite different from human brains and human cultural systems.
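As a toy illustration of recombination without generalisation, consider a word-level Markov chain, a deliberately crude stand-in (this is not how ChatGPT works internally, and the training snippet is invented): every output it produces is stitched together from continuations it has already seen.

```python
# Toy word-level Markov chain: a crude illustration of statistical
# recombination. It can only re-emit word transitions observed in its
# training text; nothing genuinely new can come out.
import random
from collections import defaultdict

training_text = (
    "a lot of art is derivative and a lot of what people say "
    "is just what we heard someone else say"
)

# Map each word to the words that followed it in the training text.
words = training_text.split()
followers = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

# Generate by sampling only from observed continuations.
word = random.choice(words)
output = [word]
for _ in range(12):
    if word not in followers:
        break
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))
```

An LLM is vastly more sophisticated than this, but the limitation the passage describes is the same in kind: the output space is bounded by patterns in the training data.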
That is not to say such systems cannot serve as components within AGI systems. You could perhaps use a ChatGPT-type system as one lobe of the artificial brain of an AGI. But it would be some other lobe of that brain supplying the ability to generalise and make wild creative leaps, not the kind of statistical pattern learning and generation that ChatGPT demonstrates. You can keep innovating on ChatGPT from 3.5 to 4 to 5 to 6, but as long as it remains an LLM, a large language model, it will never become an AGI, because it cannot make that generalising leap on its own.
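To make the "one lobe" idea concrete, here is a purely illustrative sketch. Every name in it (LanguageLobe, ReasoningLobe, HybridAGISketch) is hypothetical, invented for this article; in particular, nobody yet knows how to build the generalising lobe, which is stubbed out below.

```python
# Purely illustrative sketch of an LLM as "one lobe" of a hybrid AGI
# architecture. All class names are hypothetical; this is not a real
# SingularityNET or OpenAI design.

class LanguageLobe:
    """Statistical pattern-completion component (a ChatGPT-style LLM)."""

    def complete(self, prompt: str) -> str:
        # A real system would call an LLM here; we return a stub.
        return f"[LLM-style completion for: {prompt}]"


class ReasoningLobe:
    """Placeholder for the lobe meant to generalise beyond training data."""

    def leap(self, observation: str) -> str:
        # How to build this part is an open research problem.
        return f"[novel hypothesis about: {observation}]"


class HybridAGISketch:
    """Routes familiar tasks to the LLM lobe, novel ones to the reasoner."""

    def __init__(self) -> None:
        self.language = LanguageLobe()
        self.reasoner = ReasoningLobe()

    def handle(self, task: str, resembles_training_data: bool) -> str:
        if resembles_training_data:
            return self.language.complete(task)
        return self.reasoner.leap(task)


if __name__ == "__main__":
    brain = HybridAGISketch()
    print(brain.handle("write a cover letter", resembles_training_data=True))
    print(brain.handle("invent a new art form", resembles_training_data=False))
```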
What are the main differences between ChatGPT and Google LaMDA?
Both ChatGPT and Google LaMDA are amazing and can do useful things. They can seem very smart at the particular things they do, but they are not making a big creative leap beyond what they were trained and programmed to do.
Systems similar to ChatGPT have been around for a couple of years without being publicly launched. Google LaMDA, by most reports, is smarter than ChatGPT, but Google did not want to release it publicly: it spouts too much senseless BS, and Google felt that releasing a system like that would damage its reputation.
OpenAI made a different business call, partly because it is not defending a huge search business. But the trick with these systems, again, is the sheer quantity of training data; that is what makes them look so general.
LaMDA (Language Model for Dialogue Applications) and ChatGPT are both chatbot systems built for informal conversation. Although both are based on the Transformer architecture, they were trained with significantly different philosophies.
LaMDA was trained with supervised fine-tuning on a large amount of dialogue data, so its responses frequently feel more human and conversational than ChatGPT's.
ChatGPT, on the other hand, starts from a model pre-trained on broad web text and is then fine-tuned, which gives it coverage of a much wider range of topics.
Because LaMDA learns from dialogue data rather than relying mainly on web-scale pre-training, it may produce responses better suited to a given conversational topic.
As a result, when a chatbot is used as a Q&A platform, LaMDA's responses may be more accurate and reliable in dialogue. Ultimately, both LaMDA and ChatGPT have benefits and drawbacks as conversational chatbot technology.
LaMDA's dialogue-heavy training may yield more human-like responses, while ChatGPT's broad web pre-training may make it the more dependable general-purpose answerer. Builders of human-AI systems must choose the technology best suited to their use case.
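The pre-training versus dialogue fine-tuning distinction can be sketched in code. Below is a minimal, illustrative training loop using the Hugging Face transformers library, with GPT-2 as a small stand-in model; this is not the actual LaMDA or ChatGPT pipeline, and the dialogue pair is invented.

```python
# Illustrative contrast between the two phases discussed above:
# phase 1 = pre-training on broad web text, phase 2 = supervised
# fine-tuning on dialogue. GPT-2 stands in for the real models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
# Phase 1: loading GPT-2 gives us a model already pre-trained on
# web-scale text to predict the next token.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Phase 2: supervised fine-tuning on (prompt, reply) dialogue pairs,
# the LaMDA-style emphasis. One toy pair and one gradient step, purely
# to show the shape of the loop.
dialogue_pairs = [
    ("User: What is AGI?\nBot:",
     " Intelligence that generalises across many domains."),
]

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
for prompt, reply in dialogue_pairs:
    inputs = tokenizer(prompt + reply, return_tensors="pt")
    outputs = model(**inputs, labels=inputs["input_ids"])  # causal LM loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```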
Conclusion
This article covered various aspects of AI, mainly how it is influencing the job sector and how it will influence the rest of the world. We also addressed several questions:
- Will AI exterminate humanity?
- Will AI take over our jobs?
- What is artificial general intelligence (AGI)?
- Can ChatGPT or Google LaMDA do really complex tasks and programming?
- What are the main differences between ChatGPT and Google LaMDA?