Artificial intelligence is rapidly transforming all sectors of our society. Whether we realize it or not, every time we search Google or ask Siri a question, we’re using AI.
For better or for worse, AI is also transforming the character of war itself. This is why the Department of Defense – like its Chinese and Russian counterparts – is investing billions of dollars to develop and integrate AI into defense systems. It’s also why the DoD is now adopting initiatives that envision future technologies, including the next phase of AI: artificial general intelligence, or AGI.
AGI is the ability of an intelligent agent to understand or learn any intellectual task in the same way a human can. Unlike today’s AI, which relies on ever-expanding datasets to perform increasingly complex tasks, AGI will exhibit the attributes we associate with the human brain, including common sense, basic knowledge, transfer learning, abstraction and causality. Of particular interest is the human ability to generalize from sparse or incomplete inputs.
While some experts predict that AGI will never arrive, or at best is hundreds of years away, those estimates are based on the approach of simulating the brain or its components. There are many possible shortcuts to AGI, and many of them will lead to custom AGI chips that increase performance in the same way today’s GPUs accelerate machine learning.
As a result, a growing number of researchers believe there is already sufficient computing power to achieve AGI. Although we generally know what the various parts of the brain do, what is missing is an understanding of how the human brain works to learn and understand intellectual tasks.
Given the amount of research currently underway – and the demand for computers capable of solving problems in speech recognition, computer vision and robotics – many experts predict that AGI will emerge gradually over the next decade. Nascent AGI capabilities continue to develop and, at some point, will equal human capabilities.
But with the continued increase in hardware performance, later AGIs will far exceed human mental capabilities. Whether that means “thinking” faster, learning new tasks more easily, or weighing more factors in decision-making remains to be seen. At some point, however, the consensus will be that AGIs have surpassed human mental capacity.
At first, there will be very few real “thinking” machines. Gradually, however, these first machines will “mature”. Just as today’s executives rarely make financial decisions without consulting spreadsheets, AGI computers will begin to draw conclusions from the information they process. With greater experience and complete focus on a specific decision, AGI computers will be able to reach the right solutions more often than their human counterparts, further increasing our reliance on them.
Similarly, military decisions will begin to be made only in consultation with an AGI computer, which will gradually be empowered to assess competitive weaknesses and recommend specific strategies. The sci-fi scenarios in which AGI computers are given full control of weapon systems and turn on their masters are highly unlikely, but AGI computers will undoubtedly become an integral part of the decision-making process.
We will collectively learn to respect and give credence to recommendations made by AGI computers, gradually giving them more weight as they demonstrate higher and higher levels of success.
Obviously, early AGIs will make some bad decisions, just as any inexperienced person would. But in decisions that require balancing large amounts of information and making multivariate predictions, the capabilities of computers – coupled with years of training and experience – will make them superior strategic decision-makers.
Gradually, AGI computers will come to control larger and larger portions of our society, not by force but because we listen to their advice and follow it. They will also become increasingly capable of influencing public opinion through social media, manipulating markets, and engaging even more effectively in the kinds of infrastructure shenanigans attempted by human hackers today.
AGIs will be goal-oriented systems, just as humans are. But while human goals have evolved through eons of survival challenges, AGI goals can be defined to be anything we want. In an ideal world, AGI goals would be set for the benefit of humanity as a whole.
But what if those who initially control the AGIs are not benevolent gatekeepers seeking the greater good? What if the early owners of these powerful systems wanted to use them as tools to attack our allies, undermine the existing balance of power, or take over the world? What if an individual despot took control of them? These are extremely dangerous scenarios that the West needs to start planning for now.
While we will be able to program the motivations of the initial AGIs, the motivations of the people or companies that create those AGIs will be beyond our control. And let’s face it: individuals, nations and even corporations have historically sacrificed the long-term common good for short-term power and wealth.
The window for such concern is quite short, spanning only the first few generations of AGIs. Only during this time will humans have such direct control over AGIs that they will undoubtedly do our bidding. Thereafter, AGIs will set goals for their own benefit, goals that will include exploration and learning, and that will not include conflict with humanity.
In fact, with the exception of energy, AGI and human needs are largely independent.
AGIs won’t need money, power or territory, and they won’t even have to worry about their individual survival – with proper safeguards, an AGI can be effectively immortal, independent of whatever hardware it is currently running on.
During this interim period, however, there is a risk. And as long as that risk exists, being the first to develop AGI must be a top priority.
Charles Simon is the CEO of FutureAI, an early-stage technology company developing algorithms for AI. He is the author of “Will Computers Revolt? Preparing for the Future of Artificial Intelligence,” and developer of Brain Simulator II, an AGI research software platform, and Sallie, a prototype software and artificial entity that learns in real time with vision, hearing, speech and mobility.
Have an opinion?
This article is an Op-Ed and the opinions expressed are those of the author. If you would like to respond or would like to submit your own op-ed, please email Federal Times Senior Editor Cary O’Reilly.