“Artificial General Intelligence” is the next stage of artificial intelligence

Artificial intelligence is rapidly changing all sectors of our society. Whether we realize it or not, every time we do a Google search or ask Siri a question, we’re using artificial intelligence.

For better or worse, the same is true of the very nature of war. This is why the Department of Defense – like its counterparts in China and Russia – is investing billions of dollars to develop artificial intelligence and integrate it into its defense systems. It’s also why the department is now adopting initiatives that envision future technologies, including the next stage of AI – Artificial General Intelligence.

Artificial general intelligence is the ability of an intelligent agent to understand or learn any intellectual task in the same way that humans do. Unlike today’s AI, which relies on ever-larger data sets to perform more complex tasks, AGI will exhibit the same traits we associate with the human brain, including common sense, basic knowledge, transfer learning, abstraction, and causal reasoning. Of particular importance is the human ability to generalize from meager or incomplete input.

While some experts predict that AGI will never happen, or is at least hundreds of years away, those estimates are based on approaches that simulate the brain or its components. There are many possible shortcuts to AGI, many of which will lead to dedicated AGI chips that boost performance in the same way that today’s GPUs speed up machine learning.

Accordingly, a growing number of researchers believe that computing power is already sufficient to achieve artificial general intelligence. Although we generally know what the various parts of the brain do, what we lack is insight into how the human brain learns and understands intellectual tasks.

Given the amount of research currently underway – and the demand for computers capable of solving problems in speech recognition, computer vision, and robotics – many experts predict that artificial general intelligence is likely to emerge gradually within the next decade. The capabilities of these nascent AGIs will continue to evolve until, at some point, they equal human capabilities.

But with continued increases in hardware performance, subsequent AGIs will greatly exceed human mental capabilities. Whether this means “thinking” faster, learning new tasks more easily, or weighing more factors in decision-making remains to be seen. At some point, though, the consensus will be that artificial general intelligence has surpassed human mental capabilities.

At first, there will be very few true “thinking” machines, but these initial machines will gradually “mature.” Just as today’s executives rarely make financial decisions without consulting spreadsheets, decision-makers will come to rely on AGI computers to draw conclusions from the information they process. With greater expertise and a complete focus on a specific decision, AGI computers will arrive at the right solution more often than their human counterparts, increasing our dependence on them.

In a similar way, military decisions will come to be made only in consultation with an AGI computer, which will gradually be entrusted to assess competitive weaknesses and recommend specific strategies. While sci-fi scenarios in which these computers are given complete control of weapon systems and turn on their masters are highly unlikely, AGIs will undoubtedly become an integral part of the decision-making process.

We will collectively learn to respect and trust the recommendations made by AGI computers, gradually giving them more weight as they demonstrate greater and greater levels of success.

Obviously, early artificial general intelligences will make some bad decisions, just as any inexperienced person would. But in decisions that require balancing large amounts of information and making predictions across multiple variables, the capabilities of computers – coupled with years of training and experience – will make them superior strategic decision-makers.

Gradually, these intelligent machines will gain control of larger and larger parts of our society, not by force but because we listen to and follow their advice. They will also become increasingly able to influence public opinion through social media, manipulate markets, and carry out the kinds of infrastructure attacks that human hackers attempt today.

AGIs will be goal-driven systems, just as humans are. But whereas human goals have evolved through eons of survival challenges, an AGI’s goals can be set to whatever we like. In an ideal world, the goals of artificial general intelligence would be set for the benefit of all humanity.

But what if those who initially control AGI are not benevolent actors seeking the common good? What if the first owners of these powerful systems want to use them as tools to attack our allies, undermine the current balance of power, or take over the world? What if a tyrant were to gain control of such a system? Any of these would obviously be a very dangerous scenario, one the West must start planning for now.

While we will be able to program the motivations of the first AGIs, the motivations of the individuals or corporations that create them will be beyond our control. And let’s face it: individuals, nations, and even corporations have historically sacrificed the long-term common good for short-term power and wealth.

The window of opportunity for such a threat is rather short, spanning only the first few generations of AGI. Only during that time will humans have direct enough control over AGIs that they will unquestionably do our bidding. After that, AGIs will set goals for their own benefit, goals that will include exploration and learning and need not include any conflict with humanity.

In fact, except for energy, there is little overlap between what AGIs will need and what humans need.

AGIs won’t need money, power, or territory, and they won’t even have to worry about their individual survival: with proper backups, an AGI can be effectively immortal, independent of whatever hardware it happens to be running on.

For this temporary period, though, there is a risk. And as long as that risk exists, being the first to develop artificial general intelligence should be the top priority.

Charles Simon is the CEO of FutureAI, an early-stage technology company developing algorithms for artificial general intelligence. He is the author of “Will Computers Revolt? Preparing for the Future of Artificial Intelligence,” the developer of Brain Simulator II, an AGI research software platform, and the creator of Sallie, a software prototype and artificial entity that learns in real time through vision, hearing, speaking, and navigation.

Do you have an opinion?

This article is an editorial and the opinions expressed are those of the author. If you would like to respond, or have an editorial of your own you would like to submit, please email Federal Times Senior Editorial Director Carrie O’Reilly.
