Will artificial intelligence increasingly take over aspects of life from humans?

 

Artificial intelligence

An expert has revealed that artificial intelligence will "increasingly take over" many aspects of life, but that human imperfection may save us from being replaced entirely. Most people around the world have already come across some form of artificial intelligence.

This technology has become ubiquitous in everyday life, from Apple's Siri to your favorite chatbot on the web. However, such widespread use could let the technology seep into every aspect of our daily lives, experts say.

Roger Grimes, a data-driven defense evangelist at cybersecurity firm KnowBe4, spoke to The U.S. Sun about the future of artificial intelligence (AI). "The technology will come no matter who tries to stop it," Grimes said, adding that AI will become more advanced over time and will take over more and more of what we do.

However, there are parts of the human psyche that AI will never be able to fully replicate, said Grimes and several other experts. Part of what makes us human is the ability to make mistakes, and sometimes those mistakes lead to discoveries and inventions we were never looking for in the first place.

"Only humans look at the sky and see clouds that look like animals in a way that somehow helps us solve some problems that we didn't even think about at that time," he said. He is currently learning it.

As it stands, AI does not have human-like intelligence, nor can it predict the future or plan for the future; what exists today is known as "narrow AI," Grimes said.

He added, "But, yes, AI will continue to replace the mundane things in our lives. And it may one day come close to replacing what we think of as mostly human today."

What is the worst thing that can happen?

Grimes noted that the worst-case scenario is that humans destroy themselves through the technology they acquire or through a related accident. Assuming that does not happen, the technology is likely to improve all of our lives, depending on how it is used; technology already underpins the lives of all human beings, rich and poor.

"The poorest of us live in a world that only kings could have imagined just a few centuries ago," he added. Thanks to technology, we have pioneering medicines and vaccines that have saved hundreds of millions of lives, and vast amounts of information are at our fingertips via smartphones and the Internet.

Artificial intelligence

Artificial intelligence refers to the behaviors and characteristics of computer programs that allow them to simulate human mental capabilities and patterns of work. Among the most important of these characteristics are the abilities to learn, to reason, and to react to situations that were not explicitly programmed into the machine. The term remains controversial, however, because there is no precise definition of intelligence.

The field is founded on the assumption that the faculty of intelligence can be described so precisely that a machine can simulate it. This raises a philosophical debate about the nature of the human mind and the limits of scientific method, issues that have been explored in mythological, fictional and philosophical tales and discussions since ancient times. There is also debate about the nature and types of intelligence a person possesses, and about how to simulate them with a machine. Artificial intelligence was and still is a cause for great optimism; it has suffered severe setbacks throughout its history, and today it has become an essential part of the technology industry, carrying the burden of the most difficult problems in modern computer science.

AI research is so highly specialized and technical that some critics complain of the field's "fragmentation". Subfields of artificial intelligence have formed around specific problems, the application of particular tools, and old theoretical differences of opinion. The main problems of artificial intelligence include capabilities such as logical reasoning, knowledge representation, planning, learning, communication, perception, and the ability to move and manipulate objects. General intelligence (or "strong artificial intelligence") remains a long-term goal for some research in this field.

History of artificial intelligence research

In the middle of the 20th century, a few scientists began exploring a new approach to building intelligent machines, based on recent discoveries in neuroscience, a new mathematical theory of information, the development of cybernetics, and, above all, the invention of the digital computer, a machine that could simulate the human process of computational thought.

The modern field of artificial intelligence research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees became the leaders of artificial intelligence research for decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded artificial intelligence laboratories at MIT, Carnegie Mellon University (CMU) and Stanford. They and their students wrote programs that amazed most people: computers were solving algebra problems, proving logical theorems and speaking English. By the mid-1960s, this research was being funded generously by the US Department of Defense. These researchers made the following predictions:

In 1965, H. A. Simon said: "In 20 years, machines will be able to do anything a human can do."

In 1967, Marvin Minsky said: "Within a generation ... the problem of creating 'artificial intelligence' will be largely solved."

But they failed to appreciate the difficulty of some of the problems they faced. In 1974, in response to the criticism of Sir James Lighthill in Britain and constant pressure from Congress to fund more productive projects, the US and British governments cut their funding for all undirected exploratory research in artificial intelligence. This was the first setback in artificial intelligence research.

In the early 1980s, AI research experienced a resurgence with the commercial success of "expert systems", artificial intelligence programs that simulate the knowledge and analytical skills of one or more human experts. By 1985, the market for AI had grown to more than $1 billion, and governments had started funding the field again. A few years later, beginning with the collapse of the Lisp machine market in 1987, AI research entered another, longer setback.

In the 1990s and early 2000s, AI made even bigger inroads, albeit somewhat behind the scenes, coming to be used in logistics, data mining, medical diagnosis and many other areas throughout the technology industry. This success was due to several factors: the great power of today's computers (see Moore's law), an increased focus on solving specific sub-problems, new relationships between the field of artificial intelligence and other fields working on similar problems, and, above all, researchers' commitment to rigorous mathematical methods and strict scientific standards.

In the 21st century, AI research has become highly specialized and technical, divided into independent subfields so deeply that they have little to do with one another. Divisions of the field have grown up around specific institutions, researchers working on specific problems, long-standing differences of opinion over how AI should work, and the application of widely different tools.

Artificial intelligence problems

The problem of simulating (or making) intelligence is divided into a number of specific sub-problems. These consist of specific features or capabilities that researchers would like an intelligent system to embody. The features listed below have received the most attention.

Reasoning, logical thinking, and the ability to solve problems

Early researchers in artificial intelligence developed algorithms that mimic the step-by-step reasoning humans use when solving puzzles, playing games such as backgammon, or making logical deductions.
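These early programs typically explored a space of possible states one move at a time. As a rough illustration only, the following Python sketch (the water-jug puzzle, the state encoding and the function name are assumptions chosen for this example, not taken from any particular historical system) uses breadth-first search to find a sequence of moves that leaves one jug holding a target amount:

    from collections import deque

    def solve_water_jugs(cap_a=4, cap_b=3, goal=2):
        # Breadth-first search over (jug_a, jug_b) states until one jug holds `goal` liters.
        start = (0, 0)
        queue = deque([(start, [start])])
        visited = {start}
        while queue:
            (a, b), path = queue.popleft()
            if goal in (a, b):
                return path  # shortest sequence of states reaching the goal
            pour_ab = min(a, cap_b - b)   # amount that fits when pouring a into b
            pour_ba = min(b, cap_a - a)   # amount that fits when pouring b into a
            successors = [
                (cap_a, b), (a, cap_b),          # fill either jug
                (0, b), (a, 0),                  # empty either jug
                (a - pour_ab, b + pour_ab),      # pour a into b
                (a + pour_ba, b - pour_ba),      # pour b into a
            ]
            for state in successors:
                if state not in visited:
                    visited.add(state)
                    queue.append((state, path + [state]))
        return None  # no sequence of moves reaches the goal

    print(solve_water_jugs())  # prints one shortest path of states ending with a jug holding 2

The point of the sketch is only the shape of the computation: enumerate the legal moves from each state and keep searching until a goal state appears.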

In the 1980s and 1990s, AI research produced highly successful methods for dealing with uncertain or incomplete information, drawing on concepts from probability and economics.
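To see how probability lets a program reason under uncertainty, here is a minimal Python example that applies Bayes' rule to an invented diagnostic scenario (the prior, sensitivity and false-positive rate are assumed numbers chosen purely for illustration):

    def posterior(prior, sensitivity, false_positive_rate):
        # Bayes' rule: P(condition | positive test result).
        p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
        return (sensitivity * prior) / p_positive

    # Assumed figures: 1% prior, 90% sensitivity, 5% false-positive rate.
    print(round(posterior(0.01, 0.90, 0.05), 3))  # about 0.154

Even a fairly accurate test leaves a rare condition unlikely, and weighing evidence in exactly this way is the kind of judgment under incomplete information that these methods formalize.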

For difficult problems, most of these algorithms require massive computational resources, leading to a "combinatorial explosion": the amount of memory or time required becomes astronomical once the problem exceeds a certain size. The search for more efficient problem-solving algorithms is therefore a top priority of AI research.
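The explosion is easy to quantify: if each step offers b legal moves and a solution lies d moves deep, a naive search must consider on the order of b to the power d states. The short Python sketch below (the branching factor of 10 is an arbitrary assumption for illustration) shows how quickly that count outgrows any real machine:

    branching_factor = 10  # assumed: ten legal moves from every state
    for depth in (5, 10, 20, 40):
        states = branching_factor ** depth
        print(f"depth {depth}: about {states:.1e} states to examine")

At depth 40 the count reaches roughly 10 to the 40th power, far beyond what any computer could store or visit.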

Humans solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI researchers were able to simulate. AI research has made some progress in imitating this "sub-symbolic" kind of problem-solving: embodied approaches emphasize the importance of sensorimotor skills for higher-order thinking, while research on neural networks attempts to simulate the structures inside human and animal brains that give rise to this skill.
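As a concrete, if deliberately tiny, illustration of the neural-network approach, the following Python sketch trains a single artificial neuron (a perceptron) to reproduce the logical AND function; the training data, learning rate and epoch count are assumptions chosen only to make the example self-contained:

    def train_perceptron(samples, epochs=20, learning_rate=0.1):
        # One neuron with a step activation; weights are nudged whenever it misclassifies.
        weights, bias = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
                error = target - output
                weights[0] += learning_rate * error * x1
                weights[1] += learning_rate * error * x2
                bias += learning_rate * error
        return weights, bias

    # Logical AND: the neuron should fire only when both inputs are 1.
    and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train_perceptron(and_samples)
    for (x1, x2), _ in and_samples:
        print(x1, x2, "->", 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0)

Real brains and modern networks are vastly more complex, but the same idea of adjusting connection strengths from experience is what the research described above tries to scale up.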

Mohamed Al-Rawi has been a professional journalist since 2011. He is a media graduate of Kuwait University, a technology expert, a media consultant, a member of the International Organization of Journalists, and a member of the fact-checking team at Meta. He writes about entertainment, art, science and technology, and believes that the pen can change everything.
