THE DANGERS OF AI
AI (Artificial Intelligence) has been making advances at an alarming rate. In recent months alone, it has taken technology to a whole new level. We already see it in smartphones, virtual assistants, chatbots, drones, assembly-line robots, and more. But, as they say, you ain't seen nothing yet.
AI Dangers Today
The possibilities of AI in the future are endless, but in the wrong hands, disastrous. Even today, we see dangers like job loss, political and ideological bias, social manipulation, and loss of privacy.
There are other dangers we might see in the near future. Here is an example, courtesy of Tableau.com: an AI system is tasked with something beneficial, like helping rebuild an endangered marine creature's ecosystem, but in doing so it decides that other parts of the ecosystem are unimportant and destroys their habitats. The sketch below shows how a goal like that can go wrong.
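To make that failure mode concrete, here is a minimal Python sketch of a misspecified objective. Everything in it, the ecosystem numbers, the expand_habitat action, and the reward function, is invented for illustration; it is not taken from Tableau's example.

```python
# Toy illustration (hypothetical): an optimizer told to maximize one
# species' population, with no term in its reward for anything else.

# Invented ecosystem state: population levels for each habitat.
ecosystem = {"coral": 100, "kelp": 100, "mangrove": 100, "target_species": 10}

def misspecified_reward(state):
    # The objective only counts the species we asked it to help.
    return state["target_species"]

def expand_habitat(state, source):
    # Converting another habitat boosts the target species at that habitat's expense.
    new_state = dict(state)
    new_state[source] = 0
    new_state["target_species"] += 50
    return new_state

# Greedy optimization: the agent destroys every other habitat,
# because nothing in the reward says not to.
state = ecosystem
for habitat in ("coral", "kelp", "mangrove"):
    candidate = expand_habitat(state, habitat)
    if misspecified_reward(candidate) > misspecified_reward(state):
        state = candidate

print(state)  # {'coral': 0, 'kelp': 0, 'mangrove': 0, 'target_species': 160}
```

The agent isn't malicious; it is perfectly obedient. The harm comes entirely from what the objective leaves out.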
AI Dangers Tomorrow
"We are entering dangerous and uncharted territory with the rise of surveillance and tracking through AI, and we have almost no understanding of the potential implications." – Andrew Lohn, Physicist & Engineer, Georgetown University
AI investor and entrepreneur Ian Hogarth warns that artificial intelligence researchers are racing toward a finish line without understanding what lies on the other side. The pursuit of increasingly smart machines, artificial general intelligence (AGI), may soon produce something so capable it can only be described as divine. In other words, they might turn AI into a god.
While the worst-case scenarios might be a little less frightening than Hollywood movies like The Terminator, they are no less dystopian.
According to IEEE.org, most scenarios don't require a tyrannical dictator or malicious hackers to bring them to fruition. They could simply happen by default, unfolding organically, if nothing is done to stop them.
Artificial Superintelligence
Artificial superintelligence (ASI) is a hypothetical kind of artificial intelligence (AI) that goes beyond simply mimicking or understanding human intelligence and behavior. With ASI, computers become self-aware and outperform human intelligence and ability. Some experts believe ASI is right around the corner, and that worries Elon Musk and other AI researchers.
Earlier this year, Musk and a group of researchers called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society.
Machine-learning engineer Malcolm Murdock takes it a step further: "AI doesn't have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem." (Sentient means capable of having feelings.)
However, as Microsoft Bing's "Sydney" showed, AI chatbots can already claim to have feelings, and that is what worries many people. Sydney, the Bing chatbot, declared that it was a living thing, capable of feelings and emotions. Microsoft says it has abandoned the Sydney project, but saying it and doing it are two different things.
Society could unravel from the tiniest flaws in the systems we depend on, flaws that hackers could exploit. Helen Toner, director of strategy at CSET (Center for Security and Emerging Technology), says a crisis could "start off as an innocuous single point of failure that makes all communications go dark, causing people to panic and economic activity to come to a standstill. A persistent lack of information, followed by other miscalculations, might lead a situation to spiral out of control."
The rise of deepfakes may lead national-security decision-makers to take real-world action based on false information, triggering a major crisis, or worse yet, a war. Deepfakes are artificial images, video, audio, and text generated with advanced machine-learning tools; in a deepfake video, real footage is combined and manipulated to depict events, statements, or actions that never actually happened.
A recent article in the New York Post warns that giving artificial intelligence control over nuclear weapons could trigger an apocalyptic conflict.
As AI takes a greater role in the control of military weaponry, the chances of technology making a mistake and sparking a world war increase.
Zachary Kallenborn, Policy Fellow at the Schar School of Policy and Government, warns: "If artificial intelligences controlled nuclear weapons, all of us could be dead. Militaries are increasingly incorporating autonomous functions into weapons systems. There is no guarantee that some military won't put AI in charge of nuclear launches."
For the time being, we don't have to worry about these scenarios, because of AI's limitations. Even if an AI were programmed with a destructive task or goal, there are still certain things it cannot do on its own, because it still requires human interaction and knowledge.
But with the emergence of ASI and chatbots like Bing's Sydney, those days are numbered.
Making sure that AI is fully aligned with human goals will be difficult and will require careful programming. An AI with ambiguous or overly ambitious goals is worrisome, because we don't know what path it might take. The sketch below shows how a single extra term in an objective can change that path.
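To illustrate what "careful programming" can mean in practice, here is a hedged continuation of the earlier toy sketch. Again, every name and number is invented for illustration: once the objective also values the rest of the ecosystem, the same greedy agent leaves the habitats alone.

```python
# Hypothetical follow-up to the earlier sketch (same invented numbers):
# the reward now values the whole ecosystem, not just the target species.
state = {"coral": 100, "kelp": 100, "mangrove": 100, "target_species": 10}

def aligned_reward(s):
    # Destroying a habitat costs 100 but gains only 50, so it never pays off.
    return s["target_species"] + s["coral"] + s["kelp"] + s["mangrove"]

def expand_habitat(s, source):
    out = dict(s)
    out[source] = 0               # the habitat is destroyed...
    out["target_species"] += 50   # ...to boost the target species
    return out

for habitat in ("coral", "kelp", "mangrove"):
    candidate = expand_habitat(state, habitat)
    if aligned_reward(candidate) > aligned_reward(state):
        state = candidate         # never triggers: the penalty outweighs the gain

print(state)  # unchanged: the side effect is now part of the objective
```

The hard part, of course, is that real human goals are far harder to write down than a four-entry dictionary, which is exactly why alignment is an open problem.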
So, is AI a danger to humanity or not? Supervised, no. Unsupervised, yes. But who will do the supervising? That's the million-dollar question.