
Elon Musk issues a stark warning about artificial intelligence

CNBC  /  August 11, 2017

Tesla CEO Elon Musk fired off a new and ominous warning on Friday about artificial intelligence, suggesting the emerging technology poses an even greater risk to the world than a nuclear conflagration with North Korea.

Musk—a fierce and longtime critic of A.I. who once likened it to "summoning the demon" in a horror movie—said in a Twitter post that people should be more concerned about the rise of the machines than they are.

Reacting to the news that autonomous tech had bested competitive players in an electronic sports competition, Musk posted what appeared to be a photo of a poster bearing the chilling words "In the end, the machines will win."

Musk, who is spearheading commercial space travel with his venture SpaceX, is also the founder of OpenAI, a nonprofit that promotes the "safe" development of AI.

His stance puts him at odds with much of the tech industry, but echoes remarks of prominent voices like Stephen Hawking—who has also issued dire warnings about machine learning.

If you're not concerned about AI safety, you should be. Vastly more risk than North Korea. pic.twitter.com/2z0tiid0lc

— Elon Musk (@elonmusk) August 12, 2017

Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.

— Elon Musk (@elonmusk) August 12, 2017

-----------------------------------------------------------------------------------

Stephen Hawking warns artificial intelligence could end mankind

BBC  /  December 2, 2014

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.

Prof Hawking fears the consequences of creating something that can match or surpass humans.

"It would take off on its own, and re-design itself at an ever increasing rate," he said.

"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

https://www.bigmacktrucks.com/topic/50752-artificial-intelligence/

Ks, did you hear on the news a week or so ago about the Facebook chat bots that started their own language and were talking to each other, and had to be shut down? The techs couldn't figure out what they were saying to each other.

https://www.google.com/amp/www.cbsnews.com/amp/news/facebook-shuts-down-chatbots-bob-alice-secret-language-artificial-intelligence/


The problems we face today exist because the people who work for a living are outnumbered by the people who vote for a living.

The government can only "give" someone what they first take from another.

  • 2 weeks later...

Ksb, until this post I never gave one single thought to A.I., possibly proving my simple nature and naivete! Upon reflection, does humanity need machine-based intelligence, which in my opinion would quickly surpass even the intelligence of the smartest human (Einstein, da Vinci)? As someone mentioned, the machines lack conscience, morality, or ethics, and aren't hindered by the need to spend years evolving into a superior being! Look at what humankind has accomplished just using the thought processes of our smartest scientists, engineers and inventors! Having said that, without a conscience it's pretty obvious that the possessors of the AI would eliminate the sick, elderly, disabled, and special-needs people who cost money and time to care for and create inefficiency, like Hitler and his "master race." Let's go back to trucks!

  • 6 months later...

See now I have the best device and it's portable...

[image attachment: fluxcapacitor.jpg]


"OPPORTUNITY IS MISSED BY MOST PEOPLE BECAUSE IT IS DRESSED IN OVERALLS AND LOOKS LIKE WORK" - Thomas Edison

“Life’s journey is not to arrive at the grave safely, in a well preserved body, but rather to skid in sideways, totally worn out, shouting ‘Holy shit, what a ride!’”

P.T.CHESHIRE

1 hour ago, 41chevy said:

See now I have the best device and it's portable...

[image attachment: fluxcapacitor.jpg]

 

LOL...Of course you have the better toy, again. If you were here I'd soak my cupcake in my milk glass and throw it on you!

Speed boat, Calico (sub-machine gun looking) pistol, EMP proof genset, Zombie Apoc Mack, 12 volt signal jammer thingy with intimidating lights....it's a James Bond complex or something.    

So what does it actually do....? Is it one of those anti-GPS devices they want to ban?

1 hour ago, Mack Technician said:

 

LOL...Of course you have the better toy, again. If you were here I'd soak my cupcake in my milk glass and throw it on you!

Speed boat, Calico (sub-machine gun looking) pistol, EMP proof genset, Zombie Apoc Mack, 12 volt signal jammer thingy with intimidating lights....it's a James Bond complex or something.    

So what does it actually do....? Is it one of those anti-GPS devices they want to ban?

I'd go back in time and eat your cupcake ;)

Well, it does two things: plug it into your cigarette lighter and you get USB ports to charge your phones, and the flashing LEDs annoy my daughter-in-law.



  • 3 months later...

Henry Kissinger pens ominous warning on dangers of artificial intelligence

RT  /  July 9, 2018

Former US Secretary of State Henry Kissinger has issued a stark warning to humanity: advances in artificial intelligence could lead to a world which humans will no longer be able to understand — and we should start preparing now.

What if machines learn to communicate with each other? What if they begin to establish their own objectives? What if they become so intelligent that they are making decisions beyond the capacity of the human mind?

Those are some of the questions the 95-year-old Kissinger poses in a piece published by the Atlantic under the apocalyptic headline: ‘How The Enlightenment Ends.’

https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/

Kissinger’s interest in artificial intelligence began when he learned about a computer program that had become an expert at Go — a game more complicated than chess. The machine learned to master the game by training itself through practice; it learned from its mistakes, redefined its algorithms as it went along — and became the literal definition of ‘practice makes perfect.’
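Stripped of the neural networks and tree search that the real Go program uses, the "training itself through practice" idea can be sketched in a few lines. The toy below is a hypothetical illustration, not the Go program's actual algorithm: it teaches itself the much simpler game of Nim (take 1, 2 or 3 stones; whoever takes the last stone wins) purely by playing against itself, nudging its estimate of each move up after wins and down after losses:

```python
import random

def train_self_play(heap=10, episodes=30000, alpha=0.1, eps=0.2):
    """Learn Nim by self-play: no strategy is programmed in."""
    random.seed(0)  # fixed seed so runs are reproducible
    Q = {}  # Q[(stones_left, stones_taken)] = estimated value for the mover
    for _ in range(episodes):
        stones, history = heap, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if random.random() < eps:   # explore: try a random move
                take = random.choice(moves)
            else:                       # exploit: play the best move so far
                take = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, take))
            stones -= take
        # Whoever took the last stone won. Walking the game backwards,
        # each earlier move belongs to the other player, so the reward
        # alternates sign; this is the "learning from mistakes" step.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

Q = train_self_play()
# With enough games the table rediscovers the classic Nim rule:
# always leave your opponent a multiple of 4 stones.
print(best_move(Q, 5))  # should take 1, leaving 4
print(best_move(Q, 7))  # should take 3, leaving 4
```

The same loop, explore a little, exploit what you know, and score every move by how the game ended, is the skeleton under far bigger systems; what made the Go result remarkable is that it scaled this idea to a game with more positions than atoms in the universe.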

Into the unknown

We are, Kissinger warns, in the midst of a “sweeping technical revolution whose consequences we have failed to fully reckon with and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.”

Kissinger uses the example of a self-driving car. Driving a car requires judgements in impossible-to-predict circumstances. What would happen, he asks, if the car found itself having to decide between killing a grandparent or killing a child? Who would it choose, and why?

Artificial intelligence goes “far beyond” the kind of automation we are used to, he says, because AI has the ability to “establish its own objectives,” which makes it “inherently unstable.” In other words, through its processes, AI “develops an ability previously thought to be reserved for human beings.”

Unintended consequences

The typical science-fiction narrative is that robots will develop to the point where they turn on their creators and threaten all of humanity — but, according to Kissinger, while the dangers of AI may be great, the reality of the threat may be a little more benign. It is more likely, he suggests, that the danger will come from AI simply misinterpreting human instructions “due to its inherent lack of context.”

One recent example is the case of the AI chatbot called Tay. Instructed to generate friendly conversation in the language patterns of a 19-year-old girl, the machine ended up becoming racist, sexist and giving inflammatory responses. The risk that AI won’t work exactly according to human expectations could, Kissinger says, “cascade into catastrophic departures” from intended outcomes.

Too clever for humans

The second danger is that AI will simply become too clever for its own good — and ours. In the game Go, the computer was able to make strategically unprecedented moves that humans had not yet conceived. “Are these moves beyond the capacity of the human brain?” Kissinger asks. “Or could humans learn them now that they have been demonstrated by a new master?”

The fact is that AI learns much faster than humans. Another recent example is the computer program AlphaZero, which learned to play chess in a style never before seen in chess history. Given only the basic rules of the game, it reached in just a few hours a level of skill that took humans 1,500 years to attain.

This exceptionally fast learning process means AI will also make more mistakes “faster and of greater magnitude than humans do.” Kissinger notes that AI researchers often suggest that those mistakes can be tempered by including programming for “ethical” and “reasonable” outcomes — but what is ethical and reasonable? Those are things that humans are still fighting over how to define.

No way to explain

What happens if AI reaches its intended goals but can’t explain its rationale? “Will AI’s decision-making abilities surpass the explanatory powers of human language and reason?” Kissinger asks.

He argues that the effects of such a situation on human consciousness would be profound. In fact, he believes it is the most important question about the new world we are facing.

“What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?”

Legalities and opportunities

Outside of philosophical concerns, Kissinger also outlines some legal ones: in this vastly different world, who will be responsible for the actions of AI? How will liability be determined for their mistakes? Can a legal system designed by humans even keep pace with a world run by artificial intelligence capable of outthinking them?

But it’s not all doom and gloom. Kissinger admits that AI can bring “extraordinary benefits” to medical science, provision of clean energy and environmental issues, among other areas.

Kissinger acknowledges that scientists are more concerned with pushing the limits of discovery than comprehending them or pondering their philosophical ramifications. Governments, too, are more concerned with how AI can be used — in security and intelligence, for example — than examining its results on the human condition.

In his final pitch, the senior diplomat implores the US government to make artificial intelligence a major national focus, “above all, from the point of view of relating AI to humanistic traditions.”

He argues that a presidential commission of eminent thinkers in the field should be established to help develop a “national vision” for the future. “If we do not start this effort soon,” Kissinger writes, “before long we shall discover that we started too late.”

  • 1 month later...

“Lethal Autonomous Weapons Systems” with artificial intelligence. Just what the world needs. You can be sure this technology is some ten years ahead of what we're told.

Someday in a remote enclave, parents will tell their children, "Long, long ago, mankind lived throughout the world..................".

--------------------------------------------------------------------------------------

Associated Press  /  September 3, 2018

A key opponent of high-tech, autonomous weapons known as “killer robots” is blaming countries like the U.S. and Russia for blocking consensus at a U.N.-backed conference, where most countries wanted to ensure that humans stay at the controls of lethal machines.

Coordinator Mary Wareham of the Campaign to Stop Killer Robots spoke Monday after experts from dozens of countries agreed before dawn Saturday at the U.N. in Geneva on 10 “possible guiding principles” about such “Lethal Autonomous Weapons Systems.”

Point 2 said: “Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines.”

Wareham said such language wasn’t binding, adding that “it’s time to start laying down some rules now.”

Members of the LAWS conference will meet again in November.

  • 2 months later...

Until this post I kinda figured Musk was about one snort short of rehab because of some of his crazy posts. But I kind of feel the same way now. Bridges fall down because the engineers let the computers do the work. And what about banking? The computers there could wipe out the world's wealth in a nanosecond. And what about all the security breaches at different information companies? Is it really the Russians, or is it just a couple of computers wanting information?

You're hard-pressed to find advocacy. AI is almost universally disliked by the general population. Folks know history on this crud.....Like the H-bomb, it just sorta shows up on the scene, serves to end a war, then becomes an (arms) “race”, drains billions and eventually becomes a life-long, multi-generational, white elephant gift that needs to be managed with scrutiny because it has endless destructive potential. Coincidentally, my niece is in Wyoming right now sitting on a missile silo chasing rabbits away from the motion sensors. Our inventions tend to become our slavemasters.

On 11/10/2018 at 2:00 PM, Mack Technician said:

You're hard-pressed to find advocacy. AI is almost universally disliked by the general population. Folks know history on this crud.....Like the H-bomb, it just sorta shows up on the scene, serves to end a war, then becomes an (arms) “race”, drains billions and eventually becomes a life-long, multi-generational, white elephant gift that needs to be managed with scrutiny because it has endless destructive potential. Coincidentally, my niece is in Wyoming right now sitting on a missile silo chasing rabbits away from the motion sensors. Our inventions tend to become our slavemasters.

https://www.newsweek.com/sophia-saudi-robot-baby-future-family-725254

 

http://www.bbc.co.uk/newsbeat/article/42122742/sophia-the-robot-wants-a-baby-and-says-family-is-really-important

 

 


  • 2 weeks later...

The Truth About Killer Robots: the year's most terrifying documentary

Zach Vasquez, The Guardian  /  November 26, 2018

In a cautionary film, director Maxim Pozdorovkin lays out the many ways that automation could affect us in the long term, from labor to sex to psychology.

When it comes to the dangers posed to us by automatons, film-maker Maxim Pozdorovkin wants us to start thinking beyond what Hollywood has warned us about.

“This idea of a single, malevolent AI being that can harm us, the Terminator trope … I think it’s created a tremendous blind spot,” he said to the Guardian. “[It gets us] thinking about something that we’re heading towards in the future, something that will one day hurt us. If you look at the effects of automation broadly, globally, right now, it’s much more pervasive. The things happening – de-skilling, the loss of human dignity associated with traditional labor – they will have a devastating effect much sooner than that long-distance threat of unchecked AI.”

That isn’t to say that robots can’t also just reach out and crush us. In his new documentary, The Truth About Killer Robots, Pozdorovkin traces all manner of dangers – economic, psychological, moral and, yes, mortal – posed to our species by automation and robotics. At the center of his film lies the question: “when a robot kills a human, who takes the blame?”

Pozdorovkin had long sought to make a film on automation, but he had a difficult time figuring out a way to approach the subject given its scope, as well as the many misconceptions surrounding it. It wasn’t until he heard about a case in Germany, where a manipulator arm crushed a line worker at a Volkswagen plant to death, that he knew he had his way in.

Using science-fiction author Isaac Asimov’s First Law of Robotics – “A robot may not injure a human being or, through inaction, allow a human being to come to harm” – as a jumping-off point, his documentary covers a sampling of deadly incidents involving automated machinery, including a couple of driverless car accidents that resulted in fatalities, as well as the first intentionally lethal use of a robot by American law enforcement.

In describing how his film came to fruition, Pozdorovkin recalls, “I went [to Germany] to investigate, to talk to the workers. Most of them were forbidden from talking about the accident. But a lot of them talked about the perils of automation, the way that their work environment was made worse as the result of robots. I’m using the tropes of science fiction and true crime to make a film that investigates some of the philosophical and economic problems that automation brings with it.”

The film distinguishes itself from other science documentaries thanks to its holistic approach: rather than speaking exclusively to the people behind the tech – CEOs, programmers, engineers – Pozdorovkin also interviewed members of the global labor pool – truck drivers, factory workers, gas station attendants, SWAT team snipers – those whose lives and livelihoods have seen the most immediate effects of automation’s disruption.

Given the dire nature of those effects, such as the hollowing-out of entire labor sectors and the rise of global inequality, you would think automation would be public enemy number one among the middle and lower classes. Yet, as a political issue, it remains on the margins. Pozdorovkin believes it’s because “we’re still feeling it in qualitative ways.” He continues: “A lot of things that you see, like the rise in suicides amongst older white men in America, has to do with the way labor has been stripped of dignity and existential value.”

Meanwhile, “anti-immigrant and anti-globalization rhetoric covers up a lot of the structural damage done by automation. It goes back to the qualitative/quantitative distinction. The economy is elastic, so way before massive job loss will be a period of broadly sucking out the skills from the labor that’s involved.”

Our fears over the rise of machines therefore tend to take a sci-fi, post-apocalyptic bent, a la The Terminator. Those fears are exacerbated by examples where Asimov’s First Law is blatantly violated, such as when the Dallas police strapped C4 on to a robot (a bomb-detecting robot, ironically), sent it into the corner of the library where they had mass shooter Micah Johnson cornered, and triggered it, effectively killing Johnson. In the aftermath, many observers wondered if we’d entered a new stage of weaponized robotics for domestic use.

Pozdorovkin doesn’t think that there was anything that problematic about the use of the robot in this particular case. “Ultimately, had the sniper, who we interview in the film, shot [the suspect], as he had done many times before in other cases, there wouldn’t be any problem. [But] sending a robot to go in and kill someone feels uncomfortable. You can’t quite pinpoint it, but it touches into some kind of fundamental, uncanny, discomfort.”

That sense of the uncanny is not limited to lethal examples. One of the most memorable segments in his documentary centers on Zheng Jiajia, a Chinese engineer who married a silicone sex robot that he designed himself.

The rise in robotic pleasure dolls was something that Pozdorovkin knew he had to cover, but he wanted to avoid a sensationalized approach. “I’ve watched and read hundreds of reports, articles, etc, about sex robots and silicone dolls. And every single one was predicated on the question of whether the sex was any good. This sounds like the most interesting thing, but it’s by far the least interesting. The most interesting questions are ‘what are the social factors that will bring this into the mainstream?’ The obvious answer is demographics. It’s just a fact that certain people will not have mates. This is exacerbated in China because of the one-child policy, but it will be true around the world as inequality skyrockets.”

If the results of all this new uncanniness were as simple as law enforcement using robots to supplement legally sanctioned police manoeuvres or giving lonely people a new form of emotional and physical reprieve, there wouldn’t be that much to fear. But Pozdorovkin worries about the effect it will have on our individual and collective empathetic abilities. That, more than anything, may be what’s truly at stake.

Pozdorovkin lays out a thought experiment: “Picture yourself driving on the highway. You decide to switch lanes, and in your sideview mirror you see a car going really fast. You don’t veer over and cut off that person, because you project fallibility on to them. They could be distracted, they might have a death in the family, they could just be reckless. You’re just going to let them pass and then go. But when you see that there’s a robot next to you, you will drive like the biggest asshole, because the machine is programmed not to bump into you.

“And the kicker is this: once there’s enough of these entities which we treat without any ethical regard, without projecting possible fallibility onto them, the way we interact with them will spill over and we will be ruder, more aggressive, more inconsiderate to humans. This argument applies to sex dolls, it applies to a lot of things that we see.”

Have we already crossed the point of no return? Is the current political climate throughout the west the result of this degradation of empathy, stemming perhaps from the way we communicate with each other online, where we can automate personal exchanges via a retweet, like, or eye-roll emoji – to say nothing of the way we spread vitriol?

“I think that a lot of the sheltering and toxicity that you see online is ultimately part and parcel with the shielding mechanism that the anonymity of social media permits,” he says.

Ultimately, it’s just one of the ways in which the takeover of machines is well under way. Even as we continue to reel from the pace at which it is happening, those in charge of, or with access to, the technology – the corporate owners, the military, the police – will not hesitate to use it. Nor will they concern themselves with “the philosophical consequences and complications of breaking Asimov’s Law”.

And what about his own field: the movies? Can the people in front of and behind the camera expect to lose their jobs to robots, the same way those in manufacturing and the service industry have? Pozdorovkin thinks it entirely plausible.

“Artists have become shameless in promoting our absolute immunity from this. But if you look at the economic data, the exact same thing that happened to all of these other industries is happening to the arts.”

Rather than attempt to fight against this new paradigm, The Truth About Killer Robots embraces the inevitability, using an android robot (originally designed to read the news on Japanese television) and automated narrator as its face and voice.

“It’s cheaper, easier, more flexible. But most importantly, it’s a way for us to be honest about the process. The worst thing that we could have done was hire a James Earl Jones sound-alike to add human gravitas to the story.”

It’s a fitting choice, considering that the medium of film – like the broader story of this moment in history – no longer belongs first and foremost to humanity.

  • The Truth About Killer Robots premieres in the US on HBO on 26 November and in the UK on Sky Atlantic on 2 December
