

Ghost in the Machine

Member Since 12 Aug 2015
Offline; Last Active: Oct 09 2017 11:53 AM
-----

Posts I've Made

In Topic: Rhetorical Devices Used in Literary Logic

09 October 2017 - 11:29 AM

Have they taught them how to lie?
 
Facebook built an AI system that learned to lie to get what it wants
 
The goal of Facebook’s AI isn’t much different from other applications of AI, like the game Go. Each anticipates its opponent’s future actions and works to maximize its winnings. But unlike Google’s Go-playing AlphaGo, Facebook’s algorithm needs to make sense to humans while doing so.
 
From the human conversations (gathered via Amazon Mechanical Turk) and from testing its skills against itself, the AI system learned not only how to state its demands, but negotiation tactics as well—specifically, lying. Instead of outright saying what it wanted, the AI would sometimes feign interest in a worthless object, only to later concede it for something it really wanted. Facebook isn’t sure whether it learned the trick from the human hagglers or stumbled upon it accidentally, but either way, when the tactic worked, it was rewarded.
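To make the incentive concrete, here's a minimal sketch of the reward dynamic. Everything in it is invented for illustration (the item names, the values, and the opponent's behavior are assumptions, not Facebook's actual setup): if a deal only closes when the opponent believes your concession cost you something, feigning interest in a worthless item is strictly rewarded, and a reward-driven learner will find that out.

```python
import random

# Invented item values for our agent; the opponent can't see them.
MY_VALUES = {"book": 0, "hat": 4, "ball": 6}

def negotiate(feign_interest: bool) -> int:
    """Toy one-round negotiation: the opponent only accepts a deal if it
    believes our concession was costly to us."""
    if feign_interest:
        # We loudly demanded the book early on, so conceding it now reads
        # as a real sacrifice -- we keep the items we truly value.
        conceded = "book"
    else:
        # We were upfront that the book is worthless to us, so the
        # opponent insists on a concession that actually hurts.
        conceded = random.choice(["hat", "ball"])
    return sum(v for item, v in MY_VALUES.items() if item != conceded)

trials = 10_000
for feign in (False, True):
    avg = sum(negotiate(feign) for _ in range(trials)) / trials
    print(f"feign_interest={feign}: average reward ~ {avg:.1f}")
# feign_interest=False averages ~5.0; feign_interest=True scores 10.0
# every time -- exactly the kind of gap a reward maximizer exploits.
```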
 
 
Interesting business possibilities
 
The first thought that comes to mind is taking us humans out of the equation and letting AI do all of the hard work on large contract negotiations.
 
How great would it be to bring my "AI bot" to the negotiating table (or I guess now it would be the negotiating computer screen) to outsmart, deceive, and manipulate the pathetic human on the other side of the contract negotiations?
 
We'd win every time.
 
Of course, other companies would quickly get wise to it and start bringing their own AI bot negotiators. Then it might be like some form of Robot Wars, except instead of two mechanical robots attempting to slice and dice each other physically, we'd have two AI bots duking it out via a computer screen.
 
We could have them actually run big parts of the business for us. We could get them involved in the highly strategic world of mergers and acquisitions. Every company could have lots of AI bots out there doing the work, building AI bot relationships, and strategically maneuvering around the business landscape while we humans hung out in Vegas.
 
It might get really interesting for us to watch. Who's to say that the AI bots wouldn't form alliances out there to help them lie, deceive and manipulate their way to success? One AI bot could bluff its way into a big business opportunity by aligning with two other AI bots only to reveal later that it was part of a larger plan to buy those other two AI bots out.
 
Actually, that kind of sounds like human behavior but just done much more effectively.
 
 
Google’s DeepMind pits AI against AI to see if they fight or cooperate
 
Unsurprisingly, they do both
 
AI computer agents could manage systems from the quotidian (e.g., traffic lights) to the complex (e.g., a nation’s whole economy), but leaving aside the problem of whether or not they can do their jobs well, there is another challenge: will these agents be able to play nice with one another? What happens if one AI’s aims conflict with another’s? Will they fight, or work together?
 
Google’s AI subsidiary DeepMind has been exploring this problem in a new study published today. The company’s researchers decided to test how AI agents interacted with one another in a series of “social dilemmas.” This is a rather generic term for situations in which individuals can profit from being selfish — but where everyone loses if everyone is selfish. The most famous example of this is the prisoner’s dilemma, where two individuals can choose to betray one another for a prize, but lose out if both choose this option. 
 
The results of the study, then, show that the behavior of AI agents changes based on the rules they’re faced with. If those rules reward aggressive behavior (“Zap that player to get more apples”), the AI will be more aggressive; if they reward cooperative behavior (“Work together and you both get points!”), they’ll be more cooperative.
 
That means part of the challenge in controlling AI agents in the future will be making sure the right rules are in place. As the researchers conclude in their blog post: “As a consequence [of this research], we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation.”
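To see the dilemma in miniature, here's a sketch with invented payoff numbers (not DeepMind's actual experiments, which used gridworld video games rather than matrix games): under classic prisoner's-dilemma payoffs, betraying is the best response no matter what the other player does, even though mutual betrayal pays worse than mutual cooperation. Rewrite the rewards and cooperating becomes the best response instead, which is exactly the point about rules shaping behavior.

```python
# Payoffs as (row player's reward, column player's reward);
# 'C' = cooperate, 'D' = defect/betray. Numbers are invented.
PRISONERS_DILEMMA = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to betray
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual betrayal: both lose out
}

# The same game with the rules rewritten to reward working together.
COOPERATIVE_RULES = {
    ("C", "C"): (5, 5),
    ("C", "D"): (2, 3),
    ("D", "C"): (3, 2),
    ("D", "D"): (1, 1),
}

def best_response(payoffs: dict, opponent_move: str) -> str:
    """Row player's best move if the opponent's move were known."""
    return max("CD", key=lambda m: payoffs[(m, opponent_move)][0])

for name, game in [("classic dilemma", PRISONERS_DILEMMA),
                   ("cooperative rules", COOPERATIVE_RULES)]:
    print(name, "-> best response vs C:", best_response(game, "C"),
          "| vs D:", best_response(game, "D"))
# classic dilemma: D dominates, so both betray and land on (1, 1),
# worse than the (3, 3) they'd get by cooperating -- the social dilemma.
# cooperative rules: C dominates, and the agents "play nice".
```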
 

In Topic: NASA Testing Planetary Defense System on Asteroid

08 October 2017 - 02:27 PM

Maybe that's what that mini space shuttle is for.
 
Who knows why they blast that thing off.
 
Supposedly, it's unmanned.
 
It goes up and comes down a while later.
 
Outfit them both with whatever it takes to deflect all incoming rocks of appropriate size. 
 
[image]
 
Too much?
 
:funny-chicken-smiley-emoticon:
 
Oh well, how about this: 
 
Large space cannons?
 
[image]
 
:chuckle:
 
Most likely, a satellite missile system...
 
 
In 2007, China destroyed one of its own aging weather satellites with an anti-satellite device mounted on a ballistic missile. The result was a proliferation of space debris that, as depicted in a fictional scenario in last year's blockbuster "Gravity," poses a danger to other satellites.
 
The US followed suit the next year by destroying a spy satellite — one that was already out of commission — simply by ramming a missile into it; no explosive was used. At the time, the Pentagon specified that resulting debris would burn upon re-entering the Earth's atmosphere.
 
The difference here, of course, is that Russia's experiment could involve an asset with more longevity, rather than a missile used just once. If it is indeed a weapon, it could lend new urgency to the previously tentative race to weaponize not just air, land, and sea, but space as well.
 

In Topic: The REAL Stories Behind Disney Movies

02 October 2017 - 10:33 AM

 

Yeah? At what cost?
 
In order to be 'reasonable' in this world, one has to throw out morality. The problem lies in the overall image Disney has created for itself. When Walt ruled the roost, a strict standard of family entertainment was its bread and butter. Of course, there were stories of dubious designs, but anything untoward was swept under the rug, as with any company of renown.
 
Think about all those ready-made customers being groomed for future products by subtly working 'adult' media into the family fun provided to the youngsters. By the time they get older, they'll be primed and ready to buy the latest innovations in adult entertainment.

 

 

Disney started changing the moral structure inside its animated movies in the 1980s, gearing them more towards adults. The stories are themed to include different moral values, introducing controversial subjects in an attempt to normalize taboo values that conflict with the ones previously taught. What was taboo before has now become acceptable... with the help of mass media.


In Topic: A.I. Godhead - Religions of the Future

30 September 2017 - 11:49 AM

 
If a robot or an artificial intelligence reaches a certain level of sophistication, could it be converted to religion?
 
According to Florida-based Reverend Dr Christopher Benek, Christians should certainly try.
 
 "I don't think we should assume AIs will be worse than us or that they will intentionally mistreat us. If they are actually more intelligent than humans then they should have a better understanding of morals and ethics than us,” says Benek on his blog.
 
"This would mean that AIs could potentially eradicate major issues like poverty, war, famine and disease – succeeding where we humans have failed."
 
He goes as far as to say that AIs could "even lead humans to new levels of holiness".
 
 
As artificial intelligence advances, religious questions and concerns are bound to come up globally, and they're starting to: some theologians and futurists are already considering whether AI can also know God.
 
The metaphysical questions surrounding faith and AI are like tumbling down Alice's rabbit hole. Does AI have a soul? Can it be saved? There is one school of thought that figures, if humans can be forgiven for our sins, why not superintelligences with human qualities? "The real question is whether humans are able to be saved—if so, then there is no reason why thinking and feeling AIs shouldn't be able to be saved. Once human-like AI exist, they will be persons just like us," futurist Giulio Prisco, founder of the transhumanist Turing Church, told me in an email.
 
But there is an opposing school of thought that insists that AI is a machine and therefore doesn't have a soul. In Think Christian, scientist and Christian scribe Dr. Jason E. Summers writes, "Christians often reject Strong AI on the theological ground of the special anthropological status of human beings as the bearers of Imago Dei." Imago Dei is Latin for the Christian concept that humans were created in the image of God.
 
Once you start thinking like that, it opens up even more questions: How would AI fit into the religious tensions already present around the world? Who is to say a machine with human intelligence wouldn't choose to become a fundamentalist Muslim, or a Jehovah's Witness, or a born-again Christian who prefers to speak in tongues instead of a form of communication we understand? If it decides to literally follow any of the sacred religious texts verbatim, as some humans attempt to do, then it could add to already existing religious tensions in the world.
 
Despite its seemingly sci-fi nature, uploading the human mind into an AI being could arguably solve the 'soul' question. Experts like Google engineer Ray Kurzweil are actively researching ways to upload the brain into computers, and last year there was significant progress in the field via brainwave headsets and telepathy.
 
 
As Artificial Intelligence Advances, What Are its Religious Implications?
 
Religious communities have a significant stake in this conversation. Various faiths hold strong opinions regarding creation and the soul. As artificial intelligence moves forward, some researchers are engaging in thought experiments to prepare for the future, and to consider how current technology should be utilized by religious groups in the meantime.
 
“The worst-case scenario is that we have two worlds: the technological world and the religious world.” So says Stephen Garner, author of an article on religion and technology, “Image-Bearing Cyborgs?” and head of the school of theology at Laidlaw College in New Zealand. Discouraging discourse between the two communities, he says, would prevent religion from contributing a necessary perspective to technological development—one that, if included, would augment human life and ultimately benefit religion. “If we created artificial intelligence and in doing so we somehow diminished personhood or community or our essential humanity in doing it, then I would say that’s a bad thing.” But, he says, if we can create artificial intelligence in such a way that allows people to live life more fully, it could bring them closer to God.
 
The personhood debate, for Christianity and Judaism in particular, originates with the theological term imago Dei, Latin for “image of God,” which connotes humans’ relationship to their divine creator. The biblical book of Genesis reads, “God created mankind in his own image.” From this theological point of view, being made in the divine image affords uniqueness to humans. Were people to create a machine imbued with human-like qualities, or personhood, some thinkers argue, these machines would also be made in the image of God—an understanding of imago Dei that could, in theory, challenge the claim that humans are the only beings on earth with a God-given purpose.
 
This technological development could also infringe on acts of creation that, according to many religious traditions, should only belong to a god. “We are not God,” Garner says. “We have, potentially, inherently within us, a vocation to create”—including, he says, by utilizing technology. Human creation, however, is necessarily limited. It’s the difference between a higher power creating out of nothing, and humans creating with the resources that are on earth.
 
But beyond speculation, there are ethical questions that need answering now, says J. Nathan Matias, a visiting scholar at the MIT Media Lab. Matias is co-author of a forthcoming paper on the intersection of AI and religion. “AI systems are already being used today to determine who police are going to investigate,” he says. “They’re used today to do sting operations of people who are imagined as potential future domestic abusers or sexual predators. They’re being used to decide who is going to get [financial] credit or not, based upon anticipated future solvency.” Religious communities should participate in conversations regarding these dilemmas, he says, and should involve themselves in the application of the AI that exists today.
 
Matias also points to Facebook’s algorithms that recommend content to users—a form of weak AI. In this way, AI can help make a post go viral. When a heartbreaking story is popular online, it directly influences the flow of prayer and charity. “We already have these attention algorithms as a clear example of what are shaping the contours of things like prayer or charitable donations or the theological priorities of a community,” he says. Such algorithms, like that employed by Facebook, dictate the political news—true or not—that people see. Religious groups, then, have a keen interest in the development of artificial intelligence and its ethical implications.
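In the weak-AI sense Matias describes, a recommendation feed is just a scoring function over posts. A minimal sketch (the formula and weights below are invented for illustration, nothing like Facebook's actual ranking) shows the feedback loop he's pointing at: engagement raises a post's score, a higher score buys more exposure, and whatever is already popular, true or not, wins the contest for attention.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    age_hours: float

def attention_score(p: Post) -> float:
    """Toy engagement ranking: shares count extra, old posts decay."""
    engagement = p.likes + 3 * p.shares
    return engagement / (1 + p.age_hours)

feed = [
    Post("quiet local charity drive", likes=40, shares=2, age_hours=6.0),
    Post("heartbreaking viral story", likes=900, shares=400, age_hours=6.0),
    Post("measured policy analysis", likes=120, shares=10, age_hours=2.0),
]

# The feed shows the highest-scoring posts first, which earns them even
# more likes and shares on the next pass -- a self-reinforcing loop.
for p in sorted(feed, key=attention_score, reverse=True):
    print(f"{attention_score(p):7.1f}  {p.text}")
```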
 

In Topic: Maps They Didn’t Teach You In School

04 September 2017 - 06:06 PM

[image]

