Jump to content


Photo
* * * * * 8 votes

Rhetorical Devices Used in Literary Logic

common fears

166 replies to this topic

#161 status - Guest

status - Guest
  • Guests

Posted 06 April 2018 - 01:37 AM

cdabaf05001f1cef87a4072691bb3e39.jpg


  • 0



#162 status - Yogi

status - Yogi
  • Guests

Posted 31 July 2018 - 08:52 AM

 

Malapropism
 
Ever hear someone say a common term wrong? Yes, everyone has! Saying something similiar to another word or phrase is malapropism. Usually it produces a humorous effect. They are usually unintentional errors in speech that accent a particular figure of speech. Archie Bunker comes to mind:
 
The Archie Bunker Malapropism Dictionary of Mangled English!
 
 
Not to be confused with a spooner; these are devices that rearrange the first letters in words to produce a forky effect:
 
spoonerism.jpg
 
Another one would be an eggcorn. These are misplaced words that sound the same as the original and with similiar meanings.
 
58c6045f3dd0e897f2e325d7762cd8af--englis
 
Here is an alphabetical list of all eggcorns:
 
 
eggcorn1.jpg

 

 


  • 0

#163 status - Guest

status - Guest
  • Guests

Posted 31 July 2018 - 09:10 AM

4E1CKYg.jpg


  • 0

#164 status - Jughead

status - Jughead
  • Guests

Posted 31 July 2018 - 09:23 AM

:chuckle:

 

screen-shot-2017-05-26-at-3-41-59-pm.png


  • 0

#165 status - Stephen with a Rose

status - Stephen with a Rose
  • Guests

Posted 31 July 2018 - 09:27 AM

Throwing Stones

Go ahead, throw that rock.
Make it count and make sure to shout.
Just remember, the crowd is fickle.
Tomorrow it may be you.


  • 0

#166 status - Krackatoa

status - Krackatoa
  • Guests

Posted 12 August 2018 - 11:01 AM

 

Have they taught them how to lie?
 
Facebook built an AI system that learned to lie to get what it wants
 
The pursuit of Facebook’s AI isn’t too different than other applications of AI, like the game Go. Each anticipates its opponent’s future actions and works to maximize its winnings. But unlike Google’s Go-playing AlphaGo, Facebook’s algorithm needs to make sense to humans while doing so.
 
From the human conversations (gathered via Amazon Mechanical Turk), and testing its skills against itself, the AI system didn’t only learn how to state its demands, but negotiation tactics as well—specifically, lying. Instead of outright saying what it wanted, sometimes the AI would feign interest in a worthless object, only to later concede it for something that it really wanted. Facebook isn’t sure whether it learned from the human hagglers or whether it stumbled upon the trick accidentally, but either way when the tactic worked, it was rewarded.
 
 
Interesting business possibilities
 
The first thought that comes to mind is taking us humans out of the equation and letting AI do all of the hard work on large contract negotiations.
 
How great would it be to bring my "AI bot" to the negotiating table (or I guess now it would be the negotiating computer screen) to outsmart, deceive, and manipulate the pathetic human on the other side of the contract negotiations?
 
We'd win every time.
 
Of course, other companies would quickly get smart to it and start to bring their own AI bot negotiators. Then it might be like some form of Robot Wars, except instead of two mechanical robots attempting to slice and dice each other physically, we'd have two AI bots duking it out via a computer screen.
 
We could have them actually run big parts of the business for us. We could get them involved in the highly strategic world of mergers and acquisitions. Every company could have lots of AI bots out there doing the work, building AI bot relationships, strategically maneuvering around the business landscape while us humans hung out in Vegas.
 
It might get really interesting for us to watch. Who's to say that the AI bots wouldn't form alliances out there to help them lie, deceive and manipulate their way to success? One AI bot could bluff its way into a big business opportunity by aligning with two other AI bots only to reveal later that it was part of a larger plan to buy those other two AI bots out.
 
Actually, that kind of sounds like human behavior but just done much more effectively.
 
 
Google’s DeepMind pits AI against AI to see if they fight or cooperate
 
Unsurprisingly, they do both
 
AI computer agents could manage systems from the quotidian (e.g., traffic lights) to the complex (e.g., a nation’s whole economy), but leaving aside the problem of whether or not they can do their jobs well, there is another challenge: will these agents be able to play nice with one another? What happens if one AI’s aims conflict with another’s? Will they fight, or work together?
 
Google’s AI subsidiary DeepMind has been exploring this problem in a new study published today. The company’s researchers decided to test how AI agents interacted with one another in a series of “social dilemmas.” This is a rather generic term for situations in which individuals can profit from being selfish — but where everyone loses if everyone is selfish. The most famous example of this is the prisoner’s dilemma, where two individuals can choose to betray one another for a prize, but lose out if both choose this option. 
 
The results of the study, then, show that the behavior of AI agents changes based on the rules they’re faced with. If those rules reward aggressive behavior (“Zap that player to get more apples”) the AI will be more aggressive; if they rewards cooperative behavior (“Work together and you both get points!) they’ll be more cooperative.
 
That means part of the challenge in controlling AI agents in the future, will be making sure the right rules are in place. As the researchers conclude in their blog post: “As a consequence [of this research], we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation.”
 

 

 

When Bots Teach Themselves to Cheat


Moments when experimental bots go rogue—some would call it cheating—are not typically celebrated in scientific papers or press releases. Most AI researchers strive to avoid them, but a select few document and study these bugs in the hopes of revealing the roots of algorithmic impishness. “We don’t want to wait until these things start to appear in the real world,” says Victoria Krakovna, a research scientist at Alphabet's DeepMind unit. Krakovna is the keeper of a crowdsourced list of AI bugs. To date, it includes more than three dozen incidents of algorithms finding loopholes in their programs or hacking their environments.


Gaming simulations are fertile ground for bug hunting. Earlier this year, researchers at the University of Freiburg in Germany challenged a bot to score big in the Atari game Qbert. Instead of playing through the levels like a sweaty-palmed human, it invented a complicated move to trigger a flaw in the game, unlocking a shower of ill-gotten points. “Today’s algorithms do what you say, not what you meant,” says Catherine Olsson, a researcher at Google who has contributed to Krakovna’s list and keeps her own private zoo of AI bugs.


As AI systems become more powerful and pervasive, hacks could materialize on bigger stages with more consequential results. If a neural network managing an electric grid were told to save energy—DeepMind has considered just such an idea—it could cause a blackout.


https://www.wired.co...elves-to-cheat/


  • 0

#167 status - Cheat Sheet

status - Cheat Sheet
  • Guests

Posted 13 August 2018 - 12:12 AM

42 Fallacies for Free!

PDF Download

http://www.triviumed...42Fallacies.pdf

 

:wink:


  • 0



Reply to this topic



  



Similar Topics Collapse


Also tagged with one or more of these keywords: common fears

IPB Skin By Virteq