
How A.I. Can Detect & Filter Offensive Images


6 replies to this topic

#1 Digger

    Premium Member • 349 posts

Posted 27 December 2016 - 02:29 PM

Advances in Artificial Intelligence and Deep Learning have transformed the way computers understand images and videos. Over the last few years, innovative neural network architectures and high-end hardware have helped research teams achieve groundbreaking results in object detection and scene description. Those architectures have in turn been used to build generalist models that aim to recognize any object in any image.
 
It is estimated that more than 3 billion images are shared online every day, along with millions of hours of video streams. That is why more and more app owners, publishers, and developers are looking for solutions to make sure their audiences and users are not exposed to unwanted content. This is a moral as well as a legal imperative, and it is key to building a product users trust and like.
 
[attached image: k5bLmIW.jpg]
 
 


#2 status - Roger Roger

  • Guests

Posted 27 December 2016 - 05:49 PM

Nice cupcakes! Y'all think the AI would count this one as offensive?

 

[attached image: 7c761f538354d9bbb082a610daf183b5.jpg]

 

:chuckle:

 



#3 status - Guest

  • Guests

Posted 01 January 2017 - 01:25 PM

[attached image: giphy.gif]

 

:chuckle:



#4 status - whn

  • Guests

Posted 24 April 2017 - 08:13 AM

[attached image: cookiesandspam.jpg]



#5 Ghost in the Machine

    Premium Member • 216 posts

Posted 22 May 2017 - 12:36 PM

[attached image: giphy.gif]

 

:chuckle:

 

:Laughing-rolf:

 

http://forum.chicken...ehavior/?p=9677





#6 status - Bender

  • Guests

Posted 30 July 2017 - 06:34 PM

:Banana_Dance:

 

[attached image: 6045e3a7a5f21c0ef1fb50abc5047cb3.gif]

 

:chuckle:

 



#7 status - Calculon

  • Guests

Posted 30 July 2017 - 07:07 PM

 
 
[attached image: "Technology could do the spotting and sorting for humans."]
 
Facebook spares humans by fighting offensive photos with AI
 
Facebook’s artificial intelligence systems now report more offensive photos than humans do, marking a major milestone in the social network’s battle against abuse, the company tells me. AI could quarantine obscene content before it ever hurts the psyches of real people.
 
Facebook’s success in ads has fueled investments into the science of AI and machine vision that could give it an advantage in stopping offensive content. Creating a civil place to share without the fear of bullying is critical to getting users to post their personal content that draws in friends’ attention.
 
When malicious users upload something offensive to torment or disturb people, it traditionally has to be seen and flagged by at least one human, either a user or a paid worker. These offensive posts, which violate Facebook's or Twitter's terms of service, can include content that is hate speech, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence.
 
The occupation is notoriously terrible, psychologically injuring workers who have to comb through the depths of depravity, from child porn to beheadings. Burnout happens quickly: workers cite symptoms similar to post-traumatic stress disorder, and whole health consultancies like Workplace Wellbeing have sprung up to assist scarred moderators.
 
But AI is helping Facebook avoid having to subject humans to such a terrible job. Instead of making contractors the first line of defense, or resorting to reactive moderation where unsuspecting users must first flag an offensive image, AI could unlock active moderation at scale by having computers scan every image uploaded before anyone sees it.
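 
In code, "scanning every image uploaded before anyone sees it" boils down to running each upload through a trained classifier and quarantining anything the model flags. Here is a minimal sketch in Python, using a pretrained torchvision ResNet as a stand-in for a real moderation model; the unsafe-class indices and the confidence threshold are invented for illustration and are not Facebook's actual system.
 
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the stand-in model.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A generic pretrained classifier standing in for a purpose-built moderation model.
model = models.resnet50(pretrained=True).eval()

UNSAFE_CLASSES = {413, 763}  # hypothetical "unsafe" class indices, illustration only
THRESHOLD = 0.8              # hypothetical confidence cutoff

def should_quarantine(path):
    """Return True if the upload should be held for review before anyone sees it."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    conf, cls = probs.max(dim=0)
    return cls.item() in UNSAFE_CLASSES and conf.item() >= THRESHOLD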
 
AI could eventually help Facebook combat hate speech. Today Facebook, along with Twitter, YouTube and Microsoft agreed to new hate speech rules. They’ll work to remove hate speech within 24 hours if it violates a unified definition for all EU countries. That time limit seems a lot more feasible with computers shouldering the effort.
 
 
 
The AI is able to monitor video on Facebook Live and flag it if offensive content is found. The social network also has an automation process to sift through the tens of millions of reports of offensive content it receives each week.
 
 
To detect and police content across YouTube’s sprawling library and ensure ads don’t run against questionable content, Google must solve an AI problem no one has cracked yet: automatically understanding everything that’s going on in videos, including gesticulations and other human nuances. 
 
A potential solution lies in machine learning, a powerful AI technique for automatically recognizing patterns across reams of data.
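 
To make the scale of the problem concrete, a standard baseline for video classification looks something like this: sample frames from the video, classify each frame with an image model, and aggregate the scores. This is only a sketch of that baseline, not Google's system; the sampling rate and the reuse of a generic ResNet are illustrative assumptions, and real video understanding (motion, audio, "gesticulations") needs far more than per-frame classification.
 
import cv2  # OpenCV, for decoding video frames
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True).eval()  # generic stand-in classifier
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_video(path, every_n=30):
    """Classify roughly one frame per second (at 30 fps) and average the scores."""
    cap = cv2.VideoCapture(path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.softmax(model(x), dim=1)[0])
        i += 1
    cap.release()
    return torch.stack(scores).mean(dim=0)  # per-class scores averaged over frames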
 
Google’s AI advances sometimes match the hype, but they are not perfect. The company’s cloud division recently released a tool (unrelated to YouTube) that breaks videos into their constituent parts, rendering them “searchable and discoverable.” A group of academics published research last week that showed how to deceive this system by injecting images into videos.
 
Google has used machine learning and other AI tools to master speech, text and image recognition. In 2012, researchers famously got a network of 16,000 computers to teach itself to recognize cats by scanning millions of still images culled from YouTube videos. Understanding entire videos is a lot more difficult. Cats meow, stretch and jump through more than a thousand video frames each minute.
 
Google researchers have applied machine-learning software to classify images and audio inside videos for years (is that video tagged as a Prince song really Prince?), while improving recommendations and ad performance. Another part of Alphabet — a group called Jigsaw — is using AI tools in other ways to curb hate speech online.
 
In a memo to aggrieved YouTube advertisers last month, the company said its machine-learning algorithms will improve the precision and classification of videos. However, it also warned that with the volume of content involved, this can never be 100 percent guaranteed. 
 
 
How Deep Learning Is Teaching Machines To Detect & Filter Inappropriate Responses Like A Boss
 
Conversational agents, or chat-bots, are also being deployed by various online business portals to provide a more personalised experience for their customers. In fact, some chat-bots (such as Microsoft Xiaoice, a text-based chat-bot) are evolving beyond being assistants and note-takers to project a persona of their own. They have unique language characteristics, a sense of humor, and the ability to connect with users' emotions.
 
AI researchers are looking for automatic techniques for detecting such "inappropriate" or "toxic" content so that machines can employ it for effective self-regulation. This technology could also be used to moderate discussions and comments on the many online forums and news sites where certain issues can rapidly devolve into abuse and hate commentary.
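 
As a toy illustration of what such automatic detection looks like in practice, here is a tiny text classifier: TF-IDF features fed to a logistic regression. The four training comments and their labels are invented; a real system would be trained on a large labeled corpus of moderated comments, and modern ones use deep networks rather than linear models.
 
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: 0 = acceptable, 1 = toxic.
comments = [
    "thanks, that was really helpful",
    "you are an idiot and should leave",
    "great post, I learned a lot",
    "nobody wants you here, get lost",
]
labels = [0, 1, 0, 1]

# Word and bigram counts weighted by TF-IDF, then a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, labels)

print(clf.predict(["what a helpful answer"]))  # expected: [0]
print(clf.predict(["get lost, you idiot"]))    # expected: [1]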
 
The technique proposed by some researchers is based on a field of computer science research known as Deep Learning (DL), which aims to build machines that can process data and learn in much the same way the human brain does. DL essentially involves building artificial neural networks that are trained to mimic the behavior of the brain. These networks can learn to represent and reason over the various inputs given to them, such as words, images, and sounds. The figure below shows an illustration of an artificial neural network.
 
[figure: an artificial neural network with input, hidden, and output layers]
 
As shown in the illustration, these neural networks are composed of multiple layers: an input layer, an output layer, and one or more hidden layers. They can be trained to perform various tasks. For example, if a network is trained to understand a given image and the objects it contains, the different hidden layers tend to learn different aspects of the image. The first hidden layer may identify only edges, at different angles of orientation; the next layer may learn to combine those edges into more complex shapes, such as triangles and rectangles; and successive layers can build on the ones before them to learn more sophisticated objects and features, such as faces. Most interesting of all, given training data, the model learns this hierarchy on its own. In this case study, the researchers propose a novel architecture for training a network that effectively learns and models the semantic meaning of a given query. (A minimal sketch of such a layered network appears below.)
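 
Here is what that layered structure looks like in PyTorch. This is a minimal, generic convolutional network written to match the edges-then-shapes-then-objects story above, not the architecture from the case study; the layer sizes and the ten-class output are arbitrary.
 
import torch
import torch.nn as nn

class TinyImageNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # middle layer: simple shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # later layer: object parts
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)  # output layer

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyImageNet()
logits = model(torch.randn(1, 3, 64, 64))  # one random "image"
print(logits.shape)                        # torch.Size([1, 10])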
 
We are in the middle of an AI revolution in which computers promise to become trusted lieutenants and adorable friends of humans. That potential can only be realized if these bots become more aware of their actions and learn to restrain and regulate their automatic responses. An important motto to uphold:
 
-  Thou Shalt Not Offend!
 
 
 
 
 



