
How A.I. can Detect & Filter Offensive Images


10 replies to this topic

#1 Digger

    Premium Member

  • Members
  • 365 posts

Posted 27 December 2016 - 02:29 PM

Advances in Artificial Intelligence and Deep Learning have transformed the way computers understand images and videos. Over the last few years, innovative neural network architectures and high-end hardware have helped research teams achieve groundbreaking results in object detection and scene description. Those architectures have in turn been used to build generalist models aiming to recognize any object in any image.
 
It is estimated that more than 3 billion images are shared online every day, along with millions of hours of video streams. That is why more and more app owners, publishers and developers are looking for solutions to make sure their audiences and users are not exposed to unwanted content. This is a moral as well as a legal imperative, and it is key to building a product users trust and like.
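As a concrete illustration of what such a generalist model looks like in code, here is a minimal sketch assuming a recent PyTorch/torchvision install; the ImageNet-pretrained ResNet-50 and the helper function are illustrative choices, not any particular vendor's moderation API.

```python
# Minimal sketch: label an image with a pretrained generalist model.
# Assumes torch and torchvision (>= 0.13) are installed; a real moderation
# system would use a model trained on unwanted-content categories instead.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT           # ImageNet-pretrained
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def top_labels(path, k=5):
    """Return the model's top-k (label, probability) guesses for an image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    scores, idx = probs.topk(k)
    return [(weights.meta["categories"][i], s.item())
            for i, s in zip(idx.tolist(), scores)]
```

A moderation layer could then map those labels (or the scores of a purpose-trained classifier) to allow/review/block decisions.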
 
k5bLmIW.jpg
 
 




#2 status - Roger Roger
  • Guests

Posted 27 December 2016 - 05:49 PM

Nice cupcakes! Y'all think the AI would count this one as offensive?

 

7c761f538354d9bbb082a610daf183b5.jpg

 

:chuckle:

 



#3 status - Guest
  • Guests

Posted 01 January 2017 - 01:25 PM

giphy.gif

 

:chuckle:



#4 status - whn
  • Guests

Posted 24 April 2017 - 08:13 AM

cookiesandspam.jpg



#5 Ghostly Machines

    Premium Member

  • Members
  • 272 posts

Posted 22 May 2017 - 12:36 PM

giphy.gif

 

:chuckle:

 

:Laughing-rolf:

 

http://forum.chicken...ehavior/?p=9677




#6 status - Bender
  • Guests

Posted 30 July 2017 - 06:34 PM

:Banana_Dance:

 

6045e3a7a5f21c0ef1fb50abc5047cb3.gif

 

:chuckle:

 



#7 status - Calculon
  • Guests

Posted 30 July 2017 - 07:07 PM

 
 
C3bYW9vWIAAdQpn.jpg
 
Technology could do the spotting and sorting for humans.
 
Facebook spares humans by fighting offensive photos with AI
 
Facebook’s artificial intelligence systems now report more offensive photos than humans do, marking a major milestone in the social network’s battle against abuse, the company tells me. AI could quarantine obscene content before it ever hurts the psyches of real people.
 
Facebook’s success in ads has fueled investment in the science of AI and machine vision, which could give it an advantage in stopping offensive content. Creating a civil place to share, without the fear of bullying, is critical to getting users to post the personal content that draws in friends’ attention.
 
When malicious users upload something offensive to torment or disturb people, it traditionally has to be seen and flagged by at least one human, either a user or paid worker. These offensive posts that violate Facebook’s or Twitter’s terms of service can include content that is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence.
 
The occupation is notoriously terrible, psychologically injuring workers who have to comb through the depths of depravity, from child porn to beheadings. Burnout happens quickly; workers cite symptoms similar to post-traumatic stress disorder, and whole health consultancies like Workplace Wellbeing have sprung up to assist scarred moderators.
 
But AI is helping Facebook avoid having to subject humans to such a terrible job. Instead of making contractors the first line of defense, or resorting to reactive moderation where unsuspecting users must first flag an offensive image, AI could unlock active moderation at scale by having computers scan every image uploaded before anyone sees it.
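As a rough sketch of that idea (and emphatically not Facebook's actual pipeline), the upload handler below scores every image before publication and quarantines anything above a threshold; the score function, callbacks and the 0.8 cut-off are all assumptions.

```python
from typing import Callable

QUARANTINE_THRESHOLD = 0.8  # assumed cut-off; tuned per platform in practice

def handle_upload(image_bytes: bytes,
                  score_fn: Callable[[bytes], float],
                  publish: Callable[[bytes], None],
                  quarantine: Callable[[bytes, float], None]) -> None:
    """Score an upload before anyone sees it; hold high-risk images."""
    score = score_fn(image_bytes)        # hypothetical classifier call
    if score >= QUARANTINE_THRESHOLD:
        quarantine(image_bytes, score)   # routed to human review, not the feed
    else:
        publish(image_bytes)             # goes live without a moderator looking
```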
 
AI could eventually help Facebook combat hate speech. Today Facebook, along with Twitter, YouTube and Microsoft, agreed to new hate speech rules: they will work to remove hate speech within 24 hours if it violates a unified definition for all EU countries. That time limit seems a lot more feasible with computers shouldering the effort.
 
 
 
The AI is able to monitor video on Facebook Live and flag it if offensive content is found. The social network also has an automated process to sift through the “tens of millions of reports” of offensive content it receives each week.
 
 
To detect and police content across YouTube’s sprawling library and ensure ads don’t run against questionable content, Google must solve an AI problem no one has cracked yet: automatically understanding everything that’s going on in videos, including gesticulations and other human nuances. 
 
A potential solution lies in machine learning, a powerful AI technique for automatically recognizing patterns across reams of data.
 
Google’s AI advances sometimes match the hype, but they are not perfect. The company’s cloud division recently released a tool (unrelated to YouTube) that breaks videos into their constituent parts, rendering them “searchable and discoverable.” A group of academics published research last week that showed how to deceive this system by injecting images into videos.
 
Google has used machine learning and other AI tools to master speech, text and image recognition. In 2012, researchers famously got a network of 16,000 computers to teach itself to recognize cats by scanning millions of still images culled from YouTube videos. Understanding entire videos is a lot more difficult. Cats meow, stretch and jump through more than a thousand video frames each minute.
 
Google researchers have applied machine-learning software to classify images and audio inside videos for years (is that video tagged as a Prince song really Prince?), while improving recommendations and ad performance. Another part of Alphabet — a group called Jigsaw — is using AI tools in other ways to curb hate speech online.
 
In a memo to aggrieved YouTube advertisers last month, the company said its machine-learning algorithms will improve the precision and classification of videos. However, it also warned that with the volume of content involved, this can never be 100 percent guaranteed. 
 
 
How Deep Learning Is Teaching Machines To Detect & Filter Inappropriate Responses Like A Boss
 
Conversational Agents, or Chat-Bots, are also being deployed by various online business portals to provide a more personalised experience for their customers. In fact, some chat-bots (such as Microsoft Xiaoice, a text-based chat-bot) are evolving beyond being assistants and note-takers to project a persona of their own. They have unique language characteristics, a sense of humor, and the ability to connect with users’ emotions.
 
AI researchers are looking for automatic techniques for detecting such “inappropriate” or “toxic” content so that machines can employ them for effective self-regulation. This technology could also be used to moderate discussions and comments on the many online forums and news sites where certain issues can rapidly dissolve into inappropriate abuse and hate commentary.
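To make that concrete, here is a minimal sketch of a toxicity classifier using TF-IDF features and logistic regression in scikit-learn; the four-example dataset and its labels are invented purely for illustration, whereas real systems train on large labeled corpora.

```python
# Toy toxicity classifier: TF-IDF n-grams + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["have a great day", "you are an idiot",
         "thanks for the help", "nobody wants you here"]
labels = [0, 1, 0, 1]  # 1 = toxic (made-up toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Probability that a new message is toxic; a bot could refuse to send
# (or a forum could hold for review) anything above a chosen threshold.
print(clf.predict_proba(["you are wonderful"])[:, 1])
```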
 
The technique proposed by some researchers is based on a newer field of computer science known as Deep Learning (DL), which aims to build machines that can process data and learn in much the same way the human brain does. DL essentially involves building artificial neural networks that are trained to mimic the behavior of the human brain. These networks can learn to represent and reason over the various inputs given to them, such as words, images and sounds. The figure below shows an illustration of an artificial neural network.
 
11_1496901715.jpg
 
As shown in the illustration, these neural networks are composed of multiple layers: an input layer, an output layer, and one or more hidden layers. Such networks can be trained to perform a variety of tasks. If a network is trained to understand a given image along with its various objects, for example, the different hidden layers tend to learn different aspects of the image. The first hidden layer of neurons may just identify edges in the image at different angles of orientation; the next layer may use those previously learnt edges to identify more complex shapes, such as triangles and rectangles; and successive layers could build on earlier ones to learn more sophisticated objects and features, such as faces. The most interesting thing is that, given training data, the model learns all of this on its own. In the case study, the researchers propose a novel architecture for training a network that effectively learns and models the semantic meaning of a given search query.
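For readers who want to see that layered structure in code, here is a short sketch in PyTorch (an assumed framework choice; the layer sizes are arbitrary):

```python
import torch.nn as nn

# Input, two hidden layers, output -- mirroring the illustration above.
net = nn.Sequential(
    nn.Linear(784, 128),  # input -> first hidden layer (edge-like features)
    nn.ReLU(),
    nn.Linear(128, 64),   # second hidden layer (simple shapes)
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer: one score per class
)
```

Training adjusts the weights of these layers from labeled examples; the edges-then-shapes-then-faces division of labor described above is learned, not programmed.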
 
We are in the middle of an AI revolution in which computers are promising to become the trusted lieutenants and adorable friends of humans. That potential can only be realized if these bots, so to speak, become more aware of their actions and learn to restrain and regulate their automatic responses. An important motto to uphold:
 
-  Thou Shalt Not Offend!
 
 
 
 
 


#8 status - Carpathia
  • Guests

Posted 02 August 2017 - 04:03 PM

Facebook-dc239d.png
 
Starbucks and McDonald's, among others, censor their free WiFi. There's a disturbing reason why the chains are weeding out porn.
 
McDonald’s recently announced it has deployed filters on the complimentary WiFi service at its restaurants across the world, and Starbucks now appears to be the latest major chain to declare that its WiFi service will filter X-rated websites, CNN reported.
 
McDonald’s insisted that the decision to block explicit online content was taken to protect families and, more specifically, children from sexually explicit content being accessed on the premises. Notably, while Starbucks has decided to filter its WiFi, it hasn’t yet begun the process, according to its official statement:
 
“Once we determine that our customers can access our free Wi-Fi in a way that also doesn’t involuntarily block unintended content, we will implement this in our stores. In the meantime, we reserve the right to stop any behavior that interferes with our customer experience, including what is accessed on our free Wi-Fi.”
 
What that essentially means is that Starbucks might be actively monitoring what content its customers access while they sit in one of its many cafes across the globe. Until software can take over the process of weeding out websites that offer adult content, the chain might have to rely on older monitoring techniques to prevent access to pornographic websites.
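Conceptually, the filtering these chains describe boils down to checking each requested hostname against a category blocklist, as in the toy sketch below; the domains are placeholders, and real deployments rely on commercial DNS or transparent-proxy filters rather than application code.

```python
# Toy hostname filter: block listed domains and their subdomains.
BLOCKED_DOMAINS = {"blocked-example.test", "another-blocked.test"}  # placeholders

def is_allowed(hostname: str) -> bool:
    host = hostname.lower().rstrip(".")
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

assert is_allowed("wikipedia.org")
assert not is_allowed("cdn.blocked-example.test")  # subdomains blocked too
```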
 
 
How Internet Filtering Hurts Kids
 
Zealously blocking their access to certain websites can end up undermining learning.
 
At the core of the ongoing debate is a law passed by Congress in 2000 that mandates that all public libraries and schools receiving federal funds for Internet access install blocking software. The Children’s Internet Protection Act (CIPA) specifically requires schools and libraries to block or filter Internet access to pictures and material that are “obscene, child pornography, or harmful to minors” on computers used by students under 17 years of age. The fundamental question has been how schools are interpreting the law, and whether districts are acting in the best interests of children or simply functioning as online overlords.
 
In Maine, Portland Public Schools in April 2012 installed filters on high-school students’ school-issued laptops that banned access to social networks, games, and video-streaming sites. At the time, Portland was among the first districts in the state to authorize such stringent filtering on take-home school devices. As the Press Herald reported, Portland High School students had very different responses to the new policy, based on their access to another computer at home: “…those from middle-class families expressed various degrees of annoyance when told of the new filtering measures. A group of immigrant students reacted with anger.”
 
What’s more, in-depth conversations with the families revealed that districts blocked YouTube at school, as well as on school-supplied devices, because some content was deemed inappropriate. And the consequences were steep. “Parents and children depended on YouTube to support homework time, including tutorials to solve math problems and to learn more about historical characters. The problem is that these platforms are multi-use, and those uses change too quickly for district [filtering] policies to easily keep up.”
 
 
Colleges and Hotels Blocking WiFi Signals: Safety or Censorship?
 
Hotel giant Marriott International was issued a $600,000 fine in August 2014 for using signal-jamming technology to block the use of personal WiFi hotspots (like those in many cellphones). This was done only in conference rooms and meeting areas, not in guest rooms or public lobbies; however, the “where” is far less important than the “what” and the “why.” Marriott had petitioned the Federal Communications Commission for the right to use such technology, “whether through clarifying some existing FCC policies or by creating a new rule entirely that would address the situation.” The fine was levied because the hotel went ahead with the blocking before hearing back.
 
The blocking of personal hotspots essentially forced visitors using the conference and meeting areas to purchase WiFi access from the hotel, which is an expensive proposition. Marriott argues that the action was taken to protect guests from potential hacking and rogue signals over WiFi networks it couldn’t monitor; opponents say it amounts to a hostile sales technique.
 
While guest security may be one of the points of the Marriott argument, the question regarding college campuses is worth discussing as well. Do students have a right to free and untethered access to all of the Internet, at as high a volume of data exchange as they choose? Or is it right to enforce some regulations, limiting individual freedoms for the greater good?
 
 
3db25347c3e5ce2e901c0156866bee53.jpg
 


#9 Ghostly Machines

    Premium Member

  • Members
  • 272 posts

Posted 04 August 2017 - 04:55 PM

Learn to use proxies or a VPN.
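For what it's worth, the proxy half of that advice can even be done per-request in code; the sketch below routes an HTTP request through a proxy with Python's requests library (the proxy address is a placeholder, and a VPN works at the OS level with no code at all).

```python
import requests

proxies = {
    "http":  "http://proxy.example:8080",   # placeholder proxy endpoint
    "https": "http://proxy.example:8080",
}
resp = requests.get("https://example.com", proxies=proxies, timeout=10)
print(resp.status_code)
```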




#10 status - Lynn
  • Guests

Posted 25 August 2017 - 11:48 PM

I can't see GLP or LOP at my local library.

 




