
OPLIN 4cast #389: Shared sentiments

Posted in 4cast

[Image: sarcasm alert sign]

Last week, Nextgov reported that the Secret Service has issued a request for software to analyze social media data, with one of the requested capabilities being the “ability to detect sarcasm.” The goal of the sarcasm request is to avoid a computer-triggered, aggressive law enforcement reaction to a social media post that appears to express malicious intent but turns out to be sarcastic. Of course, the request also unleashed a whole flurry of snarky articles on the interwebs. In fact, though, businesses worldwide have been intensely interested in just such an improvement to “sentiment analysis” of social media for years. A bad opinion posted and repeated on social media can do a lot of damage to an organization that is slow to react, but what if a “good” opinion is actually sarcasm?

  • Sarcasm-detecting software doesn’t exist, would be helpful (nymag.com/Jesse Singal)  “The problem is that this is a very tough thing for computers to do — partly because it’s a very tough thing for humans to do. In regular speech, humans can rely on subtle cues that someone is being sarcastic…. These cues obviously aren’t present in text, which explains why jokes often don’t translate over SMS or Twitter. So it’s no surprise that computer scientists haven’t yet been all that successful in training software programs to recognize sarcasm.”
  • US Secret Service wants software to “detect sarcasm” on social media (Ars Technica/Joe Silver)  “Sarcasm analysis in the realm of politics ‘requires some background knowledge, which computers are not good at,’ [computer scientist and author Bing Liu] said. Others argue that the work order shows the intelligence community’s fundamental lack of understanding of how the Internet works. For example, The Consumerist’s Mary Beth Quirk said, ‘Basically, the Secret Services would love it if someone would explain the Internet so it doesn’t go around arresting sarcastic people with itchy social media trigger fingers.’”
  • Even Secret Service computers don’t get sarcasm (BloombergView/Leonid Bershidsky)  “Though developers would have us think their linguistic tools are quite advanced, they should not be trusted to perform anything but the most rudimentary tasks. The generally accepted level of accuracy for sentiment analysis — a branch of computer linguistics that determines the positive or negative slant of a piece of text — is about 65 percent, though some developers claim higher rates.”
  • Stanford algorithm analyzes sentence sentiment, advances machine learning (Stanford University Engineering/Tom Abate)  “As we increasingly share these opinions via social networks, one result is the creation of vast reservoirs of sentiment that could, if systematically analyzed, provide clues about our collective likes and dislikes with regard to products, personalities and issues. Against this backdrop, Stanford computer scientists have created a software system that analyzes sentences from movie reviews and gauges the sentiments they express on a five-point scale from strong like to strong dislike. The program, dubbed NaSent – short for Neural Analysis of Sentiment – is a new development in a field of computer science known as ‘Deep Learning’ that aims to give computers the ability to acquire new understandings in a more human-like way.”
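To see why sarcasm so easily fools sentiment analysis, consider a minimal sketch of the naive lexicon-based approach these articles describe (the word lists and function name below are illustrative assumptions, not the workings of NaSent or any real product):

```python
# Minimal lexicon-based sentiment scorer -- an illustrative sketch only.
# Real systems (such as Stanford's NaSent) use far richer models.

POSITIVE = {"great", "love", "excellent", "wonderful", "good"}
NEGATIVE = {"terrible", "hate", "awful", "bad", "delayed"}
NEGATORS = {"not", "never", "no"}

def sentiment_score(text: str) -> int:
    """Return a crude polarity score: +1 per positive word, -1 per
    negative word, flipping polarity after a simple negator."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    score = 0
    for i, word in enumerate(words):
        polarity = 0
        if word in POSITIVE:
            polarity = 1
        elif word in NEGATIVE:
            polarity = -1
        # "not good" should count as negative, not positive
        if polarity and i > 0 and words[i - 1] in NEGATORS:
            polarity = -polarity
        score += polarity
    return score

print(sentiment_score("What a great library"))   # scores positive
print(sentiment_score("This is not good"))       # scores negative
# The sarcasm problem: "Oh great, another two-hour wait" still
# scores positive, because "great" outweighs the absent context.
print(sentiment_score("Oh great, another two-hour wait"))
```

A sarcastic post like the last example carries positive words but negative intent, which is exactly the gap the Secret Service’s request (and years of commercial research) is trying to close.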

Articles via Ohio Web Library:
