
OPLIN 4cast #528: Deep learning


Last week, Facebook announced that its users will be able to search for photos based on a description of the photo's content, something Google Photos has also been working on. Note that this is not just a search of the metadata (tags, alternative text, etc.) that someone has added to a photo; the software actually looks for patterns in the image itself. The technology extends earlier work on generating automatic alternative text for images to assist blind users of the internet. And all of it is built on so-called deep learning neural networks, the “brains” behind artificial intelligence.
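
If you are curious what “learning patterns from the image itself” looks like in practice, here is a minimal, purely illustrative sketch, not Facebook's or Google's actual code. It trains a tiny neural network in Python/NumPy on toy 8x8 “images,” and the network picks up the distinguishing pattern entirely from labeled examples rather than from hand-written rules; all of the data and names in it are invented for the example.

```python
# Illustrative sketch only: a tiny neural network that learns to recognize a
# visual pattern (bright-left vs. bright-right toy images) from labeled
# examples, with no hand-coded rule describing the pattern.
import numpy as np

rng = np.random.default_rng(0)

def make_image(label):
    """Toy 8x8 image: label 0 -> bright left half, label 1 -> bright right half."""
    img = rng.random((8, 8)) * 0.2           # dim background noise
    half = slice(0, 4) if label == 0 else slice(4, 8)
    img[:, half] += 0.8                      # brighten one half
    return img.ravel()                       # flatten to a 64-value vector

# A labeled training set -- the "experience" the network learns from.
labels = rng.integers(0, 2, size=200)
X = np.stack([make_image(y) for y in labels])
Y = labels.reshape(-1, 1).astype(float)

# One hidden layer of 16 units and a sigmoid output; weights start random.
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));  b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(500):
    # Forward pass: raw pixels -> hidden features -> probability of label 1.
    H = np.tanh(X @ W1 + b1)
    P = sigmoid(H @ W2 + b2)
    # Backward pass: nudge the weights to reduce the prediction error.
    dP = (P - Y) / len(X)
    dW2 = H.T @ dP;  db2 = dP.sum(0)
    dH = dP @ W2.T * (1 - H**2)
    dW1 = X.T @ dH;  db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The learned weights now encode the pattern; no rule was ever written by hand.
test_labels = rng.integers(0, 2, size=50)
test_X = np.stack([make_image(y) for y in test_labels])
preds = (sigmoid(np.tanh(test_X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print("accuracy on unseen toy images:", (preds == test_labels).mean())
```

Real photo-search systems work at a vastly larger scale, with deep convolutional networks and millions of labeled images, but the principle in the articles below is the same: the behavior comes from training data, not from an explicit algorithm.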

  • Deep learning will radically change the ways we interact with technology (Harvard Business Review | Aditya Singh)  “Think of the difference between modern voice-assistants like Siri or Alexa, which allow you to ask for things in various ways using natural language, vs. automated phone menu systems, which only perform if you use the specific set of non-negotiable words that they were programmed to understand. By contrast, deep learning-based systems make sense of data for themselves, without the need of an explicit algorithm. Loosely inspired by the human brain, these machines learn, in a very real sense, from their experience.”
  • Building a deep learning neural network startup (Medium | Varun)  “First came the ‘cat experiment’ demo, where researchers fed still-image pictures of cats from cat videos on Youtube to a neural network, and it was able to identify cats in pictures where they were not labelled so. Pass the champagne moment for researchers. Then, the real turning point came Nov 2016 when Google switched to using a deep learning neural network for its Google Translate service, making a drastic switch from prior 10 years of work building algorithms programmatically as the results from the deep learning neural network were orders of magnitude superior.”
  • What deep learning really means (Network World | Martin Heller)  “Understanding why deep learning algorithms work is nontrivial. I won’t say that nobody knows why they work, since there have been papers on the subject, but I will say there doesn’t seem to be widespread consensus about why they work or how best to construct them. The Google Brain people creating the deep neural network for the new Google Translate didn’t know ahead of time what algorithms would work. They had to iterate and run many weeklong experiments to make their network better, but sometimes hit dead ends and had to backtrack.”
  • AI software learns to make AI software (MIT Technology Review | Tom Simonite)  “If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply. Jeff Dean, who leads the Google Brain research group, mused last week that some of the work of such workers could be supplanted by software. He described what he termed ‘automated machine learning’ as one of the most promising research avenues his team was exploring.”

Articles from Ohio Web Library:
