My condolences to all the internet kids toughing it out with nothing more than some GIF-making software and a Tumblr account. You've been replaced.
I mean, just look at this thing of beauty:
A computer made that all on its own. In fact, not only did a computer make the above GIF, it actually scanned the original video and decided which bits had the highest GIF potential; a video went in, and a slew of appealing GIFs came out.
We're on our way to what you could call a fully automated, bean-to-bar GIF-making solution. Or you could call it Video2GIF, as its creators at ETH Zurich in Switzerland and Yahoo! Research in New York do.
With GIF-focused web companies like Giphy scoring multi-million dollar investment rounds and integrations with major social media platforms, Yahoo! may see an opening in the market to pull itself out of the hole it's been languishing in for years. Most recently, UK tabloid the Daily Mail was reportedly looking to buy the struggling web giant.
However, Yahoo! doesn't have "any immediate plans to put this research into production," a spokesperson told Motherboard when reached for comment.
"GIFs tell a story, express emotion, turn events into humorous moments, and are the new wave of photojournalism," the authors write in a paper published to the arXiv preprint server. "We pose the question: Can we automate the entirely manual and elaborate process of GIF creation by leveraging the plethora of user generated GIF content?"
Yes, yes we can.
The challenge of automatic GIF creation is a thorny one for computer science, the team wrote, because it folds in some current research objectives, like getting computers to measure the memorability of an image, with a slew of new problems, such as how to automatically generate a GIF that loops properly.
The team will present the paper at CVPR 2016, a computer vision conference, in Las Vegas in June.
At the core of Video2GIF is a neural network—"layers" of digital nodes that each run computations on their inputs and adjust their connections during training to arrive at an output—which the researchers trained to recognize the most interesting and GIF-able parts of a video using a dataset of 100,000 user-tagged GIFs and their video sources.
The researchers matched up the GIFs with the frames in their videos of origin and fed them into the neural network, along with their user tags. This way, the computer learned which sections of videos are likely to result in good GIFs. After all was said and done, the researchers had a mean GIF-making machine on their hands.
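The core idea of learning from matched GIF/video pairs can be sketched as a ranking problem: segments that users actually turned into GIFs should score higher than the rest of the same video. Here's a minimal toy sketch of that idea, assuming a simple linear scoring function and random stand-in features (the real system uses learned video features and a deeper network; everything here, including the feature dimensions and data, is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16            # toy feature size, a stand-in for real video features
w = np.zeros(dim)   # weights of a linear scoring function s(x) = w . x

# Toy data: "positive" segments (were made into GIFs) vs "negative" ones
# from the same videos, drawn from slightly shifted distributions.
pos = rng.normal(loc=0.5, size=(50, dim))
neg = rng.normal(loc=-0.5, size=(50, dim))

lr, margin = 0.1, 1.0
for _ in range(200):
    for p, n in zip(pos, neg):
        # Margin ranking loss: we want s(p) > s(n) + margin.
        if w @ p - w @ n < margin:
            w += lr * (p - n)   # gradient step on the hinge

# After training, GIF-worthy segments should outrank the rest.
print((pos @ w > neg @ w).mean())  # fraction of pairs ranked correctly
```

At GIF-making time, a model like this scores every candidate segment of a new video and the highest-scoring ones come out as GIFs.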
One limitation of the work is how unhelpful user tags can be, so the authors suggest a better approach to parsing natural language next time around. Video2GIF also only works with standalone sections of video, so the researchers want to try automatically splicing together different scenes into one GIF.
Good-ass GIFs are an art form. In just a few seconds of looping video frames, they draw from the deepest well of human experience.
Will memes ever be the same again once the machines take over?