Some years ago, a photograph appeared in the international press that showed why research in multimedia forensics is so vital in today’s image-soaked world. Released by Iran’s news service, the photo depicted four missiles rising from a cloud of smoke. As a show of military sophistication, it looked intimidating, and was meant to. But it wasn’t entirely real. The image had been digitally enhanced to convey a beefier missile capability than Iran actually possessed.
Dr. Matthew Stamm can tell you how we know that, and a whole lot more besides. His research in the growing field of information forensics seeks to determine when images are real, and more importantly, when they are not.
“One of the challenges of the 21st century is that we have tons of information. We’ve gotten pretty good at storing it and communicating it. But how do we pick through it and know what to look at?” asks Stamm, assistant professor in the Department of Electrical and Computer Engineering (ECE) in the College of Engineering.
“Information now is shared very rapidly. There aren’t as many gatekeepers sitting there asking where it’s coming from. I think it’s a good thing that information can flow without gatekeepers. But we really do need tools that let us know whether we can trust what we’re looking at.
“People in positions of authority have to be able to ask, is this real? Do I have to change policy? Do I have to have a military intervention? You can think of so many examples like the one with the missiles, and they don’t even need to be that nefarious.”
By way of example, Stamm flourishes a photo of Jack Nicholson and Leonardo DiCaprio that has been altered to show the actors sitting outdoors together. Like many alterations, this one would be difficult to discern with the naked eye. Forensics can show that Jack was not even originally in the same photograph, but was sitting ringside at a basketball game.
The business, cultural, and governmental applications for media forensics are enormous, and the research is much in demand. Stamm’s lab, the Multimedia and Information Security Lab (MISL), receives funding from the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the Army Research Office (ARO), and the Defense Forensics & Biometrics Agency (DFBA). The lab has already delivered a prototype of software to DFBA able to pinpoint exactly which camera model captured a specific photo.
With the help of MISL lab graduate students like Owen Mayer, Stamm hopes to deliver another such prototype for videos within two years.
"I got into multimedia forensics to work on challenging signal processing problems that have significant social implications,” says Mayer. “I think falsified media—e.g. Photoshopped images and ‘fake news’—has affected the way we look at information online and the way we communicate. This problem is only going to get worse. A lot of the work that we're doing will provide tools to people so that they can make better sense of the digital information they see."
PRETEND IT’S A MOUNTAIN
The MISL lab is working at the forefront of signal processing and machine learning to advance algorithms that locate traces of manipulation within a particular image. Early detection models looked at simple cues, such as whether a photo had been compressed more than once, a strong indication that it had been manipulated. Today, deep-learning algorithms are shouldering those simple cues aside in favor of more adroit methods of detection.
A photographic image is basically a recording of light intensity at different points on that image. Think of it as a mountain range, in which the peaks reflect where the light is brightest and the valleys reflect where it’s darkest. Resting over that range like a cloud is a layer of statistical “noise,” says Stamm. This is how a machine looks at an image in order to locate inauthenticity. Stamm’s algorithms train the computer to search for alterations in the properties of that noise, tiny deviations in what the machine wants to measure versus what’s actually there.
“The way it works, the algorithms are pretty good at separating the content from that noise, so that all we get is that noise hanging out on top. And then we can build models of statistical properties of that noise that say, for instance, when the noise looks like this, it’s unaltered. But when it looks like this, it’s been resized. Or when it looks like this, it maybe came from an iPhone versus a Canon DSLR, or it’s been sharpened twice.
“With these techniques, we can learn what to look for. Now we can say, I don’t know the nature of all these traces, of all that noise. I just need to know these two things don’t look alike, and that could indicate manipulation. We’re really at the cusp of building systems that can say, this region of the image has been falsified and here’s how.”
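The pipeline Stamm describes, separating the noise from the content and then testing whether its statistical properties look consistent, can be sketched in a few lines. This is a minimal illustration of the general idea, not the lab’s actual algorithms: the median-filter denoiser, the block size, and the z-score threshold are all assumptions chosen for simplicity.

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image):
    """Separate content from noise: subtract a denoised estimate of the
    image, leaving only the noise 'cloud' resting on top of the content."""
    denoised = median_filter(image, size=3)
    return image - denoised

def block_noise_stats(residual, block=32):
    """Summarize the noise in each non-overlapping block by its
    standard deviation, one statistic per block."""
    h, w = residual.shape
    stats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            stats.append(residual[i:i + block, j:j + block].std())
    return np.array(stats)

def flag_outliers(stats, z=3.0):
    """Blocks whose noise statistics deviate strongly from the rest of
    the image are candidates for having been altered."""
    mu, sigma = stats.mean(), stats.std()
    return np.where(np.abs(stats - mu) > z * sigma)[0]
```

A real forensic detector would learn far richer models of the noise than a single standard deviation per block, but even this crude version can flag a spliced-in patch whose noise statistics differ from the rest of the image.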
Stamm joined Drexel’s ECE faculty in the Fall of 2013. He earned his BS, MS, and PhD degrees from the University of Maryland. He won a CAREER Award from the National Science Foundation in 2016 and the 2017 Drexel University CoE Outstanding Early-Career Research Achievement Award. This year, he is serving as the lead organizer of the Institute of Electrical and Electronics Engineers (IEEE) Signal Processing Society’s 2018 Signal Processing Cup competition. And he likes the poet Richard Brautigan. A copy of Brautigan’s poem, “All Watched Over by Machines of Loving Grace,” hangs prominently in his office.
ALL ABOUT THE TEAM
Stamm credits his MISL members—“It’s not me, it’s us. It’s very much us”—with the hard work in the lab, fine-tuning the algorithms and setting up a framework for a machine to learn by feeding it lots of examples. He says that while it is easy to locate students with the technical chops to do this kind of work, what he wanted most were students motivated by curiosity. Along with Mayer, three other PhD candidates make up the MISL lab: Xinwei Zhao, Chen Chen, and Belhassen Bayar.
“Everything in this lab is the best,” says Zhao, whose undergraduate work was in power systems but who chose to work in MISL because she believes in the research. “I would say that we are very united. We think of ourselves as one group with everyone having his or her own sparkling points. Our lab works on two complementary aspects of the field, multimedia forensics and anti-forensics. Anti-forensics will help explore the weaknesses of the forensic techniques to help improve them. I am working on removing manipulation traces.”
Stamm, who made his early reputation in the field by building anti-forensic attacks against forensic algorithms, hopes to come at the challenges to the legitimacy of today’s information from many angles. The classic areas on which forensics research focuses are image, video, and audio products. But there are also researchers focusing on text or speech signal processing.
“The thing that makes you do good work is when you have a question in your head that you can’t really scratch and you’re constantly trying to get at it,” he says. “I’m really interested in information in general. How it’s disseminated. How it’s manipulated.”
Asked what the end goal is for his research, Stamm laughs and waves a hand as if to say that the applications are too broad to articulate easily. But he knows he wants the tools to be accessible to every sort of user, meaning anyone and everyone who has access to information.
“Wouldn’t it be great if we all had some app on our phone where you see a viral image of a political figure and you ask yourself, is this real?” Stamm says. “And you can click a button that says, no, it’s not real. And here’s why.”