
Above: DedSec is the hacker group in Watch Dogs 2.
VentureBeat: What did you find in the report as far as what types of fraud are growing or being detected most often?
Naumann: You have the classics. Click spam is still really big. That's any type of fraud where a click gets executed for a user without the user actually clicking on the ad. Those can be completely fabricated clicks triggered by a server with device IDs attached, websites that do ad stacking and automatic clicking on page reload, or any type of app that clicks in the background without the user ever seeing an ad. Any of those will create engagements that we would attribute, and they give the fraudster a chance to cash in if any of the users whose device IDs they hold happens to convert organically for a popular app.
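To make the mechanics concrete, here is a minimal Kotlin sketch of the server-side variant Naumann describes: fabricated clicks fired for harvested device IDs against a tracker endpoint. The endpoint, parameter names, and IDs are all invented for illustration.

```kotlin
import java.net.HttpURLConnection
import java.net.URL
import java.util.UUID

// Hypothetical attribution endpoint; real trackers differ.
const val TRACKER = "https://tracker.example.com/click"

// Click spam: fire a fabricated click for a harvested advertising ID, hoping
// that user later installs the advertised app organically and this click
// "wins" the attribution.
fun fireFabricatedClick(advertisingId: String, campaign: String) {
    val url = URL("$TRACKER?adid=$advertisingId&campaign=$campaign&ts=${System.currentTimeMillis()}")
    (url.openConnection() as HttpURLConnection).apply {
        requestMethod = "GET"
        println("Fabricated click for $advertisingId -> HTTP $responseCode")
        disconnect()
    }
}

fun main() {
    // Stand-ins for device IDs a fraudster has collected elsewhere.
    val harvestedIds = List(3) { UUID.randomUUID().toString() }
    harvestedIds.forEach { fireFabricatedClick(it, "popular_app_campaign") }
}
```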
Click injection is a bit more interesting. The fraudsters here only inject the click after the user has already decided to download and install an app. There's an exploit in the Android operating system that allows the fraudster to listen to what's called the content provider, which lets them see which app is being installed from the Google Play Store at that moment. As soon as the user clicks the download button, the content provider gets a new entry that says it's downloading. If fraudsters have a malicious app on that device, they can use that information to inject a click and be the last click, even if there was a legitimate advertisement in between. That makes for a nice money-printing machine.
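In code, the pattern looks roughly like the sketch below, here using Android's PACKAGE_ADDED broadcast, the simpler of the two install signals (the content-provider variant Naumann describes fires even earlier, at download time). The injected tracker call is hypothetical.

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent

// A malicious app registers a receiver for android.intent.action.PACKAGE_ADDED
// (with the "package" data scheme in its manifest). The broadcast fires when an
// install completes but typically before the user first opens the app, which is
// when attribution happens, so the injected click can still become "last click."
class InstallListener : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == Intent.ACTION_PACKAGE_ADDED) {
            val pkg = intent.data?.schemeSpecificPart ?: return
            // Fire a click for the app being installed, stealing attribution
            // from any legitimate ad the user actually saw.
            injectClick(pkg)
        }
    }

    private fun injectClick(packageName: String) {
        // e.g. GET https://tracker.example.com/click?app=<packageName>&adid=<device ID>
    }
}
```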
VentureBeat: How do you prevent that particular problem?
Naumann: When we found out about this kind of fraud, we got in touch with Google and asked them to help us clear it up. They changed how the content provider works and changed how the broadcasts work. They substituted the install broadcast with what is now called the Google Play Install Referrer API, which was released in December of last year. That gives a secure referrer data point, and it also has a timestamp of when the user actually clicked the install button in the Play store. [We can model attribution in a way that we do not attribute installs to any engagements coming in after the user made the decision to download and install an app.]
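For reference, reading that secure referrer and click timestamp through the Play Install Referrer Library (com.android.installreferrer:installreferrer) looks roughly like this. The rejection rule in the comments is a sketch of the attribution logic described above, not Adjust's actual implementation.

```kotlin
import android.content.Context
import com.android.installreferrer.api.InstallReferrerClient
import com.android.installreferrer.api.InstallReferrerStateListener

fun readReferrer(context: Context) {
    val client = InstallReferrerClient.newBuilder(context).build()
    client.startConnection(object : InstallReferrerStateListener {
        override fun onInstallReferrerSetupFinished(responseCode: Int) {
            if (responseCode == InstallReferrerClient.InstallReferrerResponse.OK) {
                val details = client.installReferrer
                val clickTs = details.referrerClickTimestampSeconds   // user clicked in the Play store
                val installTs = details.installBeginTimestampSeconds  // download began
                // Sketch of the rule: any ad engagement timestamped after
                // installTs arrived after the install decision and should
                // not win attribution.
                println("referrer=${details.installReferrer} click=$clickTs installBegin=$installTs")
            }
            client.endConnection()
        }

        override fun onInstallReferrerServiceDisconnected() {
            // Transient; retry the connection if referrer data is still needed.
        }
    })
}
```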
Spoofing is the rising star on the fraud horizon. It's a much bigger problem. It's not just an attribution provider or measurement provider problem. It's a big problem for all advertising. In basically all cases, the data communication between an app and the different backend systems it shares data with is secured by normal SSL encryption. Fraudsters have figured out how to break open that encryption and read the payload of what's being delivered, and once they understand the payload, they can run replay attacks with the same data structure, just injecting different payloads.
For instance, let’s say you have three different phones, and you’re installing the same app on all three phones. You collect the communication between the app you’re installing and the backend systems that are fed with data — the publisher, the app developer, all of that. When you record all of that, you can compare between the different devices, and that gives you the structure of static URL parts and dynamic URL parts. Static is the structure. Dynamic is the data points being delivered in the payload. When you figure out what the dynamic payload is, you can say, “OK, this is the advertising ID, this is the timestamp, this is the device model.”
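A toy Kotlin version of that comparison, with invented tracker URLs: query parameters whose values agree across all devices are the static structure, and the rest are the dynamic payload a spoofer would need to forge.

```kotlin
import java.net.URI

// Compare the same tracker call captured on several devices and separate
// the static URL parts from the dynamic (per-device) payload.
fun splitStaticDynamic(captures: List<String>): Pair<Map<String, String>, Set<String>> {
    val parsed = captures.map { url ->
        URI(url).query.split("&").associate {
            val (k, v) = it.split("=", limit = 2)
            k to v
        }
    }
    val keys = parsed.first().keys
    val static = keys.filter { k -> parsed.all { it[k] == parsed.first()[k] } }
        .associateWith { parsed.first()[it]!! }
    return static to (keys - static.keys)
}

fun main() {
    val (static, dynamic) = splitStaticDynamic(listOf(
        "https://t.example.com/install?app=com.shop&adid=aaa-111&model=Pixel2&ts=1001",
        "https://t.example.com/install?app=com.shop&adid=bbb-222&model=GalaxyS8&ts=1002",
        "https://t.example.com/install?app=com.shop&adid=ccc-333&model=Pixel2&ts=1003",
    ))
    println("static:  $static")   // {app=com.shop} (the structure)
    println("dynamic: $dynamic")  // [adid, model, ts] (the payload to forge)
}
```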
Once you figure out what needs to go in there, you can either curate data that looks legit and create URL calls that look like the real thing, or, as happened a bit later on, you can put a malicious app on a user's device and use all of that device's data to create install data points for installs that never happened on the device. You're using 100 percent legit device data from a real device out in the market, so it's impossible for us to say whether a given install is real, because every data point in it is 100 percent legit.
The only way to deal with that is to secure the client-server communication so that normal SSL encryption is not the sole security measure. On the MMP level, this is by now a known problem. Our competitors don't like to talk about it, because their security package, for instance, is only available to clients who buy their full product suite. In our case it's free to all clients, but it's a hassle to build in: clients need to update their SDKs, which is something people don't like to do.
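A common way to layer security on top of SSL is to have the SDK sign each request payload with an app-level secret that the backend also knows, so a replayed or fabricated request fails server-side verification. This is a minimal sketch under that assumption, not Adjust's actual scheme; the secret and parameter names are invented.

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// HMAC-SHA256 over the payload: a spoofer who only reverse-engineered the
// URL structure cannot produce a valid signature without the secret.
fun sign(payload: String, appSecret: ByteArray): String {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(appSecret, "HmacSHA256"))
    return mac.doFinal(payload.toByteArray()).joinToString("") { "%02x".format(it) }
}

fun main() {
    val secret = "demo-app-secret".toByteArray()  // hypothetical shared secret
    val payload = "adid=aaa-111&ts=${System.currentTimeMillis()}"
    // The SDK sends the payload plus signature; the server recomputes the
    // HMAC over the received payload and rejects on mismatch.
    println("$payload&signature=${sign(payload, secret)}")
}
```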
There are MMPs out there that built a solution close to the first version we built, but they're not actively advertising it either. Frankly, it makes attribution providers look like they didn't do a good job to begin with. Stuff wasn't secure, now we've had to make it secure, and now the client has to move to get to the security level they could have had a year earlier, which would have been really nice.
All the examples I’ve given are between the advertiser and the MMP — the [mobile] measurement partner — and the fraudster. The problem is, this type of communication we use is the same that any other service in mobile uses as well. It’s really just encoded URLs that have a payload. That means the whole spoofing thing also works for monetization SDKs. That’s something nobody wants to talk about.
Obviously that's a very unpopular thing to talk about. We still do it because we think our advertisers are better off when we explain to them transparently what the problem is and how to secure themselves. Then it's their decision whether to do it, which is already pretty shaky. But the rest of the industry just doesn't want to talk about it, which I think is borderline malicious.

Above: Check Point Software unearthed a mobile ad fraud scheme.
VentureBeat: Is there a way for the advertiser to double-check what is real traffic and what isn't? They would use you guys, use Kochava, use AppsFlyer, and theoretically you should all agree on the exact amount of traffic. If one provider isn't detecting certain fraud, then you guys might report the lowest level of traffic getting through.
Naumann: It depends. When we’re talking about spoofing, if the spoofer knows what they’re doing, it’s 100 percent undetectable. All they have to do is put legit device data in the right places and that’s it. It’s indistinguishable because it’s legit device data.
What is detectable is whenever the spoofers make a mistake. But this fraud scheme has been around for 18 months now, so you can't expect them to make many mistakes anymore.
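The detectable mistakes are internal inconsistencies in otherwise legit-looking data. Below is a hypothetical sketch of such sanity checks; the fields, thresholds, and the device example are invented for illustration.

```kotlin
// Minimal plausibility checks of the kind that catch sloppy spoofing:
// each value looks legit on its own, but the combination is impossible.
data class Install(val clickTs: Long, val installTs: Long, val osApiLevel: Int, val model: String)

fun looksSuspicious(i: Install): Boolean {
    val clickToInstallSecs = i.installTs - i.clickTs
    return clickToInstallSecs < 5                       // faster than a human could download
        || clickToInstallSecs > 30L * 24 * 3600         // or an absurdly stale click
        || (i.model == "Pixel 2" && i.osApiLevel < 26)  // Pixel 2 shipped with Android 8 (API 26)
}
```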
VentureBeat: If you detect more fraud, though, you’re going to report that the legitimate numbers were lower, right? Is that consistently what happens? You can detect more than the other guys do?
Naumann: It depends on the fraud scheme, again. There’s a couple of things we don’t touch. Unfortunately there’s a strong incentive on the advertiser side to buy fraudulent traffic. If you’re looking for maximum growth, you want to show to your investors that you can buy market share for a decent price. You’re very much incentivized to buy traffic that spoofs the user engagement.
Say you have a real install from a real user who uses your app and spends money in it, but that user didn't install the app because they clicked on a banner or a full-page interstitial. They installed the app because their friend told them about it, or because they saw it on TV. In that case the advertiser, or the advertiser's UA [user acquisition] team, is cannibalizing the branding, which is cheap. It makes them look good. The performance, in the end, looks good. They can claim unlimited growth, and since the branding people can't make the connection between a TV ad and an install, they lose out.
That can create a deadly circle of mischief. An m-commerce app selling designer clothes ran into exactly that problem. They had a huge branding budget and a pretty decent user acquisition budget for the app, because they were mobile-only. They spent more and more money on mobile acquisition that cannibalized their branding, up to the point where they didn't do any brand advertising at all, because mobile performance was doing so well. Then their organic user influx stopped, so they couldn't pay as much for performance. There wasn't anything left to cannibalize, no performance to be had. They never got back on track.
VentureBeat: Are there any other big fraud problems to make note of?
Naumann: Spoofing is definitely the biggest one right now. It’s the one that has the least transparency around it. Most of the market doesn’t want to talk about it, or claims it’s not a problem. It’s the hardest to stop. Again, there are no standards. Everybody needs to do their homework. They have to explain to their clients, “We haven’t been doing this for the last three years, even though we should have. Now it’s on you to protect yourself.”
That's the hardest part. The app developers need to replace the SDKs they have with secure versions. They need to secure their own communications. It's the same problem for the advertisers themselves. The install data and event data they're tracking are also at risk. We've seen spoofing cases where all of our install data was spoofed, and all of the advertiser's install data was spoofed. There was zero discrepancy. Everything looked legit. But actually getting behind that, getting all of it out of the system, figuring out how to secure it, and then resetting the benchmarks and KPIs is a very painful process. But it has to be done.
VentureBeat: The thing I remembered was that Kochava was doing a lot around blockchain.
Naumann: Yes, that's the new business they want to grow into. I can't speak for Adjust, but in my opinion it's not going to do anything about fraud. The idea is that blockchain is inherently transparent and secure, so it fixes the fraud problem. But it doesn't. It's as spoofable as anything else. And it's slow. I have no idea how they can keep up with the speed they're claiming. They want to do 100 transactions per second. I'm not sure about that. Their idea is to do it with daily rolling chains, but if you have daily rolling chains, then the actual transparency is not there. You're transparent for one day only. If I want to check something that's 30 days old, I'm not sure how they'd do that. But they're also not very transparent about how they're going to do it, so it's hard to judge.