The problem is that the typical approach of using anomalous behavior to sniff out botnets is rendered useless the moment bot traffic is whitelisted because it comes from known and (supposedly) legitimate users.
"So the cookies are getting implicitly trusted and the way that fraud detection usually works is big data, usually anomaly detection," adds Tiffany. "But the botnets get baked into everyone's user info...and [for adversaries] that's the path to winning right there."
There is hope, however: while fighting off such a sophisticated approach is tricky, it's not impossible. Man-in-the-browser malware can be detected by examining web traffic and differentiating live human activity from a browser being driven by remote control (or an entire session scripted from the outside).
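One signal that can separate a live human from a remotely driven browser is timing: human input events arrive in irregular bursts, while scripted sessions tend to fire them at suspiciously regular intervals. The sketch below is a hypothetical illustration of that idea, not the vendor's actual detection logic; the function name, the coefficient-of-variation heuristic, and the threshold are all assumptions for demonstration.

```python
import statistics

def looks_automated(event_intervals_ms, cv_threshold=0.15):
    """Flag a session whose input-event timing is suspiciously regular.

    Human mouse/keyboard events show high timing variability; a scripted
    session often fires events at a near-constant cadence. The
    coefficient of variation (stdev / mean) captures this cheaply.
    Hypothetical heuristic for illustration only.
    """
    if len(event_intervals_ms) < 2:
        return False  # not enough signal to judge
    mean = statistics.mean(event_intervals_ms)
    if mean == 0:
        return True  # zero-delay events: no human types that fast
    cv = statistics.stdev(event_intervals_ms) / mean
    return cv < cv_threshold

# A bot replaying a script at a fixed ~100 ms cadence:
print(looks_automated([100, 101, 99, 100, 100]))  # True
# A human: bursty, irregular gaps between events:
print(looks_automated([80, 450, 120, 900, 60]))   # False
```

A real system would combine many such signals rather than rely on any single one, which is exactly why a single "burned" heuristic is cheap to replace.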
Techniques to detect these differences don't have to be static, since there are so many subtle ways in which the environment changes when it's remotely controlled. "We have a very huge parameter base and we can cycle through detection techniques fairly quickly," says Tiffany. "We burn techniques and move on."
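The "burn techniques and move on" rotation described above can be pictured as a pool of interchangeable detectors, where a technique is retired the moment adversaries adapt to it and the next one takes over without downtime. The detector names and the `DetectorRotation` class below are hypothetical, intended only to illustrate the rotation pattern.

```python
# Hypothetical detector functions; each checks one environmental tell
# of a remotely controlled or scripted browser session.
def regular_timing(session):  return session.get("timing_cv", 1.0) < 0.15
def headless_env(session):    return session.get("webdriver", False)
def scripted_order(session):  return session.get("events_in_fixed_order", False)

class DetectorRotation:
    """Cycle through detection techniques; 'burn' one when it's adapted to."""

    def __init__(self, detectors):
        self.pool = list(detectors)

    def active(self):
        # The technique currently in play.
        return self.pool[0]

    def burn(self):
        # Retire the current technique to the back of the pool and
        # promote the next one.
        self.pool.append(self.pool.pop(0))

rotation = DetectorRotation([regular_timing, headless_env, scripted_order])
print(rotation.active().__name__)  # regular_timing
rotation.burn()
print(rotation.active().__name__)  # headless_env
```

Because the parameter base is large, burning one technique costs little: there is always another subtle environmental difference to check next.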
And this is done in plain sight, though the system doesn't leak any success or failure information back to the adversary. Either the attempt succeeds or it fails -- either they get the money or they don't -- but they have to play round two to know whether they won round one.
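The key property here is that the server's response looks identical whether or not the attempt was flagged; the verdict only routes the transaction internally. A minimal sketch of that symmetric-response idea follows, assuming a hypothetical `handle_transfer` endpoint and fraud score; none of this is the actual system's API.

```python
def handle_transfer(request, fraud_score, threshold=0.8):
    """Respond identically whether or not the request was flagged.

    The verdict only routes the transaction internally (e.g., to a
    hold-for-review queue); the wire response carries no success/fail
    signal, so the adversary can't tell from round one whether round
    two will pay out. Hypothetical sketch, not a real endpoint.
    """
    flagged = fraud_score >= threshold
    internal_action = "hold_for_review" if flagged else "process"
    # Same status, same body, same shape either way:
    response = {"status": 200, "body": "Transfer accepted"}
    return response, internal_action

clean = handle_transfer({"amount": 500}, fraud_score=0.1)
shady = handle_transfer({"amount": 500}, fraud_score=0.95)
print(clean[0] == shady[0])  # True: indistinguishable on the wire
```

In practice the responses must match in timing and headers as well as content, since any observable difference reopens the side channel.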
"They can see our server, they know we're involved, but they can't tell if they got it right or if they duped us," says Tiffany. "Everyone agrees that there's too much attack surface in the browser. Our big insight is that we can use this property against our adversaries: if we can't protect it, neither can they."