Beware Facebook friends who are robots trying to sell stuff

Eagle Gamma | July 15, 2013
How safe is your online social network? Not very, as it turns out. Your friends may not even be human, but rather bots siphoning off your data and influencing your decisions with convincing yet programmed points of view.

Easy to fake
To infiltrate a network, the bots follow a sophisticated set of behavioral guidelines that place them in positions from which they can access and disseminate information, operate at large scale, and evade host defenses.

To imitate people, social bots create profiles that they decorate, then develop connections while posting interesting material from the Web. In theory, they could also use chat software or intercept human conversations to enhance their believability. Individual bots can make their own decisions as well as take commands from a central botmaster.
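
In architectural terms, that describes a bot that acts on its own schedule but also defers to a controller. The sketch below is illustrative only: the class, the command names, and the posting step are invented for this example, not taken from the researchers' code.

```python
import random

class SocialBot:
    """Illustrative only: a bot that acts autonomously unless the botmaster intervenes."""

    def __init__(self, name, botmaster_queue):
        self.name = name
        self.commands = botmaster_queue   # commands pushed by a central botmaster

    def step(self, interesting_links):
        if self.commands:                       # central control takes priority
            command = self.commands.pop(0)
            return f"{self.name}: executing '{command}'"
        # Otherwise behave autonomously: repost something "interesting" from the Web.
        return f"{self.name}: posting {random.choice(interesting_links)}"

# A central botmaster feeds each bot a command channel.
channel = ["befriend user:12345"]
bot = SocialBot("bot0", channel)
links = ["http://example.com/article-a", "http://example.com/article-b"]
print(bot.step(links))   # executes the botmaster's command first
print(bot.step(links))   # then falls back to autonomous posting
```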

The bots operate in phases. The first step is to establish a believable network to disguise their artificial nature. Profiles that people consider "attractive," meaning likable, have an average number of friends. To get near this "attractive" network size, social bots start by befriending each other.

Next, the social bots solicit human users. As the bots and humans become friends, the bots drop their original connections with each other, eliminating traces of artificiality.

Finally, the bots explore their newfound social network, progressively extending their tentacles through friends of friends. As the social bots infiltrate the targets, they harvest all available private data.
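
A compact way to see that staged logic is as a simulation over a toy graph. The sketch below is purely illustrative: the random graph, the bot count, and the fixed acceptance probability are assumptions for the example, not parameters from the UBC study.

```python
import random
import networkx as nx

random.seed(1)
ACCEPT = 0.2  # assumed probability that a human accepts a stranger's friend request

humans = nx.erdos_renyi_graph(200, 0.05)   # toy stand-in for the real social graph
bots = [f"bot{i}" for i in range(10)]
graph = nx.Graph(humans)
graph.add_nodes_from(bots)

# Phase 1: bots befriend each other so their friend counts look ordinary.
for i, a in enumerate(bots):
    for b in bots[i + 1:]:
        graph.add_edge(a, b)

# Phase 2: solicit human users, then drop the bot-to-bot links that gave them away.
for bot in bots:
    for target in random.sample(list(humans.nodes), 30):
        if random.random() < ACCEPT:
            graph.add_edge(bot, target)
for i, a in enumerate(bots):
    for b in bots[i + 1:]:
        graph.remove_edge(a, b)

# Phase 3: extend through friends of friends, harvesting whatever those profiles expose.
harvested = set()
for bot in bots:
    direct = set(graph.neighbors(bot))
    fofs = {fof for f in direct for fof in graph.neighbors(f)} - direct - {bot} - set(bots)
    for target in fofs:
        if random.random() < ACCEPT:
            graph.add_edge(bot, target)
    harvested.update(n for n in graph.neighbors(bot) if n not in bots)

print(f"human profiles reachable for data harvesting: {len(harvested)}")
```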

UBC researcher Beznosov recalls, "We were inspired by the paper where they befriend your friends, but on a different social network. For example, they know who your Facebook friends are. They can take this information and take a public picture of you, then create a profile on a completely different social network," such as LinkedIn. "At that point, the question we had was whether it's possible to do a targeted type of befriending, where you want to know information about a specific user, through an algorithmic way to befriend several accounts on the social network, eventually to become friends with that particular target account that you're interested in."

That targeting of specific users didn't work, so the researchers decided instead to test how many people they could befriend, with the penetration expanding over successive waves of friendship circles. The research exploits a principle called "triadic closure," first described in sociology a century ago: two parties who share a mutual acquaintance are likely to connect directly with each other. "We implemented automation on top of that."
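
Triadic closure suggests a simple heuristic for choosing the next wave of targets: prefer friends of friends, ranked by how many mutual friends they already share with the bot. A toy sketch of that ranking follows, using networkx and an invented example graph rather than anything from the study.

```python
import networkx as nx

def next_wave(graph, bot):
    """Rank friends-of-friends by mutual-friend count (the triadic closure signal)."""
    friends = set(graph.neighbors(bot))
    candidates = {}
    for friend in friends:
        for fof in graph.neighbors(friend):
            if fof != bot and fof not in friends:
                candidates[fof] = candidates.get(fof, 0) + 1
    # More mutual friends means a friend request is more likely to be accepted.
    return sorted(candidates, key=candidates.get, reverse=True)

# Invented example: the bot has already befriended alice and bob.
g = nx.Graph([("bot", "alice"), ("bot", "bob"),
              ("alice", "carol"), ("bob", "carol"), ("alice", "dave")])
print(next_wave(g, "bot"))   # ['carol', 'dave'] -- carol shares two mutual friends
```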

Safeguards aren't secure
Plenty of tools exist to create social botnets.

Researcher Ildar Muslukhov notes that the UBC team had to solve many CAPTCHAs, those alphanumeric visual tests of humanness. Optical character recognition products failed frequently, which got the bot accounts blocked, so the researchers turned to human-powered solving services. "You can buy 1000 CAPTCHAs for $1. It's people who are working in very poor countries, and they're making $1 a day." CAPTCHA-solving companies coordinate the human responders and automate the service.
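
The brokering pattern those services use is straightforward to model. The sketch below is a purely illustrative simulation of an automated front end dispatching CAPTCHA jobs to human solvers; the queue-based design, worker count, and timings are assumptions, not details reported by the researchers, and only the $1-per-1000 rate comes from the article.

```python
import queue
import threading
import time

# Rate quoted in the article: 1000 solved CAPTCHAs for $1.
COST_PER_SOLVE = 1.00 / 1000        # $0.001 per CAPTCHA

jobs = queue.Queue()     # CAPTCHAs submitted by automated clients
results = {}             # job id -> solved text

def human_solver(worker_id):
    """Simulated human worker: pulls jobs off the queue and types an answer."""
    while True:
        job_id, _image_bytes = jobs.get()
        time.sleep(0.01)                                  # stands in for human solving time
        results[job_id] = f"text-from-worker-{worker_id}" # placeholder for the typed answer
        jobs.task_done()

# The service coordinates a pool of human responders behind an automated front end.
for i in range(3):
    threading.Thread(target=human_solver, args=(i,), daemon=True).start()

# An automated client submits CAPTCHA images and waits for the solved text.
for job_id in range(10):
    jobs.put((job_id, b"<captcha image bytes>"))
jobs.join()

print(f"{len(results)} CAPTCHAs solved for about ${len(results) * COST_PER_SOLVE:.3f}")
```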

 
