Bad actors try bespoke lies to avoid misinformation detection

(L-R) Nathaniel Gleicher (Facebook), Del Harvey (Twitter), Robert Joyce (NSA), Peter Singer

NSA, Facebook and Twitter lament weaponisation of the internet.

The weaponisation of the internet is becoming more “handcrafted”, with bad actors intentionally tailoring online content to avoid being detected and shut down by automated systems, an expert panel has said.

At the annual RSA Conference, Facebook’s head of cyber security Nathaniel Gleicher said his company treated the problem of bad actors on social media “fundamentally as a security challenge”.

“Whenever you have a space for public debate where people are having meaningful discussion, you’re going to have bad actors who try to target that debate,” Gleicher said.

“That happens as soon as the debate occurs. Because it’s a security challenge, you know that they’re going to continue trying, they’re going to continue developing new techniques, and they’re going to continue evolving their techniques.

“The way you make progress in a security world is you identify ways to impose more friction on the bad actors and the behaviours that they’re using, without simultaneously imposing friction on the meaningful public discussion. That’s an incredibly hard balance, but it’s also the biggest focus we as a company have right now.”

Gleicher said Facebook used human investigators to identify techniques used by bad actors, and then built automated ways to target those techniques at scale.

“In medicine, doctors will sometimes inject dye into the bloodstream to see where a wound would be,” Gleicher said.

“The bad actors in this space can act a little bit like dye injected into the bloodstream of public debate because they will find and evolve new techniques first.

“When we see those new techniques, we identify the core behaviours they’re using, and then we work to build scaled solutions to make those behaviours more difficult at scale.

“You create this virtuous cycle. There’s always more to be done, but I think if you address this and approach this as a security problem, you can make progress.”

US National Security Agency senior advisor Robert Joyce said that social media platforms were “getting good at understanding … automated amplification techniques” used by bad actors.

However, Joyce warned that it was “handcrafted” rather than automated techniques that now posed some of the biggest problems.

Indeed, Gleicher noted that bad actors in the public discourse had become a lot better at finding ways not to get themselves or their accounts immediately banned.

“The challenge is that the majority of content we see in information operations doesn’t violate our policies: it’s not clearly hate speech, it’s intentionally framed not to fit into that bucket, and it’s not provably false,” he said.

“A lot of this is driven to fit into that grey space.”

Twitter’s vice president of trust and safety Del Harvey said that “content is actually one of the weaker signals that we would have in saying this person is definitely a bad actor.”

Instead, the panellists said they looked for other behavioural signals, such as patterns of identity and message amplification, to weed out bad actors, and that correlating those different pieces of evidence typically took time.
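None of the panellists described their detection systems in any detail, but a purely illustrative sketch of what weighting behavioural signals over content might look like is below. Every signal, threshold and weight here is invented for illustration and does not reflect Facebook's or Twitter's actual methods.

# Illustrative only: scoring accounts on behaviour (amplification rate,
# account age, verbatim reposting) rather than what the content says.
# All signals, thresholds and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_hour: float         # hypothetical amplification signal
    account_age_days: int         # hypothetical identity signal
    duplicate_post_ratio: float   # share of posts repeating others verbatim

def suspicion_score(a: AccountActivity) -> float:
    """Combine behavioural signals into a 0..1 review-priority score."""
    score = 0.0
    if a.posts_per_hour > 20:     # sustained high-volume posting
        score += 0.4
    if a.account_age_days < 7:    # very new account
        score += 0.2
    score += 0.4 * min(a.duplicate_post_ratio, 1.0)  # heavy verbatim amplification
    return min(score, 1.0)

# High-scoring accounts would be queued for human investigators rather than
# banned outright, mirroring the human-plus-automation cycle Gleicher describes.
print(f"review priority: {suspicion_score(AccountActivity(35.0, 3, 0.8)):.2f}")

The point of the sketch is the one Harvey makes: none of these inputs examines what an account actually says, only how it behaves.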

Harvey also raised the prospect that constant talk of the presence of bots on social networks had damaged discourse generally.

“Because there has been so much conversation that ‘it’s all the bots’, it is amazing the number of times you’ll see two people get into an argument and one of them decides to end it by saying ‘you’re just a bot’,” she said.

“It is demonstrably not a bot. But there’s this increasing, almost exit path that people take from conflict, from disagreement, where they decide anyone who isn’t aligned with them or who has a differing opinion must be a bot. They’re like, ‘you’re just a bot. In fact you’re a Russian bot and you are here to try and sway my mind on the topic of local football teams’.

“I don’t know why but this is something we genuinely see a lot of. We have bot scope creep where everything is a bot.”

Peter Singer, author of the book ‘LikeWar’, said that one of the challenges for social media platforms today was that they had never been designed for the uses now playing out on them.

However, since the power of platforms to disseminate misinformation had been proven, Singer worried about how future platforms could evolve.

“The creators of today’s companies didn’t set out to have this war/politics power,” he said.

“They’re the first generation. What happens in the second generation of this where people realise that they have this kind of power within these platforms?”
