After Facebook and its messaging service WhatsApp drew heavy criticism for not doing enough to stop false information from spreading on their networks, the company has launched several initiatives to better understand the phenomenon.
One such effort is WhatsApp’s pledge to fund research groups around the globe that study misinformation and social dynamics on the messenger. The company has now revealed the 20 studies it decided to back, chosen, it said, from a pool of 600 applicants. Many of the selected groups are based in countries such as India and Indonesia, where WhatsApp is the dominant mobile messenger.
Among the funded researchers is a group from Paramadina University in Indonesia that is examining cases of mob violence triggered by misinformation received through WhatsApp.
Another study, based in India, looks at how recipients react to the different message formats typically shared on WhatsApp. Specifically, the group wants to investigate whether video formats leave recipients more vulnerable to misinformation.
“Addressing the impact of misinformation is a long-term challenge,” WhatsApp writes in the research grant announcement. “We hope these awards will help us build a model of engaging with academic experts to develop culturally relevant, long-term, and sustainable solutions to this complex problem.”
Facebook also launched fact-checking initiatives in several countries to help identify and flag fake news stories on its site.
But in some observers’ eyes, these measures fall short. Social platforms are exploited for nefarious purposes in so many ways, they argue, that more fundamental changes to the underlying model are needed to regain control. “Facebook’s pledge to eliminate misinformation is itself fake news,” argued one Guardian article, pointing to several cases in which the social network was still slow to respond to misinformation.
Editor: Ben Jiang