Facebook taking measures to prevent harmful live-streaming


New Zealand terror attack streamed live on Facebook

It was the swiftness of technology that enabled the Christchurch killer to create maximum terror by live-streaming his grisly act on Facebook, the global leader in interactive communications. The white supremacist understood the reach of Facebook Live and exploited it to devastating effect. The broadcast of this dastardly act, which claimed the lives of 50 innocent people, drew worldwide condemnation and compelled Facebook to take preventive measures.

Facebook acknowledged public anger over the use of its platform to broadcast acts of terrorism. It announced three measures aimed at preventing a repeat of such an incident: strengthening the rules for using Facebook Live, taking further steps to address hate on its platforms, and supporting the New Zealand community.

Facebook is considering barring people who have previously violated the social network's community standards from live-streaming on its platform. It is also investing in software that can quickly identify edited versions of violent videos or images so they can be blocked from being shared or re-posted. In this connection, Facebook identified more than 900 distinct videos showing portions of the streamed violence and moved to stop them from being shared and re-shared.
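Facebook has not disclosed how its matching software works, but one common technique for spotting re-uploads of lightly edited media is perceptual hashing. The sketch below is a toy illustration of that general idea only, not Facebook's actual system: an image is reduced to a short "average hash" that survives small edits such as brightening, so near-duplicates can be found by comparing hashes.

```python
# A minimal perceptual-hashing sketch (average hash). All names here
# are illustrative; this is not Facebook's implementation.

def average_hash(pixels, size=8):
    """Hash a grayscale image (2D list of 0-255 ints) to a 64-bit string.

    The image is shrunk to size x size by block averaging, then each
    cell is compared to the overall mean: '1' if brighter, '0' if darker.
    """
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // size, w // size
    cells = []
    for r in range(size):
        for c in range(size):
            block = [pixels[r * bh + i][c * bw + j]
                     for i in range(bh) for j in range(bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return "".join("1" if v > mean else "0" for v in cells)

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

# A 64x64 test image: dark left half, bright right half.
original = [[0] * 32 + [255] * 32 for _ in range(64)]
# An "edited" copy: uniformly brightened but structurally identical.
edited = [[min(255, p + 20) for p in row] for row in original]

d = hamming(average_hash(original), average_hash(edited))
# d is 0 here: the brightened copy still hashes identically,
# so it would be flagged as a match against the known image.
```

Production systems use far more robust fingerprints (and video hashing works frame by frame), but the design choice is the same: compare compact, edit-tolerant signatures rather than raw files.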

Hidden agenda
Sheryl Sandberg announcing restrictions

Facebook also informed its global audience that it is now using artificial intelligence tools to identify and remove hate groups in Australia and New Zealand, and that these groups will meanwhile be banned from Facebook services. Facebook also announced that it would ban praise or support for white nationalism and white separatism as part of a stepped-up crackdown on hate speech. The ban would be enforced starting next week on the leading online social network and its image-sharing service Instagram.

Facebook has already banned posts endorsing white supremacy as part of its prohibition on directing hate at people based on characteristics such as race, ethnicity or religion. The ban had not applied to some postings because they were judged to be expressions of broader concepts of nationalism or political independence.

Facebook said that conversations with academics and “members of civil society” in recent months led it to view white nationalism and separatism as linked to organised hate groups. People who enter search terms associated with white supremacy will get results referring them to resources such as Life After Hate, which focuses on helping people turn their backs on such groups.

Needs effective prevention

Amid pressure from governments around the world, Facebook has ramped up machine-learning and artificial intelligence tools for finding and removing hateful content. This is the most consequential part of the new policy, since automated systems can flag signs of harmful behaviour at a scale human moderators cannot match.

While Facebook remains the most potent platform for global communication, its usage is now acquiring attributes colloquially described as a nuisance. Many people are dissociating themselves from it because it has lost its novelty and become a source of compromised individual privacy, as Facebook itself has acknowledged. In fact, the platform has become unwieldy, loaded with unnecessary apps that Facebook considers valuable from its own standpoint but that are actually used by exclusive groups for purposes suited to their designs. TW


