A Meta-commissioned report has revealed that the company violated the rights of Palestinian users during Israel’s attack on the Gaza Strip last May.
Commissioned by Meta last year and conducted by the independent consultancy Business for Social Responsibility, or BSR, the report focuses on the company’s censorship practices and allegations of bias during bouts of violence against Palestinian people by Israeli forces.
Following protests over the forcible eviction of Palestinian families from the Sheikh Jarrah neighborhood in occupied East Jerusalem, Israeli police cracked down on protesters in Israel and the West Bank and launched military airstrikes against Gaza that injured thousands of Palestinians, killing 256, including 66 children, according to the United Nations.
Many Palestinians attempting to document and protest the violence on Facebook and Instagram found that their posts abruptly disappeared without explanation or recourse, a phenomenon the BSR inquiry attempts to explain.
The report confirmed long-standing criticisms of Meta’s policies and their uneven enforcement as they relate to the Israeli-Palestinian conflict: It found the company over-enforced its rules against Arabic-language content and under-enforced them against Hebrew-language content.
“Meta’s actions in May 2021 appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” says the long-awaited report.
The report, however, did not find intentional bias at Meta, either by the company as a whole or among individual employees. Its authors said they found “no evidence of racial, ethnic, nationality or religious animus in governing teams” and noted that Meta has “employees representing different viewpoints, nationalities, races, ethnicities, and religions relevant to this conflict.”
In response, Meta said it plans to implement some of the report’s recommendations, including improving its Hebrew-language “classifiers,” which help remove violating posts automatically using artificial intelligence.
“There are no quick, overnight fixes to many of these recommendations, as BSR makes clear,” the company based in Menlo Park, California, said in a blog post on Thursday.
“While we have made significant changes as a result of this exercise already, this process will take time — including time to understand how some of these recommendations can best be addressed, and whether they are technically feasible.”