Yet Another PR Stunt from Meta
- Manushya Foundation

- Jan 16
- 7 min read
Manushya Foundation's Digital Rights Advisor Jean Linis-Dinco slams Meta's 2024 "Human Rights" Report and breaks down how Big Tech’s accountability theatre fuels exploitation, reproductive justice backlash, and genocide.

Another public relations stunt from Meta arrived last December in the form of its so-called Human Rights Report, a paper wrapped in shiny language and framed as the cornerstone of the company's supposed commitment to the non-binding, toothless United Nations Guiding Principles on Business and Human Rights.
Around the same time last year, we received a copy of its 2023 report, which we at Manushya Foundation found utterly dismissive of the real, material harms people experience on Meta's platforms and of any serious notion of corporate accountability.
Instead of grappling with how its surveillance-driven business model fuels exploitation and genocide, the 2023 report hid behind colourful words, pretending that a few policy tweaks could somehow offset the violence built into the platform. It erased the Kenyan moderators who were fired and blacklisted for trying to unionise, ignored the extraction of public data to train ‘AI’ and sidestepped the blatant regional double standards in data protection between the EU, Australia, Southeast Asia and beyond.
If the 2023 report was a textbook example of public relations dressed up as human rights, the 2024 report arrives as a continuation of the same song.
What concerns us the most is the accompanying document on Palestine, which reads like an institutional manifesto struggling to name the forces it is entangled with. The footnote on page 5 summarises the whole strategy, laying out highly defensive parameters around the company's own accountability.
“Meta’s publication of this response should not be construed as an admission, agreement with, or acceptance of any of the findings, conclusions, opinions, or viewpoints identified by BSR, or the methodology employed to reach such findings, conclusions, opinions, or viewpoints. Likewise, while Meta references steps it has taken, or plans to take, that may correlate to points BSR raised or recommendations it made, these also should not be construed as an admission, agreement with, or acceptance of any findings, conclusions, opinions, or viewpoints.” (emphasis added, Meta, 2025, p.5)
What Meta is doing here is calculated: it performs engagement whilst refusing the political meaning of that very engagement. It abandons responsibility by changing a policy without ever admitting why the change was needed in the first place. When Meta commissioned Business for Social Responsibility (BSR) back in 2021, it positioned the exercise as proof that it was willing to look critically at its own role in shaping the information landscape during one of the most politically charged moments in the region. But the company's posture since then tells an opposing story.
Human Rights Watch published a detailed account of Meta's systematic suppression of Palestinian content. Their analysis of the company's behaviour during the violence that escalated in October 2023 makes clear that the promises Meta made two years earlier were never fulfilled.
BSR's fifth recommendation was to provide users with a more specific and granular policy rationale when strikes are applied. Such transparency would help activists navigate a contested online space instead of playing a guessing game shaped by the opaque enforcement of terms and conditions. Meta's decision to drop this recommendation exposes how unwilling the company is to relinquish that ambiguity. Why? Because ambiguity serves Meta: it keeps the company flexible in a landscape defined by shifting political pressures.
At times, the only 'get out of Meta jail' card is knowing a Meta employee. We at Manushya Foundation have seen this first-hand on several occasions. The informal network carries more weight than the formal appeal process, which shows how broken the system is. Alajaji of the Electronic Frontier Foundation documents how activists, clinics and researchers working on reproductive health were routinely silenced, and how the only accounts restored quickly were the ones with access to someone on the inside. Appeals stretched on for weeks or never resolved at all, whilst those with personal connections saw their accounts reinstated almost overnight.
The refusal to implement more granular user messaging on violations rests on the claim of 'higher priority workstreams', which is another PR term for either 'we don't have money for that' or 'we don't care'. Given Meta's billions in revenue in 2025, we know the answer.
Meta also boasts of improvement (see Recommendation 9) through a system that routes content in Arabic dialects to the most 'appropriate' moderators. But here, content moderation is treated as a panacea that supposedly neutralises the structural injustice Meta's business model relies on.
Content moderation in the context of genocide and decades of occupation cannot be reduced to dialect routing.
The violence shaping Palestinian speech online is not a linguistic misunderstanding, but the outcome of a political order in which one population holds overwhelming military and technological power whilst another lives under constant siege. Moderators understanding the difference between Levantine and Gulf Arabic does nothing to address the asymmetry of who is allowed to speak without fear and who is systematically flagged, shadow-banned or silenced. Sure, this technical fix allows Meta to portray the problem as a matter of accuracy, but it keeps the discussion within corporate walls and shifts attention away from that reality.
Further, Meta claims resource limitations (see above) as the main reason to ditch BSR's Recommendation 16, which centres on tracking the prevalence of hate speech by protected characteristics. And yet the company routinely deploys high-stakes machine-learning systems across billions of users, backed by spending such as its reported US$14.3 billion investment in Scale AI. When Meta declines to measure specific forms of targeted hostility, it does so by choice. Choosing not to measure means choosing not to know, and choosing not to know is itself a political stance, one that protects the company from having to demonstrate the extent of differential harm. In a region defined by racialised and ethnicised violence, no one in their right mind would find this omission defensible.
Meta on AI: A Sales Pitch, Not a Human Rights Analysis
Moving back to the main document of the Human Rights Report, a similar pattern emerges. The AI section is a sales pitch written for investors and product marketing, not a serious human rights analysis. It opens with a breathless story about how generative AI is transforming how we 'communicate, learn, create and work', then reassures us that Meta 'recogni[s]es' the human rights risks. But the key line on page 19 is the giveaway: "Our long-term vision is to build personal superintelligence, and make it widely available so everyone can benefit." Is this the point where we should thank Meta for saving the day?
The next few paragraphs read as a long brag sheet on Llama 3, 3.1, 3.2 and 3.3, Ray-Ban glasses, Meta AI packed into every surface of the company's stack, watermarking, AI tools for advertisers, 'impact' grants and so on. None of this addresses the basic questions a real human rights report would ask. Where did the data come from? Who laboured to label, moderate and clean it? Which communities bear the environmental cost of the massive compute and water consumption that underpins these models? Whose languages, dialects and experiences count as training material? Whose are sidelined or commodified without consent?
A little further into the report is the section on AI in elections. Sure, Meta claims to be the responsible adult preventing deceptive content from undermining elections, a promise we have heard over and over again. But that misses the mark, because it shifts the debate away from the elephant in the room: why are these corporations allowed to unilaterally build and deploy systems of this scale at all, especially when they are integrated with advertising, surveillance and political messaging? So we have a company that profits from virality and polarisation suddenly pledging to… 'protect elections'?
Meta’s human rights report is a glossy investor narrative draped in human rights language.
It wraps data extraction and labour exploitation in a story about benefits for all. Anyone reading the report who is not a techbro, and who has experience of how AI and platforms actually operate in the global majority, will recognise what is going on. History is evidence of the future, and Meta's history tells us that this is a document written to secure legitimacy and market share, not to confront the harms the company created and continues to amplify.
There is, of course, more to this human rights report than this short post can evaluate. What we have laid out here is not exhaustive, and just as we wrote in 2023, a full counter-report would be needed to systematically unpack every claim, every omission and every convenient half-truth buried in Meta's framing. That kind of work would have to sit the entire report side by side with lived experience from workers, affected communities and regions in the global majority, and treat Meta's own words as evidence, not as a starting point for trust.
This piece has a more modest aim: to show that the pattern we saw in 2023 remains intact in 2024. If nothing else, we hope it arms readers, activists and movements with enough arguments to treat the next fashionable human rights report from Meta with the suspicion it deserves, and to push for real structural change from the roots instead of accepting corporate lies as superficial progress.
CITATIONS:
Alajaji, R. (2025, September 17). When knowing someone at Meta is the only way to break out of "content jail". Electronic Frontier Foundation. https://www.eff.org/pages/when-knowing-someone-meta-only-way-break-out-content-jail
BSR. (2022, September). Human rights due diligence of Meta's impacts in Israel and Palestine in May 2021.
Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
The Conversation. (2024, September 24). Meta's AI-powered smart glasses raise concerns about privacy and user data. https://theconversation.com/metas-ai-powered-smart-glasses-raise-concerns-about-privacy-and-user-data-238191
Human Rights Watch. (2023, December 21). Meta's broken promises: Systemic censorship of Palestine content on Instagram and Facebook. https://www.hrw.org/report/2023/12/21/metas-broken-promises/systemic-censorship-palestine-content-instagram-and
Manushya Foundation. (2024, October 11). On Meta's human rights public relations report. https://www.manushyafoundation.org/post/on-meta-s-human-rights-public-relations-report
Manushya Foundation. (2025, May 13). #StandWithTamtang | #StopTechnoColonialism 🚨#AbortionIsAHumanRight. https://www.manushyafoundation.org/post/standwithtamtang-stoptechnocolonialism-abortionisahumanright
Maris, J. (2025, February 18). Meta's LLaMa license is still not open source. Open Source Initiative. https://opensource.org/blog/metas-llama-license-is-still-not-open-source
Meta Platforms, Inc. (2025). 2024 Meta human rights report. https://humanrights.fb.com/wp-content/uploads/2025/12/2024-Meta-Human-Rights-Report.pdf
Meta Platforms, Inc. (2025, October 29). Meta reports third quarter 2025 results. https://investor.atmeta.com/investor-news/press-release-details/2025/Meta-Reports-Third-Quarter-2025-Results/default.aspx
Meta Platforms, Inc. (2025, December). Final update: Israel and Palestine human rights due diligence. https://humanrights.fb.com/wp-content/uploads/2025/12/Israel-Palestine-HRDD-Implementation-update-2025.pdf
Vanian, J. (2025, October 29). Meta CEO Mark Zuckerberg defends AI spending: "We're seeing the returns". CNBC. https://www.cnbc.com/2025/10/29/meta-ceo-mark-zuckerberg-defends-ai-spend-were-seeing-the-returns-.html