From censorship to chaos, we must steer clear of the extremes of social media
Bangladesh’s online space today reflects a shift that was perhaps inevitable, but also difficult to manage. It is not simply a story of increasing freedom. It is a reaction shaped by years of suppression, followed by a hard reset without a corresponding structure to guide the future.
Bangladesh today is dealing with the legacy of overbroad regulation and the rise of unregulated online hostility. Addressing only one will not restore balance. The space in between, where disagreement can exist without fear from either the state or the crowd, is currently unprotected. And this is where policy must shift.
For much of the past decade, laws like the Digital Security Act, 2018, were repeatedly misused. There were numerous instances where social media posts led to arrests, prolonged detention, or criminal proceedings, including against journalists, students and the general public. Even when the law was reworked into the Cyber Security Act, 2023, the stated objective was to reduce misuse by softening penalties and altering certain provisions. Yet concerns persisted that the core structure remained largely unchanged, continuing to enable widespread suppression of dissent.
However, that phase did not end without consequences. When control over expression is prolonged and then suddenly shattered, people do not simply return to neutral behaviour. They push outwards. Comparative studies on societies emerging from overly restrictive environments show that expression often becomes more forceful and less restrained, as individuals assert what was denied previously. Bangladesh is now experiencing a version of that shift. Online disagreement increasingly turns into coordinated backlash. Often, individuals who take moderate positions or who do not align clearly with dominant narratives are subjected to collective targeting or pile-ons that discourage participation. It is a pattern of behaviour that makes civil dialogue difficult.
The national conversation, however, remains incomplete. It continues to frame the issue as censorship versus freedom, as if one must be reduced for the other to survive, which oversimplifies the problem. The Cyber Security Ordinance, 2025, later ratified as the Cyber Security Act, 2026, recognises internet access as a civic right, which is indeed a commendable addition. However, we are still far from the finish line. Vaguely worded provisions and a limited mandate for threat intelligence sharing persist, leaving room for broad interpretation and little to no room for incorporating platform obligations through public-private collaboration. To overcome these hurdles, we must aim not for control in a broad sense, but for structured protection of the space for participation itself. This requires a more deliberate set of measures.
First, legal provisions must be narrow and conduct-based. Direct threats, incitement to violence, and sustained targeted harassment should be clearly defined and actionable. This allows the law to address real harm without opening the door to interpretations that can be used against dissent. Second, and equally important, the law must recognise coordinated harassment as a distinct harm. Current frameworks tend to focus on individual posts, but much of the pressure online comes from collective targeting. Adding an intelligence-sharing protocol can open the door to a workable system that allows patterns of behaviour, such as repeated targeting by multiple accounts and organised pile-ons, to be identified and addressed through platform obligations and legal recognition.
Third, protections for users must be built in. The Cyber Security Act sets very lenient deadlines for completing investigations and lacks a fast-track complaint mechanism for victims of coordinated abuse, one that would require platforms to respond within defined timeframes and ensure transparency in how complaints are handled. Without this, victims will continue to withdraw from participation. Fourth, safeguards against misuse must be structural, not optional. Independent oversight, judicial authorisation for serious action, and public reporting on enforcement are essential. These are not additions; they are what make regulation credible in a post-abusive environment.
Finally, there must be an explicit recognition that protecting civil discourse is a policy goal. This means acknowledging that online space is not only about individual rights, but about maintaining conditions where disagreement can occur without intimidation. Without that, freedom of expression exists formally, but not meaningfully.
Global approaches offer direction here. Countries are increasingly holding platforms accountable for systemic risks, including organised harassment, rather than treating online harms as isolated incidents. Under the EU's Digital Services Act, major tech companies must proactively identify and mitigate systemic online harms or face penalties of up to six percent of their global annual turnover. In the UK, the Online Safety Act 2023 imposes a duty of care on platforms to protect users from abuse and harassment, backed by statutory enforcement powers. Similarly, Australia's Online Safety Act 2021 empowers the eSafety Commissioner to ensure platforms address cyberbullying and other online abuse, introducing mandatory compliance measures.
The relevance is not in copying these models directly, but in recognising the shift towards addressing behaviour at scale. Bangladesh’s challenge is to adapt that understanding to its own context. The next phase of policy must actively protect our digital space, especially for those who are neither loud nor aligned.
Md Yeasinur Rahman is an undergraduate teaching assistant and research assistant at the Department of Law in North South University.
Views expressed in this article are the author's own.