Responsible AI adoption can drive economic growth: experts
Bangladesh should adopt artificial intelligence to boost economic growth, while ensuring responsible AI use to protect women's safety, privacy, and human creativity, experts said yesterday.
The statements were made at a discussion titled "Responsible AI for Bangladesh: Policy and Design Challenges," held at the Anwarul Azim Chowdhury Lecture Gallery, Department of Microbiology, University of Dhaka, where experts also stressed the urgent need for an ethical and structured approach to AI adoption in the country.
"We have to embrace Bangladesh's AI future with responsible growth," said Sharifa Sultana, assistant professor, Department of Computer Science, University of Illinois Urbana-Champaign.
"We are deeply optimistic about AI's potential to transform Bangladesh's development and economic growth. But we also recognise the complex ethical, social, and legal challenges that come with it. Researchers, policymakers, and industry leaders must work together to outline a strategic approach that harnesses AI's benefits responsibly, in line with our national goals and cultural values," she added.
Sultana highlighted the key pillars of responsible AI -- fairness, transparency, security, and accountability -- calling them essential for building trustworthy and equitable systems.
She also pointed to urgent risks, including threats to data privacy from widely used AI models, the decline of critical thinking due to over-dependence on AI, and societal harms such as deepfakes and surveillance.
"Bangladesh stands today on the edge of its most dangerous illusion: believing that connectivity is intelligence, that digitisation is governance, and that dashboards are decisions," said Zulkarin Jahangir, assistant professor at North South University and a member of UNESCO AI Experts Without Borders.
He added that UNESCO's AI Readiness Assessment Methodology (RAM) study highlighted this issue. Instead of celebrating AI adoption, the study served as a stress test, showing how ambition has outpaced institutional capacity and how fragmented governance struggles to manage systems requiring coherence, accountability, and trust.
"What emerged from the assessment was not a technology gap, but an institutional one. Policies are stalled, data is scattered, and infrastructure varies across sectors and districts. AI pilots are advancing in pockets, but without a shared ethical, legal, or operational backbone. We are building AI systems on top of institutions that have not yet learned to handle power responsibly in digital form," Jahangir said.
He added that AI is not just a software problem -- it is an institutional stress test. Universities must go beyond teaching tools and techniques and examine power, incentives, and exclusion. Research should focus on social consequences, including bias, labour displacement, data sovereignty, and who benefits from automation. Otherwise, academia risks documenting outcomes rather than shaping them.
From an industry perspective, the RAM findings show both opportunity and fragility, Jahangir said. Bangladesh has potential in AI-adjacent sectors such as data annotation and applied services, but without standards, safeguards, and workforce protections, growth could reproduce informality rather than create value. "Speed without trust will not scale," he warned.
For policymakers, Jahangir stressed, AI cannot be governed through isolated circulars or stalled drafts. It requires coordinated action, investment in data infrastructure, and mechanisms for public accountability. "AI policy is not a technology roadmap, it is a social contract," he added.
Ishtiaque Ahmed, associate professor in the Department of Computer Science at the University of Toronto, and Rashed Mujib Noman, country director of Augmedix Bangladesh, also spoke at the event.