Published in The Daily Star | 28 January 2024

In Bangladesh, a common joke revolves around the absence of a Bangla equivalent for ‘privacy’. Goponiyota suggests confidentiality, and ekaante thaka, used as a verb, implies being left alone. Some argue that the lack of a specific term in our vocabulary might suggest a perceived absence of the right to privacy. TikTok videos and Instagram Reels humorously reference the lack of privacy in joint families, how parents pry into the business of their teenagers, or how even a newlywed couple can’t catch a break with a house full of guests.

Do these ‘real world’ privacy principles, or lack thereof, apply to the digital ecosystem? Sharing mobile devices is a common behavior across South Asia, attributed to economic and cultural factors, according to research. In other words, if people are comfortable with sharing devices, does this indicate that they do not care about their privacy? By the same assumption, is it fair to infer that the public lacks a reasonable expectation of privacy when sharing their personal data while buying a SIM card or on digital platforms?

Several months ago, while traveling in southern Bangladesh, I met with women in savings circles. Some had their own devices, while others shared a mobile phone with their spouses or parents. When asked about their concerns, almost everyone indicated that they wanted more privacy. These concerns ranged from owning their own devices to finding more secure ways to send mobile payments to safely accessing their social media accounts. An overwhelming majority had accounts on Facebook, TikTok or Imo, where they feared not only ‘abusive’ content attacking them, including the sharing of invasive photos and videos, but also the possibility that reporting to law enforcement would grant unauthorized access to all of their data. For a community facing longstanding societal discrimination, these women—housewives, small business owners, farmers, and garment workers—were well attuned to their expectations around privacy.

This shouldn’t come as a surprise because women and minority communities worldwide bear the brunt of ‘digital abuse’. They face a disproportionate risk of privacy erosion, including invasion of their personal spaces, non-consensual sharing of visual content, and the use of personal data for surveillance and blackmail. However, these risks are more acute for communities in low- and middle-income countries (referred to as the Global Majority), who lack the institutional safeguards that wealthier Western democracies can sometimes take for granted. Moreover, the rights of the poor are frequently undermined with the promise of techno-solutionism, the idea that the ‘right’ technologies—code, devices, algorithms, platforms and artificial intelligence—can solve society’s problems.

A year ago, Marium Akter (pseudonym) received her smart national identification (NID) card. At the time of collecting her personal information and biometric data, the ‘officer’ promised that this would ease receiving her social safety benefits, and provide security against fraud when accessing any device or online service. Weeks after signing up, Marium started receiving strange phone calls at midnight. The caller claimed to have access to her NID information, even shared some of it accurately, and blackmailed her for money in exchange for not leaking her information online. They threatened to file false criminal allegations about her with the local police, suggesting it would affect her government benefits. Soon after, Marium got messages from marketing services, many of which she had never heard of. Her phone was constantly ringing, straining her relationship with her husband. Based on the threats, it appeared that the police and local government officers might be involved in the scam, leaving her unsure of whom to approach. She eventually disconnected her device out of fear.

Marium’s case may seem anecdotal, but last year, TechCrunch, along with multiple national dailies, reported that a Bangladeshi government website leaked the personal information of more than 50 million citizens. To grasp the scale and severity of the breach, it’s worth noting that personal information in the government’s NID database is tied to an individual’s birth certificate, SIM card registration, bank accounts, passport, voter card, and pretty much every service one can imagine. At the time, the government acknowledged the breach and attributed it to “weak web applications” and “poor security features” of “some government organizations”. A few months later, NID data was available on Telegram, easily accessible and searchable using a bot. The then system manager of the NID Wing of the Bangladesh Election Commission confirmed that 174 organizations had access to the NID server; through any one of them, security could be compromised.

In the months preceding the leaks, the then Home Minister, Asaduzzaman Khan, told the press that there was a process underway to shift the central NID database from the Election Commission to the Ministry of Home Affairs, noting that most countries maintain their citizen records more securely under the executive branch. The National Identification Registration Act was passed in September last year, confirming the move. In November, a Wired story found that millions of NID records, along with other sensitive personal information, had been left exposed online by the National Telecommunications Monitoring Center, a national intelligence outfit under the Home Affairs Ministry.

But that’s just the tip of the iceberg.

Jasmine Begum (pseudonym), a small business owner, recently joined Facebook to promote her handcrafted items. She kept seeing explicit and gambling ads on her feed, obscuring content from her customers and friends. She was perplexed as to why she was seeing these ads, or how to stop them, and feared disapproval, or even being disowned, by her family. Despite actively avoiding adding her husband or in-laws on her Facebook account, she kept receiving recommendations for their profiles, adding to her fear that they might find out about her online activities. Against the backdrop of a patriarchal and conservative community, Jasmine was afraid that her husband would think she was involved in explicit or gambling activities in the guise of entrepreneurship, take away her device or even physically hurt her, leaving her with no recourse.

For nearly two decades, social media companies have collected vast amounts of personal data, extending from activities on the platforms themselves to third-party websites, browsers and devices. The data is used not only to micro-target ads, but also to decide what should appear on someone’s feed, what product features they can access, ‘friends’ recommendations and the entirety of their online experiences. Jasmine’s example illustrates how algorithmic curation of content breaches individual privacy expectations and could even endanger people.

Although there is increasing public and regulatory pressure to protect user data, leading to some product changes globally, these have little to no impact on communities outside the U.S. and other Western democracies. Privacy policies are not written for the average non-native English speaker, and even with translations, are framed in ways that are incongruous with Global Majority behaviors. Similarly, transparency features like ‘Why Am I Seeing This Ad’ and standard privacy controls are opaque, contextually inappropriate, and do not address the needs of non-U.S. communities. Eighty-nine percent of social media users in 19 surveyed countries, including Bangladesh, indicated that they do not understand platform privacy policies or product features, according to a study conducted by the Tech Global Institute.

And if large platforms are one side of the dystopian coin, the other side belongs to a plethora of app-based startups. Women’s health apps (mHealth) are increasingly popular in low- and middle-income countries, but research on 23 of the most popular mHealth apps has found that all of them allow behavioral tracking. Sixty-one percent of the apps also allow location tracking, and 87 percent shared data with third parties. A separate study of 224 fintech and loan apps targeting African and Asian customers found that 72 percent had some level of cybersecurity risk that exposed sensitive personal and financial data, and shared data with third parties without explicit consent.

Where does the individual citizen turn? Neither the government nor private entities can be trusted to safeguard their privacy.

In an ideal system, legislative action would be a way forward to hold both the public and private sectors accountable. The draft Personal Data Protection Act, having received in-principle approval from the Cabinet Division, should have been a step in the right direction. However, it became a concoction of provisions drawn from the EU’s GDPR, the Indian Digital Personal Data Protection Act, and the Singaporean Personal Data Protection Act, retrofitted into Bangladesh’s legacy institutional frameworks. In simpler terms, the draft Act consists of arbitrary consent mechanisms, undue compliance burdens, and weak grievance redressal systems, combined with data access obligations without procedural safeguards, similar to requirements under the Cyber Security Act and the Bangladesh Telecommunications Regulatory Act. When read together, sections 33 and 34 of the draft Act imply that government institutions do not have the same duty of care as private entities towards safeguarding personal data.

In a nutshell, by replicating existing frameworks, the draft Personal Data Protection Act misses out on critical local nuances, rendering it likely ineffective in addressing privacy concerns. 

An alternative approach could have been for the draft Act, and other data protection and privacy interventions, to mandate product and policy changes that would meet privacy expectations. For example, it could have required tech companies and digital products to simplify their terms and privacy policies, including providing visual cues and modularizing consent, to ensure they can be easily understood by communities. It could also have instituted a robust grievance redressal mechanism within tech companies and government agencies, with clear timelines for resolution, that can be used by anyone, irrespective of their digital literacy skills.

These changes, however, are not about one legislation or lever. Fundamentally, privacy, as a practice within digital ecosystems, has never been investigated in Global Majority contexts. It is still largely seen through either a legacy or an imperialistic lens, resulting in weak regulatory interventions and performative safeguards that pose significant risks of undermining fundamental rights. For decades, people in poor countries were made to believe they had to choose between using a great product and expecting it to protect privacy, be safe and respect human values. And that it was their fault, their lack of knowledge, that made technologies difficult, intimidating and harmful. More often than not, mitigation approaches try to change the behaviors of the end consumer, rather than centering design, development and governance around what works for the people.

Research indicates that equipping mobile devices with multiple user profiles, akin to Windows or Mac operating systems, offers privacy safeguards without attempting to alter device-sharing behavior in collectivist societies like Bangladesh. While there are recent efforts to incorporate human-centered design into pro-poor technology solutions, these are built on economic values rather than human rights. And perhaps this is the fundamental frame-shifting that we need to do: to begin respecting the rights of the poor on par with meeting their economic aspirations, instead of believing in the fallacy of a zero-sum game.
