title: How Platform-User Power Relations Shape Algorithmic Accountability: A Case Study of Instant Loan Platforms and Financially Stressed Users in India
authors: Ramesh, Divya; Kameswaran, Vaishnav; Wang, Ding; Sambasivan, Nithya
date: 2022-05-11 DOI: 10.1145/3531146.3533237

Accountability, a requisite for responsible AI, can be facilitated through transparency mechanisms such as audits and explainability. However, prior work suggests that the success of these mechanisms may be limited to Global North contexts; understanding the limitations of current interventions in varied socio-political conditions is crucial to help policymakers facilitate wider accountability. To do so, we examined the mediation of accountability in the existing interactions between vulnerable users and a 'high-risk' AI system in a Global South setting. We report on a qualitative study with 29 financially-stressed users of instant loan platforms in India. We found that users experienced intense feelings of indebtedness for the 'boon' of instant loans, and perceived huge obligations towards loan platforms. Users fulfilled obligations by accepting harsh terms and conditions, over-sharing sensitive data, and paying high fees to unknown and unverified lenders. Users demonstrated a dependence on loan platforms by persisting with such behaviors despite risks of harms such as abuse, recurring debts, discrimination, privacy harms, and self-harm. Instead of being enraged with loan platforms, users assumed responsibility for their negative experiences, thus releasing the high-powered loan platforms from accountability obligations. We argue that accountability is shaped by platform-user power relations, and urge caution to policymakers in adopting a purely technical approach to fostering algorithmic accountability. Instead, we call for situated interventions that enhance the agency of users, enable meaningful transparency, reconfigure designer-user relations, and prompt critical reflection in practitioners towards wider accountability. We conclude with implications for responsibly deploying AI in FinTech applications in India and beyond.

Accountability is necessary to ensure that artificial intelligence (AI) is deployed responsibly, especially given the wide applicability of AI algorithms to several automated decision making contexts with 'high stakes' [40, 57, 58, 81]. While automated decision systems (ADS) [106] have the potential to make more efficient and fairer decisions than their human counterparts [45, 52], they could also produce harmful outcomes, worsening inequality in society [20, 23, 50, 54, 92, 94, 98]. Through accountability relationships, the actors responsible for harms caused by ADS can be obligated to provide 'accounts' to the individuals who are harmed; the individuals or their representatives may then judge the accounts, and seek to impose consequences if necessary [121]. In this way, we can ensure that the use of ADS occurs in accordance with the interests of all stakeholders. Facilitating organizational and technical transparency could reduce distrust among stakeholders, and enhance accountability relationships [10, 53, 90, 112]. Given their success in the US and UK, transparency mechanisms such as audits and explainability are being mandated in policies worldwide [9, 28, 91].
Consequently, information disclosure by technology providers is often viewed as a precursor for algorithmic accountability [53, 64, 89]. However, recent work suggests that the success of enhancing accountability relationships through transparency may be limited. The perceived agency of stakeholders [69, 81], their education levels [43], and their optimism in AI [73] could complicate the rhetoric of 'stakeholder distrust in ADS.' Further, the efficacy of transparency mechanisms towards accountability depends on the presence of a critically-aware public, legislative support, watchdog journalism, and the responsiveness of technology providers [17, 60, 76]. Unfortunately, these preconditions may be unique to Global North contexts [109]. Understanding the limitations of current approaches in varied socio-political conditions is crucial to help policymakers adopt context-appropriate interventions, and ensure wider accountability. Prior work has sought to ease the burden on technology providers towards fulfilling transparency obligations [89, 102], and studied their impacts on users affected by ADS [42, 113, 122]. Yet, we know little about on-the-ground manifestations of accountability in ecosystems where some of its preconditions do not hold true. To fill this gap, we examined how algorithmic accountability is mediated in existing interactions between vulnerable users and a 'high-risk' ADS in a Global South setting; one where there is weak legislation and nation-wide high optimism for AI. We conducted a qualitative study with financially stressed low and middle income users of instant loan platforms in India. These platforms target 'thin-file' borrowers (i.e., users ineligible for offerings from formal financial services) with various small credit offerings, often in the range of INR 500 - INR 100,000 (USD 7 - USD 1500). The loan platforms use machine learning algorithms trained on alternative data (non-traditional financial modeling data such as mobile phone and social media usage, financial transactions, images, and videos) to model risk and make lending decisions [5]. Instant loan platforms have risen to prominence in recent years through a combination of factors such as affordable smartphones [80], the state's push for widespread digital adoption [97], the promotion of financial technology (FinTech) as the poster child of AI success in India [11], and the financial challenges brought to users by the COVID-19 pandemic [18]. Through semi-structured interviews with 29 users of instant loan platforms from low and middle income groups in India, we examined how financially stressed users made meaning of their experiences with the 'high-risk' ADS, and how they perceived their relations to accountability. We found that users were drawn to loan platforms due to the promises of immediate money, minimal verification, and long tenure periods, which were enabled by instantaneous and synchronous aspects of AI. Users also perceived additional benefits such as enhanced privacy and dignity, preserved social ties, and social mobility through the use of these platforms. Since users had few avenues to seek financial assistance, they perceived instant loans as 'boons', and developed emotional attachments towards lenders. Users perceived and fulfilled several obligations towards lenders, even at the risk of undergoing abuse, discrimination, emotional and reputation harms, and self-harm. Yet, instead of being enraged with loan platforms, users shared responsibility for their negative experiences.
Through this work, we make the following contributions: First, we explore the relationship between users' ADS experiences, their social conditions, and accountability. In doing so, we build upon previous work in FAccT, and explore the social dimensions of accountability through a case study on loan platforms in the Global South. We make an empirical contribution regarding how low-powered users perceive and demonstrate a dependence on the 'high-risk' ADS, holding themselves responsible for its failures; and how these user behaviors release high-powered actors from accountability obligations. Next, situating our findings in the literature on accountability, we argue that algorithmic accountability is mediated through platform-user power relations, and can be inhibited by the socio-political realities of the context. We urge caution to policymakers in adopting universal technical interventions to foster accountability, and instead propose situated [114] approaches towards achieving parity in platform-user power relations. Our proposal includes: 1) Enhancing user agency through critical awareness, 2) Enabling meaningful transparency through collective spaces, 3) Re-configuring designer-user relations through community engagement, and 4) Committing to justice through critical reflection. We conclude with implications for using 'alternative data' in FinTech applications in the Global South.

In this section, we give a brief overview of the platform-user dynamics envisioned in the literature on algorithmic accountability, the mechanisms designed to structure accountability relationships, and a glimpse into India's AI landscape. Technical and organizational opacity are viewed as primary barriers to fostering accountability [27, 98], suggesting the need for transparency from technology providers [35-37]. Technology providers have also taken steps towards increasing transparency due to a combination of user demands and self-imposed responsibility [19, 84, 117, 123]. For instance, researchers studying users' experiences of ADS have reported increased distrust among users and their desire for transparency into ADS [25, 47], Uber drivers accused the company of deception due to its use of an opaque ADS and demanded more transparency [116], and Yelp users expressed the need for transparency into its recommendation algorithm [48]. Consequently, several regulatory policies mandate public access to information, hoping that affected users will use this information towards making accountability demands of technology providers [16, 32, 51, 59, 99]. In fact, such an approach has been extremely successful recently. After audit trails of harmful facial recognition systems were made available to the public, widespread public campaigns eventually led to their regulation in the US and UK [26]. Twitter took steps to modify its biased image cropping algorithm to satisfy user demands [123]. However, recent work casts doubt on the generalizability of these platform-user dynamics to varied contexts. Favorable outcomes to users from otherwise discriminatory ADS may impede their accountability efforts [48, 120], users may resist imputing moral responsibility to ADS [21], and notions of accountability could vary by users' backgrounds [70]. In addition, platform-user relations may be more nuanced than the often-cited rhetoric of distrust.
Nation-states and users in the Global South view ADS aspirationally and deferentially [109]; users may attribute far-reaching capabilities to ADS, placing misguided trust in them [93]; and ADS may enjoy a legitimized power to influence users' actions, even with little or no evidence of their true capabilities [73]. Prior work calls for aligning transparency with user needs [42, 75]. These findings warrant a closer examination of accountability dynamics in varied socio-political conditions; a gap that we seek to fill in this paper. A recent survey of algorithmic accountability policies in the public sector from 20 national and local governments found that transparency was the prime focus of policies [9]. Under a dynamic where users express skepticism and seek to take action towards accountability, transparency (i.e., of models, datasets, and practices surrounding the development of ADS) is a viable mechanism [53, 64, 89]. Mechanisms to increase transparency can be standalone, such as documentation (i.e., of source code, datasets, models, and processes surrounding the development of ADS) [21, 30, 55, 64, 83, 105] and explainable decisions (i.e., to help users make informed choices when interacting with ADS) [10, 21, 42, 83, 100, 119]; or be embedded in other mechanisms such as algorithmic audits [26, 46, 112] and impact assessments [101, 102]. In fact, some studies with audits and explainability mechanisms have documented positive outcomes such as raising users' critical awareness [100], increasing their desire to seek accountability from the designers of ADS [113], and influencing technology providers to make changes to ADS [46, 101]. However, the efficacy of mechanisms in fostering accountability also relies on other factors such as a critically-aware public, legislative support, watchdog journalism, and the responsiveness of high-powered actors [17, 60, 76]. Raji and Buolamwini acknowledge the importance of consumer awareness and capitalistic competition in complementing their audit efforts in facial recognition regulation [101]. Unfortunately, such surrounding conditions for accountability are not universally available [109]. Organizational-level changes from technology providers most often occur as a result of regulatory and user pressures [103], low-powered users may find it challenging to regain their agency displaced by platforms [69, 81], and mechanisms may have limited efficacy where there is power asymmetry [17, 76]. This line of work calls for examining platform-user power relations when designing mechanisms. India, a country of 1.38 billion people where half the population is under the age of 25, is considered an emerging force in AI due to its growing information technology workforce, research in AI, investments, and cloud-computing infrastructure [31]. India envisions AI as a force for socio-economic upliftment, which is seen through state-supported industry initiatives [49] and the wide deployment of AI in surveillance [67], agriculture [34], and welfare processing systems [49]. However, such promotion is accompanied by weak legislation. The two national AI strategies, i.e., the AI Task Force report [115] and NITI Aayog's National Strategy for AI [91], are focused on increasing the adoption of AI [115] or include prescriptive guidelines towards accountability with insufficient enforcement mechanisms [91]. Similar recommendations are found in state-level policies on AI in India [56].
Policy-oriented research from the FAccT and HCI communities has pointed out how adopting accountability frameworks from the Global North may fail without due consideration of the local contexts where they are applied [71, 85, 109]; Sambasivan et al. have noted the differences in axes of discrimination and notions of fairness [109]. Kalyanakrishnan et al. and Marda et al. documented the amplification of biases when using Western frameworks in the Indian context [71, 86]. We contribute to this emerging line of work on AI policy and research agendas in India. We conducted 29 semi-structured interviews with low and middle income individuals (16 men, 13 women) primarily from the Karnataka and Tamil Nadu regions of India. We recruited participants who had used instant loan apps from non-banking financial companies within 6 months to 2 years of our study, through the DoWell Research agency and snowball sampling. We provided INR 1500 (USD 20) as incentives to our participants. We sampled participants based on age, gender, prior experience using instant loan applications, and success of loan approval. We conducted virtual interviews in English, Kannada, Tamil, and Hindi lasting 35-110 minutes (average of 55 minutes). The first author conducted 26 interviews (2 with the help of a translator), and a non-author colleague conducted 3 interviews. We sought prior written consent, and informed verbal consent before the start of the interviews. During the interviews, we focused on eliciting narratives [79] from participants to understand 1) their interests, their education and family backgrounds, 2) their financial situations during the pandemic, 3) their experiences with lending and borrowing through instant loan apps and other means, and 4) their notions of justice in lending and borrowing. Interviews were transcribed and/or translated within 2-3 days of each interview. We used a professional service for transcribing the regional language interviews, which were all then individually verified by the first author. Towards the end of the interviews, we used a scenario as a probe to elicit participants' opinions on alternative credit. We first explained what AI meant to participants through examples of YouTube and Facebook, and then presented this scenario: Due to COVID, many people are in need of money but don't have jobs, or access to PAN cards and bank accounts. Some apps suggest using AI to make lending decisions. Instead of bank details, they will look at users' mobile phone information such as biometrics, location, call logs, financial transactions and shopping apps used on devices, and users' social media activity to make decisions on loan applications. They believe that this approach will increase people's access to loans. What are your thoughts about this? We conducted reflexive thematic analysis to analyze our data [24]. In the familiarization phase, the first author listened to each audio recording at least once, and read each transcript at least twice, paying close attention to participants' choice of phrases, especially in regional languages, their emotional reactions to questions, hesitations, pauses, and repetitions. We recorded these observations and reflections and shared them during weekly research meetings with the rest of the team, which then served as aids in coding the data.
In the coding phase, the lead author followed an open-coding approach first, staying close to the data (e.g., needing money urgently, not telling friends the reason for money) [108], and iteratively revised the codes with the second author (e.g., 'instant' money, preserving privacy, feelings of indebtedness), resolving disagreements through discussion. We generated and refined themes by going over the data, engaging with literature, and through weekly research meetings with the third and fourth authors. In this work, we present 3 themes that we generated from 11 stable codes: (1) perceived benefits of instant loans, (2) fulfilling obligations towards loan platforms, and (3) demonstrating dependence on loan platforms. We approached this topic with great care, knowing the dire circumstances of participants. We reflected carefully on whether this study was time-appropriate. Several participants were ecstatic to be part of our research as a way to express their gratitude towards loan companies through our report. One participant requested extra time to share their experiences in depth. These incidents helped us view our participants as individuals in their own right, rather than as victims of their circumstances, and gave us confidence that this research was timely. During the interviews, we let participants guide the discussion towards the experiences that were most salient to them. We stored data on Google Drive and restricted access to the research team. We also took care to anonymize the data and report them in this paper. We intentionally do not specify the names of the loan platforms that we recruited users from, to preserve anonymity. Although we attempted to recruit participants across gender, our sample skews more towards men. We also do not have any perspectives from nonbinary-identifying individuals. Due to the COVID-19 pandemic, we conducted all our interviews over video and phone, which limited our ability to include observations and contextual inquiry. The first author's caste and class privilege (evident through name and dialect) may have influenced participants' responses. Our research team has over 10 years of experience working with marginalized populations in the Global South. Reflecting on our positionality, we elicited narratives with care, and analyzed the data extensively to cover multiple themes and ensure validity. 10 participants belonged to urban-middle income groups, and the remaining 19 participants belonged to urban-low or lower-middle income groups. 25 of our participants worked in the service sectors as accountants and chefs in restaurants, carpenters, customer service, sales and marketing representatives, tailors, taxi and auto-rickshaw drivers, or owned small businesses. 2 participants worked in the health and education sectors, and 2 participants identified their primary roles as "house-wives." All our participants incurred significant loss of income during the pandemic. Several participants (n=16) were responsible for supporting 4-5 member households with reduced or no incomes; they had pledged or sold the few assets they possessed, and in a few cases, the very assets that were sources of income to them. In addition, vulnerability for them meant having to comply with exploitative rules from informal lenders, their children's schools, and local state offices, while having no monetary or social capital to even claim their rights. Instant loan platforms, primarily classified as 'FinTech,' provide technical infrastructure to connect NBFCs (shadow banking entities that offer financial services without a banking license) [1] with borrowers.
They offer small, short-term loans, typically INR 500 - INR 500,000 over a period of 15 days to 6 months, using machine learning on a combination of CIBIL scores (credit bureau scores) and 'alternative credit data.' Although the workings of these loan apps are proprietary, most instant loan apps, in their privacy policies, disclose using the following as 'alternative credit data': 'know-your-customer' (KYC) data such as names, addresses, phone numbers, PIN codes, reference contacts, photos and videos, Permanent Account Number (PAN), and Aadhar number (unique identification number); device information such as location, hardware model, build model, RAM, storage, unique device identifiers like Advertising ID, SIM information that includes network operator, WiFi and mobile network information, and cookies; financial SMS sent by 6-digit alphanumeric senders; and information obtained from third-party providers for making credit decisions [2, 4-7]. Applications also use this data for analyzing user behavior for advertising and security purposes. Apps use AI in several other ways, including facial recognition for completing verification, natural language processing for information extraction and contract automation, machine learning for fraud detection and market analysis, and chatbots to provide customer service [3]. These platforms, targeted at borrowers from low and middle income groups, have proliferated in the market recently, and are hailed by the state as the 'drivers of economic growth' for 'unbanked' India [11]. All participants unanimously cited the promise of 'instant cash' as the primary reason for trying instant loan platforms. We found that this promise was the precursor to a cycle of reciprocal exchanges between loan platforms and users, which we discuss with the help of the following themes: (1) perceived benefits of instant loans, (2) fulfilling obligations towards loan platforms, and (3) demonstrating dependence on loan platforms. Participants who were successful in availing instant loans through the applications expressed great excitement and gratitude towards these platforms. While many of our participants faced significant financial hardships even before the COVID-19 pandemic, almost all of them experienced exacerbated difficulties during the pandemic. Several participants (n=16) reported seeking loans to either supplement or substitute their lost incomes. These platforms also offered attractive benefits, giving our participants the perception of loans with no strings attached. We highlight the perceived benefits of instant loan platforms with the help of the following codes: 1) Being able to access money anytime, anywhere, 2) Ensuring dignity and privacy, 3) Preserving social ties, and 4) Promising social mobility.

5.1.1 Being able to access money anytime, anywhere. Participants' enthusiasm for instant loans often highlighted their distrust in formal banking sectors, a finding also reported in research on the financial experiences of other vulnerable populations [95, 118]. Our participants despised the extensive verification processes of formal loans. Formal loan processes required applicants to submit a long list of identity verification documents such as birth certificates, caste certificates, asset documents, and employment certificates. Several of our participants did not have these documents to begin with, shutting them out of formal financial systems. Finding the right set of documents to produce is never an easy task for anyone, and was exceptionally difficult for those participants who had lower levels of education and literacy.
Further, participants were required to seek willing guarantors who would support their applications, open up their homes to unannounced visits from loan officers, and haggle with them for weeks, even after which there was no guarantee of a loan. Instant loan platforms were sources of immediate money and also the only means of survival for participants during difficult times when they ran from pillar to post to seek financial help. Participants used instant loans to manage their everyday expenses, ranging from buying groceries and paying their children's school fees to clearing outstanding debts. Thus, as in the old adage, several participants equated the loan platforms with friends, and expressed intense feelings of indebtedness towards loan companies. P01, who sought instant loans when his business went haywire, said, "It really helped me during my tough times, so I actually owe them and I'm actually [still] owing them... I would recommend this app to so many of my contacts and I would say just like how 'a friend in need is a friend indeed'." Our participants perceived and fulfilled several obligations toward the loan platforms, which we explain through the following codes: a) Accepting harsh terms and conditions, b) Over-sharing sensitive data, and c) Making high fee payments. In anticipation of 'instant' money, several participants acknowledged that they had simply clicked 'I agree' to the terms and conditions of the apps, without expending the slightest effort to understand what they were consenting to. Likewise, very few participants recalled the specific terms that had been imposed by the apps, which we found ranged from peculiar to extremely harsh. For instance, P4 explained how he generated 3D views of his face as instructed by a facial recognition bot: "It will ask for a selfie. Turn both the sides, open the mouth... blink your eyes, rotate your head." Obliging such requests was mandatory if participants wished to proceed with their applications. Some others recalled agreeing to potential legal action and home visits in the case of defaulting on small loans. Such acceptances were viewed as mere formalities in getting access to progressively larger credit limits. For instance, P3 recalled their speedy acceptance of terms, and the subsequent ascent in credit limits: "[I]f you're not repaying the loan then they will take the legal action. They can also come to the home, and you have to pay a penalty of eight rupees per day... [Y]ou have to say okay to all these things. [...] After this, they will give you 500 (USD 7) rupees first. Once you repay, they will give you 1000 (USD 15) (and so on)." Quite naturally, our participants did not expect to negotiate the terms and conditions of instant loans. In fact, they believed that if they were being offered money during a financially difficult time, they had an obligation to accept all the terms and conditions associated with the money. In addition, almost all our participants strongly believed that regardless of the terms and conditions, a loan once sought must be rightfully returned to its owner to 'restore justice.' Consequently, our participants assumed all responsibility for a loan borrowed, and frequently associated defaulting on loans with ideas of 'cheating' and 'injustice' to the lender. These ideas were also supported by participants' cultural and religious beliefs. P15 explained: "[I]f you intend to cheat somebody, you shouldn't take a loan... If we have taken a loan, we must repay it correctly [...]
[Else], we will face sazaa (punishment) from Allah." Over-sharing sensitive data. Several loan platforms also required access to users' media and gallery, phone books, WhatsApp and Gmail contacts, location information, financial transaction texts, app usage analytics, and other device information. We found that our participants perceived sharing such data as tests of credibility. For them, withholding data meant a lack of confidence in their own abilities to repay loans. Attempting to borrow loans despite not being confident was equivalent to 'cheating' loan platforms. P16 explained, "It is okay if they collect information. If I have an intention to cheat then I should be scared. [...] If I am willing to repay fairly, I need not be scared." We also found that participants' mental models of instant loans shaped their data-sharing practices in complex ways. Several participants expressed some discomfort sharing such data. Some associated sensitive data with ideas of 'intimacy.' As P15 put it, "If they are tracking where I'm going and what I'm doing, it's like sharing my family background (colloquial: wife's background) with them." Others discussed fears of misuse and online scams. Yet, all participants had either already enabled permissions unknowingly, or showed willingness to do so. P24 weighed her discomfort against the need for money and arrived at a compromise: "I got a thought that they will hack. But at that time money was important... [N]ow in home loan we pledge papers, in gold loan we pledge gold, in the same way digitally we have to pledge all our information." Being used to models of lending and borrowing where trust in the exchange was facilitated through the value of pledged assets, our participants 'pledged' sensitive data as high-credibility collateral assets. Instant loans came at high initial costs to participants. Platforms charged processing fees, disbursal fees, and down-payments, often taking away 20-25% of loan amounts during disbursal. This was in addition to the high floating interest rates (15-35%) and penalties charged by platforms. For P10, these high fees were a small cost for the convenience during what were difficult times for her: "in case we don't pay consecutively for a month, some charges are there, but I wouldn't call it a disadvantage. When you are getting all these advantages that's a common thing. That's perfectly fine." In addition, participants discussed how repeatedly borrowing through the same platform easily offset such costs. They received attractive benefits like promotional codes, discounts, and better terms on new loans as rewards for their loyalty; these reciprocal exchanges were perceived as mutually beneficial by participants. FinTech companies often tout narratives of financial inclusion for 'unbanked' users through instant loans [5-7]. We found that such inclusion came at the cost of participants' dependence on loan platforms. Participants circumvented barriers to accessing loans, borrowed cyclically through loan platforms, tolerated abuse from predatory lenders, and shared responsibility for their negative experiences, potentially leading to their financial and technology exclusion. We discuss these findings with the following codes: 1) Circumventing algorithmic discrimination, 2) Recurring debts, 3) Tolerating abuse, and 4) Assuming responsibility for loan platforms' failures.
While instant loan apps are designed as single-user applications, we found evidence of intermediated use among our participants, as is commonly reported in previous research on technology use in the Global South [12, 41, 72, 110]. Participants sought the help of others, often immediate family members or trusted close friends, to download and navigate the apps, submit their applications, and manage payments. In some cases, we also found that participants attempted to borrow loans through others' devices and profiles. For instance, P10 borrowed through her husband's phone after intuitively recognizing that instant loan limits could be influenced by the gender pay gap and gendered patterns of digital activity. She even suggested that a friend borrow through her husband's phone, saying, "Maybe our salary cycle is less and husbands' salary cycles are more. And they have the credit cards and stuff. Maybe it's interlinked." While such stereotype reinforcement through ADS could be viewed as a potential barrier to access, evoking strong reactions in the West [22], our participants did not perceive it so. In fact, they underscored the importance of reading intentions when attributing experiences to discrimination. P07 acknowledged that disparate treatment could lead to unfair outcomes, but asserted that instant loan platforms did not intentionally discriminate based on gender: "If they are giving less to women and more to men, it will not be correct. [...] But when it comes to loans, mostly they will give equal amounts to everybody." Our participants shared similar views towards other issues of algorithmic discrimination. P08, who identified as dark-skinned, talked about whitening their face digitally to get around potential intersectional accuracy disparities [26] in instant loan technology: "We could use 'FaceApp' to modify our looks. If they (loan platforms) are grand thieves, we are petty thieves. That is the only difference." Instant loan platforms made borrowing money pleasurable for participants by offering gamified engagement. In addition to the in-app discounts, surprise offers, and virtual coins that we discussed earlier, we found that some apps also gamified credit limit increases; the platforms would first offer small amounts like INR 5,000-10,000 (USD 75-135); users would then 'unlock' higher credit limits when they neared their repayment terms, mimicking level increases in virtual games. Some participants suspected that such gamified mechanisms were recovery nudges in disguise. P7, with an outstanding loan of INR 10,000 (USD 135), noted, "Now that INR 2,00,000 (USD 2700) lock has opened up... I feel they opened the lock to show me that I will be eligible for a larger amount when I close this loan." We found that these mechanisms, in addition to our participants' ready acceptance of instant loans, had led several of them to borrow beyond their capacity. Such participants had then engaged in cyclical borrowing from several different apps to 'balance' their loans. P23 had once gotten into an addictive rhythm of unlocking higher credit limits in the loan apps: "[Let's say] we have cleared the first level, so it seems like they have confidence in us, and have automatically increased the limit. [...] I would take a loan from another app to repay the friend. I had loans from 4 apps at one point." Other participants didn't consider themselves 'addicted' to instant loans, but regretted cyclical borrowing. They justified recurring loans as unavoidable by-products of their financial vulnerability and social obligations.
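To make concrete how quickly the fee structures and cyclical borrowing described above can compound, the following is a minimal illustrative sketch in Python. The figures (loan size, upfront fee, interest, and tenure) are hypothetical values chosen from the ranges participants reported, not the pricing of any specific platform.

```python
# Illustrative (hypothetical) effective cost of an instant loan, using figures
# drawn from the ranges reported by participants; not any specific platform's pricing.

principal = 10_000        # INR: nominal loan amount approved
upfront_fee_rate = 0.20   # processing/disbursal fees deducted at disbursal (20-25% reported)
interest_rate = 0.30      # interest charged on the nominal amount (15-35% reported)
tenure_days = 90          # a tenure within the 15-day to 6-month range reported

amount_received = principal * (1 - upfront_fee_rate)  # cash the borrower actually gets
amount_repaid = principal * (1 + interest_rate)       # amount owed at the end of the tenure

# Effective cost relative to the cash actually received, annualized for comparison
effective_rate = (amount_repaid - amount_received) / amount_received
annualized_rate = effective_rate * (365 / tenure_days)

print(f"Received: INR {amount_received:,.0f}, repaid: INR {amount_repaid:,.0f}")
print(f"Effective cost over the tenure: {effective_rate:.0%} (~{annualized_rate:.0%} annualized)")
```

Under these illustrative assumptions, a borrower receives INR 8,000 in hand but repays INR 13,000 within three months, an effective cost of roughly 62% over the tenure; repaying one such loan by borrowing from another app compounds this cost with every cycle.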
We also found evidence of abuse in our study. Through loan platforms, some participants fell prey to predatory lenders who employed aggressive recovery tactics for small amounts of money, as little as INR 2000 (USD 27). Their tactics included repeatedly harassing borrowers for repayments through calls and texts, issuing threats of legal action, broadcasting sensitive information to borrowers' contacts on WhatsApp and other social media, shaming defaulters, targeted harassment of borrowers' contacts, and home visits. The digital medium allowed predatory lenders to abuse borrowers at scale. For instance, lenders performed semantic association on borrowers' contact lists to identify their close contacts (sometimes inaccurately) and harass them. Such tactics caused immense emotional and reputation harm to participants, and damaged their dignity. P4 encountered stigmatization in their social circles: "They contacted my friends and family through WhatsApp. They shared my photo and published my details saying I had taken loan and hadn't repaid and started harassing them... Because of this I lost a lot of friends. I even had troubles with relatives. I ended up losing my job. [...] I was very upset but did not share it with anyone. At one point, I even tried to commit suicide." Unfortunately, until December 2020, instant loan platforms had received little attention from the Reserve Bank of India (RBI). Thus, several predatory loan platforms had flourished, causing many borrowers to die by suicide [18]. Our participants were aware of such risks; yet, very few were critical of the platforms. Participants' emotional attachments made it challenging for them to seek accountability from the platforms. P08 described the extent of vulnerability: "When we are unable to borrow from our friends, this loan app is helpful to us, just as a friend in need. When we don't even know where we can get money, this app decides automatically and at least grants us INR 1000 (USD 14). It does not see caste or religion or skin color. They simply believe us based on what we type (our data) [...]. So, we cannot find fault with the app. [...] This app helps us when all others have abandoned us." Our participants often viewed their negative experiences through the lens of 'incompetency', and assigned self-blame for their experiences. Even when loan platforms were at fault, participants reasoned that the platforms were, after all, offering loans with no asset requirements; therefore, any rule or tactic was justifiable. P23, a survivor of abuse from a predatory loan platform, reflected on their learnings: "I was very firm about not availing app loans but since my friend suggested it I took it. [...] I did not think about whether we could repay the loan during difficult times. Corona has taught me a very good lesson." Other negative experiences included losing money to fake apps, or being rejected by loan platforms without due explanation. Contrary to normative expectations of recourse, such negative technological experiences induced feelings of 'shame' in our participants, who were less likely to share such experiences with their peers or seek help. In addition, participants' ardent optimism in technology and a lack of confidence in their technical abilities often led them to assume unfair responsibility for their negative experiences. P22, who was confident about her creditworthiness, blamed her lack of technical skills for an unexplained loan rejection: "Maybe I made some mistake while typing. Because if they look at my PAN card, they will definitely give me a loan.
So I feel that there must be some kind of mistake that I made." Unfortunately, for participants with a futuristic outlook on technology, negative experiences reinforced their belief that they would never be the intended audience for 'high tech' applications, resulting in technology abandonment. As P2 put it, the doors to an AI-powered future remained closed to them: "I felt that this gate had closed for me. I felt I shouldn't go around and ask for money, or on these apps." Our work fills a critical gap in the research on algorithmic accountability: we provide an understanding of the social conditions of accountability through the experiences of (potentially) vulnerable users who are constrained in their capacity to seek accountability from technology providers. We situate these findings in the larger discourse on algorithmic accountability, and provide some suggestions for contextualizing the design of accountability mechanisms. We conclude with implications of our work for the use of alternative data in FinTech applications in the Global South. Current discourse on algorithmic accountability rests on the existence of accountability relationships between technology providers responsible for causing harm through ADS, and the individuals experiencing harm through ADS (or their representatives) [121]. In this relationship, the technology providers are obligated to provide 'accounts' to those individuals who are harmed [15, 96, 101, 123]; these individuals or their representatives may then judge the accounts and seek to impose consequences if necessary. Consequently, much work in algorithmic accountability often presents 'sharing of information' by technology providers as the first phase of accountability [53, 64, 89, 121]. Prior work calls for involving affected individuals in designing accountability mechanisms to ensure that the information is meaningful to them [42, 75]. Our work extends this argument to show that purely technical approaches to accountability obscure the socio-political realities of stakeholders that make such 'information sharing' necessary in the first place. In our study, exchanges enabled by AI-based instant loans reconfigured users' relations to instant loan platforms in ways that distract from the goals of algorithmic accountability. First, users were placed into positions of 'indebtedness' with loan platforms. Users in our study were largely 'thin-file' borrowers, making it difficult for them to secure loans from formal financial institutions. They had primarily relied on informal loans for their borrowing needs, which had come with huge social costs to them. Thus, instant loans, with seemingly no collateral requirements and no strings attached, were viewed as a huge 'favor' by users. Under users' debt relationships with loan platforms, it was the users, rather than the platforms, who perceived obligations. Users fulfilled these obligations in both material and intangible ways, and persisted despite human and other costs, such as abuse, discrimination, recurring debts, privacy harms, and self-harm. Contrary to the normative behaviors of outrage among users documented in work from the West [101], users in our study did not believe it was their right to question the terms and conditions of lenders. Instead, they assumed responsibility for the failures of loan platforms, thus demonstrating a dependence, and releasing those high-powered actors from the obligations of accountability.
Thus, we argue that algorithmic accountability is mediated through platform-user power relations, and can be stymied by the on-the-ground socio-political conditions of users. Responsible development of AI cannot be universally achieved without paying close attention to these situated [114] power dynamics. We need more research on the relationship between accountability mechanisms, the agency of users, and the impetus for action in different socio-political contexts to ensure responsible AI more widely. We build on the work of Katell et al. [74], and propose a situated approach to algorithmic accountability.

6.1.1 Enhancing agency of the forum through critical awareness. New internet users, with vastly different mental models of AI, can place misguided trust in ADS [82, 93, 109]. Such high user trust in AI systems played out in several ways in our study: ready acceptance of terms, conditions, and loan decisions, often to the extent of users reevaluating their own competencies and abilities. However, design and research in user-centered AI often assumes low trust in AI, and begins with questions of 'how might we design for increased user trust in AI?' Instead, designs must plan for appropriate failures assuming high user trust in AI systems [14]. Research must address questions such as decreasing user trust or increasing user distrust in AI systems. Further, we saw that users who benefited from the instant loan applications developed deep emotional attachments towards these applications. This suggests that users' mental models of AI systems must be calibrated appropriately and at regular intervals of use. On-boarding users to AI systems via guides may be a viable first step to align users' mental models with AI systems [29]. However, such measures must be complemented by widespread AI literacy programs. Google's trust and safety initiative for users in India is one such example [66]. More support must be given to grassroots organizations that are working to raise public awareness. An outstanding example is the Internet Freedom Foundation's Project Panoptic, which is raising awareness of public-facing facial recognition systems in India [67]. Such efforts must be supported by programs that up-skill citizens not only to be AI designers, but also to be critical thinkers who can act as AI testers and AI auditors. These initiatives can help recognize the largely invisible work of maintenance and repair involved in responsibly deploying AI [44].

6.1.2 Enabling meaningful transparency through collective spaces. Transparency is a widely called-for mechanism for accountability [8]. Making registries of datasets, models, and processes available for public scrutiny [16, 59, 99] is a good first step. However, a lack of technical expertise among the public could render such transparency meaningless. Thus, corporate actors and governments must work with civil advocacy groups to create toolkits that consumer advocates can use towards accountability efforts. The Algorithmic Equity Toolkit by ACLU Washington could serve as a model for such aims [74]. Further, for transparency to serve the goal of answerability, it must generate sufficient pressure from the forum to force actors to respond to violations. When platform-user relations are entrenched in power differences, individual actions by vulnerable users may not succeed in effecting large-scale social change [39, 78]. Therefore, we must go beyond transparency for individual users, and towards transparency for collective users.
One way to achieve this may be through designing spaces where vulnerable users can mobilize support towards demanding collective accountability. Through our study, we saw that new internet users are often ashamed of their negative experiences, making it unlikely for them to share their experiences with other users offline. The anonymity provided by digital ecosystems can be leveraged to reduce such barriers for them. Such a platform could also lead to the normalization of negative experiences, leading to discourse and then political action. Ahmed et al.'s Protibadi [13], a system to mobilize support against sexual harassment in Bangladesh, and Irani and Silberman's Turkopticon [68], which inverts requester-Turk worker power relations, are examples of intervention opportunities for researchers interested in algorithmic accountability.

6.1.3 Re-configuring designer-user relations through community engagement. Algorithmic harms such as bias and discrimination are extensively studied in FAccT, and receive extensive attention especially in Western media [26, 61, 87, 123]. However, we saw in our study that users accessing instant loans were undeterred by algorithmic discrimination. Rather, they expressed significant concerns about alternative forms of harm from ADS, such as data leaks, gossip in social circles arising from data leaks, reputation damage, and social frictions. Prior work has already pointed to the need to re-contextualize harm measurements [109]. We extend this argument and draw on work by Metcalf et al. to suggest that we must co-construct measurements of harms with the community of stakeholders involved [88]. While doing so, we must also recognize that a purely computational framing of harms fails to address injustices caused by structural oppression [63, 77]. Such structural oppression is at the root of what has 'excluded' these individuals from technology spaces, and created designer-user binaries. We therefore echo the calls made by scholars to reconfigure these relations through design practices situated in community values [33]. Design Beku [104] is an excellent model for how this could be done.

6.1.4 Committing to justice through critical self-reflection. Users' behaviors towards AI-based predatory applications, including justification, tolerance, acceptance, and self-blame, led to extreme consequences such as abuse, reputation harm, and self-harm. Such experiences are a violation of users' privacy and their right to dignity. Thus, the findings in our study also point to responsible AI being a human rights issue. What recourse mechanisms can we afford to these users when they undergo data leaks that amount to emotional harm? Further, what recourse mechanisms can we afford to these users when there is intentional reputation harm? How far can we go in addressing human rights issues with technical interventions? What does accountability mean when predatory lenders create mobile applications with open-sourced machine learning algorithms and datasets, and slap on a usable interface to prey on vulnerable users? Can our radical vision of democratizing AI hurt more than help? What forms of accountability can be assumed when the tools we created land in the hands of malicious actors? While we acknowledge that we do not have the answers to these large challenges, we believe critical reflection might be a good first step. 'Alternative lending' uses mobile phone data to solve the information asymmetry problems of lenders, who traditionally depend on tangible collateral assets [57].
Such models could also carry huge benefits to borrowers: as we saw in our study, they could open up opportunities for users who have never been a part of formal financial systems. These benefits were especially significant to our participants given the economic challenges brought on by the COVID-19 pandemic. Unfortunately, alternative lending could also have extreme downsides; without regulation or rules to define the limits of what counts as 'alternative data', the judgements made based on these data are largely arbitrary. In addition, new internet users in the Global South (such as the users in our study) may overshare sensitive data, in the name of high-quality collateral assets, with unverified platforms, risking privacy harms. Current techniques around privacy, data rights, and data sovereignty rarely account for data as collateral assets, calling for research to re-frame designs around privacy, safety, and trust. The harms of alternative credit often extend beyond the instance of decision-making. That is, data assets can themselves be elite resources [111], and are often the products of uneven social relations [38]. For instance, loan platforms reproduced the gender relations prevalent in economic and social spheres when women participants used their husbands' phones to seek loans. If the goal of AI-based lending is to achieve equitable financial inclusion, we must account for such data disparities in our imaginations of AI systems. Further, data collection mechanisms may be predatory. Users in our study reported receiving ads on their phones even when they were unsuccessful with the apps, or several months after they had stopped using the apps. While one could argue that such predatory mechanisms could be curtailed with better user privacy, we remind the reader that giving consent and accepting privacy policies were, for financially stressed users, obligations that paled in comparison to the promise of 'instant' cash. New data privacy and consent models such as collective consent [107] may be viable options. As an immediate call to action, we urge designers to implement AI systems based on established industry practices [15, 96]. Such practices include sourcing data responsibly (i.e., ensuring that users' personally identifying information is protected at all times), preparing a data-maintenance plan for the life-cycle of the product, collecting routine user feedback and aligning it with model improvements, communicating the value and time-to-impact to users, identifying the factors that go into user trust, helping users calibrate their trust through the product experience, and managing influence on user decisions [15, 96, 111]. We also call on designers to supplement these efforts with awareness campaigns on data and privacy rights for vulnerable users. Beyond these implications, our work opens up policy questions such as: How do we communicate the potential risks of 'instant' money to users in dire circumstances? What educational and financial aid would they need? Who should assume responsibility? We believe these could be important future directions. We presented a qualitative study of 29 financially-stressed users' interactions with instant loan platforms in India. We reported on users' perceptions of instant loan platforms, and their feelings of 'indebtedness' towards those platforms. We elaborated on the ways in which these users fulfilled obligations, and enacted dependence on loan platforms.
By situating our findings in the algorithmic accountability discourse, we presented an argument that algorithmic accountability is mediated through platform-user power relations, and can be hindered by the on-the-ground socio-political conditions of users. We proposed situated accountability interventions such as enhancing the agency of the forum, enabling collective transparency, reconfiguring designer-user relations, and committing to critical self-reflection to ensure wider accountability. We conclude with implications for FinTech applications in India and beyond.
We thank Azhagu Meena S P for assisting with interviews, and Vinodkumar Prabhakaran, Nikola Banovic, Jane Im, Nel Escher, and Anindya Das Antar for helpful feedback on this work. We also thank the reviewers at CHI'22, where a previous draft was first submitted, and the reviewers of FAccT for their helpful comments. Finally, we thank our participants who trusted us with their experiences; without them this research would have never been possible.