title: A Bayesian social platform for inclusive and evidence-based decision making
authors: Devitt, Susannah Kate; Pearce, Tamara Rose; Chowdhury, Alok Kumar; Mengersen, Kerrie
date: 2021-02-13

Abstract: Against the backdrop of a social media reckoning, this paper seeks to demonstrate the potential of social tools to build virtuous behaviours online. We must assume that human behaviour is flawed, that the truth can be elusive, and that as communities we must commit to mechanisms that encourage virtuous social digital behaviours. Societies that use social platforms should be inclusive, responsive to evidence, limit punitive actions and allow productive discord and respectful disagreement. Social media success, we argue, is in the hypothesis. Documents are valuable to the degree that they are evidence in service of, or to challenge, an idea for a purpose. We outline how a Bayesian social platform can facilitate virtuous behaviours to build evidence-based collective rationality. The chapter outlines the epistemic architecture of the platform's algorithms and user interface, in conjunction with explicit community management to ensure psychological safety. The BetterBeliefs platform rewards users who demonstrate epistemically virtuous behaviours and exports evidence-based propositions for decision-making. A Bayesian social network can make virtuous ideas powerful.

Contents: Social Media; Background; Social information processing; Data-driven decisions; Environments, letters, and online communities; Brainstorms, workshops and conferences; Epistemic justification of social platforms; An evidence-based social platform; The business case

Introduction

…when it comes to the direction of human affairs, all these universities, all these nice refined people in their lovely gowns, all this visible body of human knowledge and wisdom, has far less influence upon the conduct of human affairs, than, let us say, an intractable newspaper proprietor, an unscrupulous group of financiers or the leader of a recalcitrant minority - H.G. Wells (1938)

In January 2021 a mob of supporters of Donald Trump stormed the Capitol of the United States (Bergengruen and Time Photo Department, 2021). Despite no evidence of electoral fraud, and over 60 failed lawsuits to this effect, the rioters believed that their duty as Americans was to take back their country, to 'stop the steal' (Rutenberg et al., 2020, AP/Reuters, 2021). The mob believed that Joe Biden had been elected fraudulently, that democracy was at risk and that members of Congress had to be stopped from certifying the electoral votes that would instate Joe Biden as the 46th president of the United States (McSwiney, 2021). False beliefs were incubated and amplified not by evidence, but by Donald Trump's posts on social media platforms, particularly Twitter and Facebook. Once posted, Trump's messages went viral on social media and via a network of online forums and media, creating a 'right-wing echo chamber' (Tharoor, 2021). There is no doubt that social media platforms sow disinformation and misinformation just as easily as (perhaps much more easily than) true, verifiable information (Singer and Brooking, 2018).
In the wake of the Capitol riots, media commentators have reflected on issues of free speech and moderated content as they pertain to social media (Breton, 2021), wondering about the price society pays, particularly in democratic societies, when lying becomes normalized (Tenove and McKay, 2021).

The unrest in Washington is proof that a powerful yet unregulated digital space, reminiscent of the Wild West, has a profound impact on the very foundations of our modern democracies (Breton, 2021)

Where years of anguish and lament from ideologues have failed to change misinformation behaviours in the media and social media, corporate litigation has stepped in. Under the threat of defamation lawsuits, media outlets are now changing their behaviours (Brynbaum, 2021). Such lawsuits are having an immediate impact on misinformation narratives; for example, during a right-wing media Newsmax interview on 3 February 2021, a host walked off camera to avoid engaging in discussion of unsubstantiated electoral fraud (MSNBC, 2021). Against the backdrop of a social media reckoning, this paper seeks to demonstrate the potential of social tools to build virtuous behaviours online. If we believe that humans would benefit from incorporating philosophical theories into discourse and social knowledge structures, then social media platforms should be created, modified and updated based on our best normative theories in epistemology and the philosophy of science, rather than on corporate monetisation metrics. That is to say, the impact of digital content on society should be proportional to the evidence we have for ideas and the comprehensiveness of this evidence. The more justified the ideas (e.g. climate science), the greater their influence on society should be. In this chapter we investigate whether mis- and disinformation can be fought using a social platform that resembles existing platforms, but simultaneously encourages virtuous information behaviours by its design. The rise of social media has in some ways marked the demise of the document as a primary unit of information (Buckland, 1991, Wright, 2007). Rather than building up knowledge in expert systems, social media encourages ephemeral, inexpert ejaculations. Social media builds on human gossip mechanisms for shared belief, rather than co-constructing more faithful representations of reality. This chapter suggests a new path for social media in an age of uncertainty and a hunger for evidence-based collective thinking. There is evidence that crowds can be wise, if the circumstances of deliberation and dissent are considered and mechanisms of groupthink avoided (Solomon, 2006, Sunstein, 2011). Social media success, we argue, is in the hypothesis. The document has long reigned as the unit of information, with keywords, indexes and other signals indicating connections to other documents. In the platform we create, the primary unit of information is the hypothesis. Here documents are not intrinsically valuable, but valuable to the degree that they are evidence in service of, or to challenge, an idea for a purpose (Devitt, 2013). Such a reframing allows for and anticipates documents that are error-prone and variable in usefulness, in accordance with the ambitions of Bayesian epistemology (Bovens and Hartmann, 2004, Hajek and Hartmann, 2009, Dunn, 2010, Gwin, 2011). Centering the hypothesis removes the barrier to using diverse information while limiting the influence of evidence used disproportionately or inappropriately. Traditional social media prioritizes the idea too, but to the detriment of evidence and expertise.
Social media's infinite feed of assertions with little evidence creates almost the opposite information environment to that perfected by the book, the document, the card catalogue and the database. The future of informed conversations requires far better utilisation of the global 'world brain' of information through intuitive, yet structured, social platforms. To this end a group of researchers have created a Bayesian social platform for evidence-based collective decision making, which we articulate below.

Social Media

The internet (more broadly) and social media (more specifically) have invited democratic participation in the espousal and evaluation of ideas. Wishing to remain impartial, social media companies have generally welcomed all who wish to register and share their data with them to monetise (Zuboff, 2019, Barnet and Bossio, 2020). Simple popularity metrics have been employed to adjudicate and share ideas, such as 'up voting' and 'starring' content, and retweeting and sharing content within or across platforms. But few features are built or deployed that explicitly work towards improving either the veracity and quality of information shared or the ability of users to effectively evaluate poor information or misinformation. Instead, users share and like information amongst like-minded peers (Schmidt et al., 2017), reducing the friction of dissent and creating epistemic echo chambers. In-group messages expressing righteous or virtuous anger are propagated, while calm, moderate or evidence-based messages are shared less (Singer and Brooking, 2018). The science of human behaviour on the current dominant social media suggests that, left to their own devices, humans are more likely to reinforce beliefs signalling social group membership and identification, and less likely to collectively promote evidence-based beliefs. This is despite a decade of empirical and theoretical social media research on the ways people experience information on platforms such as Twitter, and normative guidance for platform producers. For example, Zubiaga and Ji (2014) found that credibility perceptions of tweet authors played a significant role in how users trusted tweets. Basically, the more credible the 'tweeter', the more the tweet would be reshared. Only after 14 years did Twitter add a feature that asks users to employ metacognitive skills and consider their actions: 'would you like to read the article before retweeting it?' In 2020, Twitter began asking this question when a user tries to retweet a link without opening it (sharing based on trust), rather than after opening it (sharing based on knowledge)-see Figure 1.

Figure 1. Twitter Support tweet explaining the new feature, a prompt to encourage informed discussion. See https://twitter.com/TwitterSupport/status/1270783537667551233?s=20

The experiment (starting with Android) went so well, with users opening articles 40% more often before sharing them, that Twitter rolled out the feature across all platforms (Hatmaker, 2020). Twitter explains the feature by noting that sharing an article can 'spark conversation' and that opening articles (implied: reading them) helps promote informed discussion-see Figure 1.
Social media companies have traditionally avoided censoring individuals (bad for business) and have allowed networks to grow, and their advertising revenue to grow beside them. For example:

At YouTube, we've always had policies that lay out what can and can't be posted. Our policies have no notion of political affiliation or party, and we enforce them consistently regardless of who the uploader is (Novacic, 2020).

Disregard for political affiliation has not only led to the rise of political extremism but has also made social media the locus of political action such as recruitment, propaganda and collective action. For example, Facebook, Twitter and YouTube were central in the rise of cyber-jihadists and ISIS (Awan, 2017). Facebook enabled warring militias in Libya's civil war to generate and sustain power (Singer and Brooking, 2018, Walsh and Suliman, 2018). Meanwhile, white supremacist and conspiracy groups such as QAnon in the United States have grown and strengthened with the comprehensiveness of open information on the internet and social media (Hannah, 2021). Social media companies do have guidelines for pulling down content that includes hate speech, inappropriate content, support of terrorism, or spam. But they also rely on inscrutable decision-making, large cohorts of precariously employed content moderators, and automated tools (Ganesh and Bright, 2020, Roberts, 2019, Gillespie, 2018). However, after the unprecedented mob attack on the US Congress on 6 January 2021, incited by weeks of delegitimising the US election, Twitter first suspended the personal Twitter account of the President of the United States, Donald Trump, and then permanently deleted it when the user did not obey Twitter's governance rules. Facebook also deleted Trump's accounts, and Apple and Google removed the social media app Parler from their app stores. Amazon removed Parler from its web hosting services. Within a week of the attacks, thousands of accounts inciting violent insurrection against the US government were removed by Twitter and Facebook. The question remains whether the solution to social media lies less in content moderation and more in the way interaction occurs and information is used. Democratic participation needs to value inclusion and diversity, but also to prioritise the knowledge and experience of experts. Evidence must be drawn from a defensible range of stakeholders and there must be a reasonable opportunity to submit ideas and evidence. Similar to the slow-food movement, future social media must gather and analyse data for propositions over longer temporal periods. The digital social epistemology movement must find a way to encourage interactivity, thoughtfulness and genuine engagement, while also mitigating human cognitive and affective limits, biases and tendencies. This chapter considers how groups of people might come together more effectively to understand a problem space and to propose actionable solutions.

Social information processing

The field of social information processing has long questioned how social interaction shapes information processing, from in-person office interactions to online virtual experiences (Festinger, 1954, Salancik and Pfeffer, 1978, Meyer, 1994, Ahuja and Galvin, 2003).
Individuals are motivated to communicate with others in order to establish socially derived interpretations of events and their meanings when judgments are important but evidence is ambiguous or non-existent and information is complex (Salancik and Pfeffer, 1978, Meyer, 1994). Groups of people desire to fit in and will be motivated to agree with the group. With repetition, ideas are likely to convince individuals, that is, to make them believe. Humans use social reasoning as a tool to make sense of uncertainty. Social platforms provide epistemic checking for groups. People will tend to believe what others in their group believe. If evidential reasoning is valued and social reasoning requires evidence, the group may collectively believe propositions for which there is corresponding evidence.

Data-driven decisions

A problem that has arisen across social media and within traditional organisations is that while overt strategy might recommend 'data-driven decisions' (Haller and Satell, 2020), in actual fact decisions are largely made based on political will, trends, and biases arising from limited time and resources to evaluate ideas. Even when organisations use data for decisions, the data is often incomplete, inaccurate, irrelevant or otherwise problematic to base decisions on (Provost and Fawcett, 2013). Data is rarely used by itself in raw form, but is transformed via human or machine interpretation; so when we speak of 'data' in this chapter, we mean data, models and algorithms, as well as whether data are classed as assertions (aka hypotheses) or as evidence for or against hypotheses. However one defines data, two things are true: 1) data is thought to be valuable and 2) data is difficult to use. What is a realistic method for using data to make decisions, even when it is partial, messy and of varying quality? The method suggested in this chapter is highly pragmatic, yet grounded on solid philosophical foundations. The method allows a risk-based approach to data-driven decision-making, where stakeholders in the decision are 'at the table' and given a timeline to contribute, but where there is an end to deliberation and hand-wringing. There is also political heft to decisions, proportionate to the diversity and range of stakeholders invited to contribute and the quantity and quality of contributions by those stakeholders. Unlike significance testing in the social sciences, there is no magic threshold of evidence beyond which truth can be presumed. But, following the tenets of Bayesian epistemology, beliefs ought to get stronger the greater the evidence there is to believe in them.

Environments, letters, and online communities

Humans have been shaping their environments for hundreds of thousands of years to convey knowledge through acts such as path-making, cave painting, creating physical sequences for making or using tools, or carving messages on objects or paper (Sterelny, 2012, Sterelny, 2003). Letters formed the beginning of written dialogues between humans and are acknowledged as pivotal in shaping the beliefs of social groups (particularly dyads). Letter writing has affected the history of ideas, as when Princess Elisabeth of Bohemia wrote to Descartes from 1643 to 1650 (Descartes, 1989). Email quickly took over the traditions of letter-writing in the 1990s and early 2000s, leaving digital rather than physical records of interactions.
At the same time, online discussion boards created social communities in which to interact and share ideas. The rise of social media in the mid-2000s saw archival communications massively reduced. It remains incredibly (and intentionally) difficult for users to re-find the ideas they have expressed on platforms such as Facebook and Twitter. There has recently been a backlash of sorts against the ephemeral group interactions on Facebook and Twitter, and a renewed interest in one-on-one engagement via apps such as Messenger. The role of social media in the future of communication remains 'up for grabs'. Still, these written forms maintain a dialogue between people, a relationship, with no specific end date or event in mind.

Brainstorms, workshops and conferences

Organisations and inter-organisational groups use the mechanisms of decision-oriented meetings (brainstorming, strategy or evaluation sessions), workshops and conferences to build social epistemic communities. These interaction events are spatiotemporally limited to achieve particular outcomes. Since Covid-19, online meetings, workshops and conferences have become the standard for group interactions. But the tools used to experience these meetings often lack the in-depth interactivity needed to mimic the experience of in-person events. Typical workshops and conferences encourage discussions between presentations, and it is often acknowledged that conversations 'at the bar' are where and when intellectual progress really occurs. There are assumptions made about the value of these meetings, sometimes explicit, though often implicit or taken for granted. There are two broad, overlapping categories of supposed benefit from these meetings. One is social: building human relationships through shared experience. The second is disseminating individual knowledge (testimony) and constructing group knowledge. Group knowledge can result in the production of co-authored publications including reports, journal papers and edited books, or single-authored publications that are more likely to refer to and cite the ideas of others invited to said workshops and conferences. There is a non-rigorous distinction between the workshop and the conference. Workshops can be bespoke, idiosyncratic and useful at a point in time to achieve a specific end, and those attending may never meet again. They tend to be more interactive, with greater emphasis on using tools such as sticky notes, whiteboards, mind-mapping software and design thinking to overcome the individual for the sake of collective production for a purpose. Conferences tend to be recurring events that build an epistemic community over time. Attendees and presenters shape the future direction of the collective. The psychological pull of attending the same events year after year is to maintain social relationships and witness one's own part in shaping the direction of the group's thinking over time.

Epistemic justification of social platforms

Now, more than ever, there is a gap in digital social tools that promote the epistemic aims of communities embarking on knowledge sharing and building. This brings us to the question of what justifies (or could justify) social platforms. Ideas are posted to social platforms, but how is any of the information shared justified?
Or to put it another way, what gives ideas and information the authority, trustworthiness or credibility for decision-makers to progress decisions? Once we can identify what sorts of information we want to see on platforms, we can consider how to advocate for virtuous online behaviours that manifest better information amongst participants and better management or treatment of this information by decision-makers. This section will go through some of the main sources of justification for information pertinent to digital information sharing. Discussions we won't go into include those around internalism vs. externalism that seek to ground human beliefs against 'brain in a vat' style arguments. For the sake of the chapter, we assume the following:

Realism: basic human beliefs are, for the most part, grounded in perceptions and experiences in the external world that correspond with external reality, e.g. humans really see tables, chairs and trees (Devitt, 1997, Kornblith, 2002) and are not in skeptical conditions (Unger, 1978, Audi, 2010).

Digital Skepticism: human beliefs are increasingly influenced by veristically-challenged online information environments that require skeptical vigilance (Cooke, 2018, Cooke, 2017). The saturation of AI-generated (Ippolito et al., 2020), false and misleading digital information increases minimally accurate, inaccurate and false beliefs, depending on an agent's ability to curate, manage and correct information flows. Digital skepticism is particularly important for information and behaviours promoted by media companies that seek to monetise user attention (Zuboff, 2019, Singer and Brooking, 2018), for information and behaviours suggested and reinforced by social peers (Eckles et al., 2016, Bailey et al., 2019) and for echo chamber effects (Quattrociocchi, 2017, Cinelli et al., 2020a, Cinelli et al., 2020b).

Justification: beliefs ought to have both a justified foundation (e.g. via perception, memory, expert testimony) and ought to cohere with other well-justified beliefs (Goldberg, 2012, BonJour, 2017). Information found in books and online needs to be verified and justified on a case-by-case basis, influenced by features such as authority, plausibility and support, independent corroboration, and presentation (Fallis, 2004, Fallis, 2008, Fallis, 2006, Zubiaga and Ji, 2014).

Combining these premises, we form a conception of humans interacting in information environments where their connection to reality via traditional modes such as visual perception and memory is grounded by virtue of being evolved to live and succeed in the real world (P1). Yet human beliefs are increasingly under threat from deliberate or incidental misinformation in online information environments (P2). In order to be justified in their information habits, humans must develop justified methods to find, sort and evaluate information from a variety of sources (P3). The endeavour to improve epistemic habits is best done within physical and digital social groups (P4). The ambition then is to create digital infrastructure that provides the sort of justification that holds up to the highest epistemic standards. The benefit of digital tools is that time can be spent honing them against our best normative theories.
An evidence-based social platform

Researchers at Queensland University of Technology (the authors of this chapter) set out to make an evidence-based social platform that builds virtuous social information behaviours using interaction mechanisms that instantiate epistemic norms (Devitt et al., 2018). The researchers have diverse backgrounds across philosophy and cognitive science; business innovation and design; Bayesian statistics; and machine learning and information technology. By encouraging social and evidence-based behaviours, the platform sought to build more scientific and inclusive digital cultures. Beginning as a research project, the team were funded by industry and grants to develop a minimum viable product (MVP) and then a minimal marketable product (MMP) for market, creating a start-up around the platform 'BetterBeliefs'. At its core, BetterBeliefs imagines ideas as hypotheses, represented by horses competing in a 'hypothesis horse race'. In order to progress in the race, the horses are fuelled by evidence, a little like the 20th-century carnival racing game in which metal horses compete based on the number of interactions they receive from players (see Figure 2). We thought it would be a breakthrough if data were connected to and presented for or against hypotheses, and were psychologically engaging, rather than stored in databases hoping for a query to dig them up. The core functions of the platform for users are:

• Submit hypotheses for consideration
• Submit evidence for and against hypotheses
• Vote on hypotheses to signify approval or disapproval
• Rank the quality of evidence provided for and against hypotheses
• Make a decision based on the degree of belief and weight of evidence of a hypothesis

The business case

Organisations use the data sets available to them ineffectively and fail to maximise the value of expensive business intelligence systems (Drucker, 1999, Sharma and Djiaw, 2011, Richards et al., 2019). While organisations use business intelligence well for budgeting, financial and management reporting, they don't use it for corporate-level decision-making (Richards et al., 2019). As an academic start-up dependent on industry funding, the team needed the platform 'to sell', to have a clear value proposition for business. We found evidence that social decision-making and innovation are good for business. For example, crowdsourcing using information systems can support management decision-making through several stages of solving a problem (Chiu et al., 2014, Ghezzi et al., 2018, Lindič et al., 2011), such as: 1. Intelligence (e.g. search, prediction and knowledge accumulation), 2. Design (e.g. idea generation and co-creation) and 3. Choice (e.g. voting and idea evaluation), which lead to implementation. However, crowdsourcing can be a double-edged sword, particularly regarding problematic issues such as crowd attitudes and motives, and groupthink and other human biases (Chiu et al., 2014). Crowdsourcing using social platforms may help mitigate some biases in decision-making for innovation but may introduce or exacerbate other biases, depending on both platform features and how the platform is used (Bonabeau, 2009). Enterprise Social Media (ESM) is another information system with the potential to share ideas across organisational silos, connect people and ideas, and enable innovation.
Although the context of ESM is vastly different from the commercial social media platforms discussed earlier, the literature on ESM shows that some decision-making risk factors for social platforms translate across domains, with echo chamber effects and biases including balkanisation and groupthink being highlighted as issues (Leonardi et al., 2013, Leonardi, 2014). The business innovation literature reveals that high ideation rates (having lots of ideas) correlate with growth and net income across organisations. More specifically, there are four key elements essential to high ideation rates (Minor et al., 2017):

• Scale (more participants)
• Frequency (more ideas)
• Engagement (more people evaluating ideas), and
• Diversity (more kinds of people contributing)

Designing a social platform that encouraged these elements of ideation, while also addressing the thorny issue of effective, evidence-based decision-making for innovation, led to the creation of BetterBeliefs. To design BetterBeliefs, rather than reinvent the wheel of interaction, we selected intuitive mechanisms essential to existing social media and peer evaluation (e.g. Facebook, Twitter and Reddit).

Figure 3. Add a hypothesis to the BetterBeliefs platform

When users add evidence (see Figure 5), they provide a URL, give a brief argument that explains how their evidence supports or refutes the hypothesis (e.g. by example, abduction, analogy, defeasible reasoning, induction or deduction), rank their evidence, and identify whether their evidence supports or refutes the hypothesis. Note that encouraging refuting evidence is a key part of BetterBeliefs; we believe no other social platform offers this as a mechanism for epistemic evaluation. Once the platform has hypotheses and evidence, the 'newsfeed' view shows users flaming horses and offers an opportunity to 'thumbs up' or 'thumbs down' the horses. The degree of belief in the horse is represented by the position of the horse in the black 'racing box'. A horse to the left-hand side is poorly believed in. A horse to the right-hand side is 'winning the race', that is, highly believed in. However, a horse being on the right-hand side is not sufficient for a win; it needs evidence too. To that end, the horses change colour depending on the weight of evidence for or against them. White horses have not yet received sufficient interactions. Pink horses have much evidence in their favour. Blue horses lack evidence. Black horses have evidence largely against them. For example, a pink horse galloping to the right-hand side of the black box would be a good pick for decision-makers to progress, whereas a white horse is better ignored until more interactions have occurred on it. In fact, a hypothesis will not turn from white to coloured until multiple users have interacted with the hypothesis in terms of both evidence and voting it up or down. The degree of belief (DoB) metric takes the total upvotes and downvotes to create a likelihood that a hypothesis is true given user belief in it, using Bernoulli-Beta distributions with 95% credible intervals represented to users. Our confidence in the degree of belief score increases the more users vote hypotheses 'up' or 'down'. Not all evidence is created equal, so the quality of each piece of evidence must be evaluated to the degree that it supports or refutes hypotheses.
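To make the voting model concrete, here is a minimal sketch, in Python, of how a Bernoulli-Beta degree-of-belief score with a 95% credible interval can be computed from upvotes and downvotes. The platform's actual statistical methods are not public, so the uniform Beta(1, 1) prior and the function name are illustrative assumptions, not the Evidence Engine's implementation.

```python
# Illustrative sketch only: a Bernoulli-Beta degree-of-belief score.
# The Beta(1, 1) prior and all names are assumptions for exposition.
from scipy.stats import beta

def degree_of_belief(upvotes: int, downvotes: int,
                     prior_a: float = 1.0, prior_b: float = 1.0):
    """Posterior mean and 95% credible interval for the probability
    that a randomly drawn user endorses the hypothesis."""
    posterior = beta(prior_a + upvotes, prior_b + downvotes)
    low, high = posterior.interval(0.95)
    return posterior.mean(), (low, high)

# Few votes give a wide interval (low confidence in the DoB score);
# the same vote ratio with more voters narrows it, mirroring the text above.
print(degree_of_belief(3, 1))    # ~0.67 with a wide 95% interval
print(degree_of_belief(30, 10))  # ~0.74 with a much narrower interval
```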
When designing the platform, the researchers benefitted from work in statistical science as well as information science on the qualities that make information valuable (see Table 1). The statistical methods that underpin the platform are currently not available to the public. The team derived six dimensions: credible, accurate, relevant, comprehensive, recent and informative. During the initial design phase, the team considered inviting users to rate evidence on each dimension, but quickly felt that this would prove too taxing, generating an unwieldy user experience. In the end we settled on a single star ranking (see Table 2) that allowed users to rank evidence based on any combination of dimensions they felt was relevant to the rank. The team felt that the quality-of-evidence ranking, in aggregate, would produce 'better beliefs' for the collective than not having the ranking or requiring too much effort. Once users have interacted with both hypotheses and evidence items, the Evidence Engine produces the degree of belief (DoB) and weight of evidence (WoE) metrics-see Figures 10 and 11 and Table 3. The degree of belief is represented between 0.0 and 1.0, where 1.0 indicates 100% belief (absolute certainty in a hypothesis), 0.5 indicates genuine uncertainty and 0.0 indicates absolute disbelief. The weight of evidence is on a linear scale with no upper limit. This choice is because, theoretically, there can always be further items of evidence that might increase the likelihood that a hypothesis is true. In reality, users engage with the platform for a finite period of time and there is a limit to the quality and quantity of evidence available to decision-makers. Users can view the outputs of the Evidence Engine through the 'decision dashboard'-see Figure 11.

Figure 10. Users add many kinds of evidence to the platform to support or refute hypotheses. This information is translated into degree of belief and weight of evidence metrics.

Figure 11. The decision dashboard represents weight of evidence along the y-axis and degree of belief along the x-axis. Hypotheses are sorted into groups: green, amber, red and white.

Table 3. Breakdown of decision quadrants: green, red, amber and white

The green box represents hypotheses that are 'greenlit for action' because they meet the decision-makers' thresholds for both evidence and belief. Note that the decision-maker can use the sliders to change the thresholds depending on their own view of what is important for the decision and the consequences of making it. If a decision is low-risk and/or its consequences are cheap or easy, then the decision-maker may set a low threshold. However, if a decision carries a lot of risk, or its consequences may involve great cost or time, then the decision-maker may require a higher threshold. In each case, due to the inevitable incompleteness of the evidence and the limitations of contributors, decision-makers will need to satisfice their choice: do 'enough' under limitations rather than optimise. They may make threshold decisions based on the number of hypotheses that end up in the green box and/or change the parameters of actions once the decision is made; for example, if all hypotheses are insufficiently evidenced under one reward program, then instead of offering, say, seed grants to highly believed hypotheses, they might offer a 'revise-and-resubmit' to those landing in the green box.
The red box represents hypotheses that are highly believed in yet lack sufficient evidence. A red hypothesis gives the pulse of belief and emotional buy-in. Red hypotheses mean different things depending on the expertise and diversity of participants. If participant intuitions are based on experience, decision-makers might divert funds or resources to interrogate why hypotheses are highly believed yet short of evidence. It might be that evidence exists to back up high degrees of belief but has not been added to the platform. Or it might be that beliefs are in fact not sufficiently justified and there is only supposition. Either way, decision-makers can request that users seek out better evidence for their beliefs, or suggest that they downgrade their degree of belief to be commensurate with their evidence. The amber box represents hypotheses that have ample evidence but are not highly believed in. An organisation may wish to 1. conduct information or education campaigns to communicate evidence in favour of these beliefs; 2. engage in safe social discussions to combat cognitive dissonance, where individuals are aware of evidence against their beliefs but struggle to change them (Beck, 2017); or 3. encourage unbelievers to add counterevidence to the platform to better justify their beliefs. The white box represents hypotheses that are contentious: they have mixed or low belief and/or mixed or limited evidence. There is a diversity of responses to these hypotheses, but the decision-maker is unlikely to progress actions on the basis of incomplete or contentious hypotheses. Still, the controversy itself is evidence for decision-makers (Christensen, 2009). True disagreement offers an opportunity to rethink, reframe and reinvest in seeking good reasons for ideas and taking seriously arguments against them.
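To illustrate how these four quadrants might be derived from the two metrics, here is a hedged sketch. Since the Evidence Engine's real calculations are not public, the weight-of-evidence formula (a simple sum of star ratings) and the threshold defaults below are assumptions for exposition, standing in for whatever the decision-maker sets with the sliders.

```python
# Illustrative sketch only: sorting hypotheses into the decision quadrants.
# The WoE formula and threshold defaults are assumptions, not platform code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hypothesis:
    text: str
    degree_of_belief: float                                 # 0.0-1.0, from the voting model
    star_ratings: List[int] = field(default_factory=list)   # 1-5 stars per evidence item

    @property
    def weight_of_evidence(self) -> float:
        # Linear scale with no upper limit: more rated evidence, more weight.
        return float(sum(self.star_ratings))

def quadrant(h: Hypothesis, dob_threshold: float = 0.7, woe_threshold: float = 10.0) -> str:
    believed = h.degree_of_belief >= dob_threshold
    evidenced = h.weight_of_evidence >= woe_threshold
    if believed and evidenced:
        return "green"   # greenlit for action
    if believed:
        return "red"     # believed, but under-evidenced
    if evidenced:
        return "amber"   # evidenced, but not highly believed
    return "white"       # contentious or incomplete

h = Hypothesis("Workshops need structured dissent", 0.82, [4, 5, 3])
print(quadrant(h))                      # "green" at the default thresholds
print(quadrant(h, dob_threshold=0.9))   # "amber" once the belief bar is raised
```

The second call shows the satisficing point made above: moving a slider re-sorts the same hypotheses, so the thresholds encode the decision-maker's appetite for risk rather than any absolute standard of truth.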
Finally, users of BetterBeliefs can search the platform for keywords, and they can filter hypotheses by recency, degree of belief, number of evidence items and weight of evidence. Analytics are also available for each hypothesis, giving a view of the hypothesis over time. The platform is designed to:

1. Motivate the creation of more relevant options (hypotheses)
2. Evaluate options by explicitly linking them to evidence
3. Harness stakeholder justifications for how evidence supports or opposes these hypotheses
4. Rank evidence by the degree to which it is a) high quality, b) relevant to the hypothesis it is connected with, and c) informative for evaluating that hypothesis
5. Inform decision-makers about stakeholder ideas and vice versa
6. Harness the attraction of social media to teach the scientific method
7. Empower groups to make strategic decisions based on stakeholder-generated and evaluated hypotheses

Evidence

A central justification for having beliefs is the degree of evidence one has for them. Much of the history of thinking about evidence in epistemology concerns individual rather than collective beliefs. For example, if a person sees a 40% chance of rain on the weather report, they should have some degree of belief that it will rain today. The more evidence a person has, the more they should believe a proposition. The greater the risk of holding a belief, the more evidence a person should have for that belief. A person should firmly believe a proposition when they have sufficient evidence for it. For example, Jill definitely believes that it is raining when she feels rain falling on her shoulders. In general, awareness of one's evidence for beliefs is considered a good thing, but the degree to which reflective access is required to be justified in believing is debated (see Dougherty, 2011). A reliabilist may hold that a dog knows that it is raining even if the dog does not understand how she came to this belief (perhaps it was the smell of petrichor and the sound on the roof). It is a premise of this paper that justification in social epistemology stems from reliably formed beliefs, rather than depending on contributors having reflective knowledge of what justifies their beliefs. That being said, the social platform discussed in this paper has only been used in cases where the invitees were carefully selected for expertise. The platform tries to optimise the likely results with a bias towards diversity and expertise, plus the requirement that all ideas added to the platform are evidence-based. The platform also empowers a decision-maker to adjust the thresholds of both the weight of evidence and the degree of belief required for a hypothesis to be selected for some future action. Traditional epistemology tends to treat beliefs as 'all-or-none': either a person believes p or ~p. Beliefs in a functionalist theory of the mind play a certain functional role in the cognitive architecture of an agent. If the agent believes p, then they act as though p were true. Beliefs provide scaffolding to guide and constrain behaviours, as well as generating other cognitions such as desires or hopes. For example, if a person believes the US election was fraudulent, then they may storm the Capitol to take back democracy. Beliefs drive behaviour, even if they are objectively false. Bayesian epistemology takes a different perspective on beliefs. Instead of being all-or-none, typical beliefs exist (and are performed) in degrees rather than absolutes, represented as credence functions. This idea stems from Thomas Bayes, who argued that our success in the world depends on how well the credence functions represented in our minds match the statistical likelihoods in the world (Bovens and Hartmann, 2004). This statistical approach to beliefs enables agents to hold multiple beliefs, even contradictory beliefs, in their minds at the same time with less certainty. There is evidence that the mind is Bayesian to a certain extent, using adaptive inference to change credence functions in response to evidence (Clark, 2015, Perfors, 2012, Gopnik and Wellman, 2012).
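The core Bayesian idea, that credences should strengthen or weaken in proportion to evidence, can be shown in a few lines of arithmetic. This is a generic odds-form Bayes update, not the platform's algorithm; the likelihood ratios below are invented for illustration.

```python
# Generic Bayes-rule credence updating in odds form; numbers are invented.
def update_credence(prior: float, likelihood_ratio: float) -> float:
    """Posterior P(H|e) from prior P(H) and LR = P(e|H) / P(e|not-H)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

credence = 0.5                       # genuine uncertainty
for lr in (2.0, 2.0, 0.5, 3.0):      # three supporting items, one refuting (LR < 1)
    credence = update_credence(credence, lr)
    print(round(credence, 3))        # 0.667, 0.8, 0.667, 0.857: belief tracks evidence
```

Note how the refuting item (likelihood ratio below 1) pulls the credence back down: belief is never all-or-none, only more or less supported.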
We aim to reduce cognitive and motivational biases (Kahneman, 2011, Montibeller and von Winterfeldt, 2015) by:

• Providing multiple and counter anchors
• Prompting employees to consider reasons in conflict with anchors
• Building explicit probability competence
• Providing counterexamples and statistics
• Capitalising on multiple experts with different points of view about hypotheses
• Challenging probability assessments with counterfactuals
• Probing evidence for alternative hypotheses
• Encouraging decision makers to think about more objectives, new alternatives and other possible states of the future
• Prompting for alternatives, including extreme or unusual scenarios

The BetterBeliefs platform reduces biases in crowdsourcing in three ways: algorithmically, interactively and culturally-see Figure 12.

Figure 12. BetterBeliefs reduces biases algorithmically, interactively and culturally.

Changes to the user interface can reduce biases caused by the way information is displayed and choices are made. Biases can also be reduced culturally through the way the platform is used along with other workshop, ideation and research methods, training events and the promotion of virtuous online behaviours by groups. Algorithmic methods to address bias include measuring user interactions on the system and identifying biased or non-virtuous behaviours. An example of the algorithmic bias-detection potential of the platform is using item-response methods (Embretson and Reise, 2013) to identify users who diverge from the average response. In an analysis of one use case of the platform, we could compare the success of ideas posted by skeptical users (those who tended to rate evidence as having less quality than the average user) with ideas posted by generous users (those who tended to rate evidence as having greater quality than the average user). Some preliminary, correlative data (unfortunately unavailable in the public domain) suggests that a skeptical culture amongst groups who also engage in prolific hypothesis generation and evaluation may produce more successful ideas than more generous groups. But such a conjecture is purely speculative at this point, and further experiments should be conducted to explore how diverse approaches to interaction on the platform affect the quality of outcomes for different purposes.
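A full item-response model jointly estimates rater severity and item difficulty; the sketch below is a much cruder stand-in that simply flags users whose average evidence ratings sit well above or below the crowd's. The z-score cut-off and the labels are illustrative assumptions, not the analysis actually run on the platform.

```python
# Crude stand-in for item-response analysis: label raters whose mean
# star rating diverges from the crowd. Cut-off and labels are assumptions.
from statistics import mean, stdev

def rater_tendencies(ratings_by_user: dict, z_cutoff: float = 1.0) -> dict:
    """ratings_by_user: user id -> list of star ratings that user gave."""
    user_means = {u: mean(r) for u, r in ratings_by_user.items() if r}
    grand_mean = mean(user_means.values())
    spread = stdev(user_means.values()) if len(user_means) > 1 else 1.0
    labels = {}
    for user, m in user_means.items():
        z = (m - grand_mean) / spread
        if z <= -z_cutoff:
            labels[user] = "skeptical"   # rates evidence lower than the average user
        elif z >= z_cutoff:
            labels[user] = "generous"    # rates evidence higher than the average user
        else:
            labels[user] = "typical"
    return labels

print(rater_tendencies({"ana": [2, 1, 2], "ben": [3, 3, 4], "cal": [5, 5, 5]}))
# {'ana': 'skeptical', 'ben': 'typical', 'cal': 'generous'}
```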
By encouraging virtuous epistemic behaviours (thinking of many ideas, justifying ideas with evidence and evaluating other people's ideas and evidence) and inhibiting unvirtuous behaviours, the platform ought to reduce a set of biases identified by Montibeller and von Winterfeldt (2015), including anchoring bias, myopic problem representation, availability bias, omission of important variables, confirmation bias and overconfidence bias-see Appendix 1: Biases reduced using the BetterBeliefs platform. Increasing the number and diversity of hypotheses under consideration and encouraging individuals to justify them can improve decision-making even if individual justifications are less than ideal (Oaksford et al., 2016). This comports with a Bayesian approach to evidence, which allows for evidence itself to vary in quality, so long as low-quality evidence is weighted less than higher-quality evidence. In addition to better hypothesis generation, there are significant benefits to decision-makers in having a robust and dynamic set of evaluated hypotheses across teams and work hierarchies to amplify collective intelligence. The norms of Bayesian epistemology recommend that more diverse stakeholders, and more numerous independent evidential interactions on hypotheses, will produce more defensible results to inform decision-makers (Bovens and Hartmann, 2004, Devitt, 2013, Hajek and Hartmann, 2009). Diversity of stakeholders can be achieved in three different ways (Pinjani and Palvia, 2013): 1. demographic or surface-level diversity, e.g. age, sex, gender, race; 2. deep-level diversity, e.g. idiosyncratic attitudes, values and preferences; or 3. functional diversity, i.e. non-overlapping knowledge and expertise in contributors, producing a larger knowledge base on which to draw. Participants on a successful Bayesian social platform ought to encourage participation from all three kinds of diverse groups, as the likelihood of independence is increased by diversity. Not only did we seek functional diversity, we also sought to foster the ideas of those on the margins of groups and social networks. Weak ties between individuals have been shown to be good for innovation, whereas strong ties between individuals have been shown to be good for productivity (Minor et al., 2017, Levin et al., 2011, Granovetter, 1973). The platform supposes that the more competent, independent users there are on the platform considering ideas, the more likely a majority of those users are correct, in accordance with the Condorcet Jury Theorem (CJT), which supposes that incorporating the views of many minds (so long as they are competent and independent) will produce truthful propositions.
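The CJT intuition is easy to verify numerically. The sketch below computes the probability that a simple majority of independent voters, each with the same competence p, is correct; it is purely illustrative and not a calculation the platform exposes.

```python
# Condorcet Jury Theorem, numerically: P(majority correct) for n independent
# voters of equal competence p. Illustrative only; not a platform feature.
from math import comb

def majority_correct(n_voters: int, p: float) -> float:
    """Assumes an odd number of voters so there are no ties."""
    k_needed = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p**k * (1 - p)**(n_voters - k)
               for k in range(k_needed, n_voters + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))
# Roughly 0.6, 0.75 and 0.98: adding competent, independent voices
# makes the majority more reliable, as the text above supposes.
```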
Not only is diversity important, but so is trust (Palvia, 2009). Contributors must trust that they are able to 'speak their mind', will be given the benefit of the doubt, and will be treated with respect and fairness, without unreasonable punitive actions being taken against them. This method encourages an inclusive yet evidence-based approach, aiming for more reliable and useful results for stakeholders. Users and decision-makers can download data added to the platform, including hypotheses, evidence items, degree of belief, weight of evidence, average quality of evidence, upvotes, downvotes, vote count, rating count, total contributors and authors-see Figure 13. Users can choose real names or pseudonyms when they register. The privacy agreement on using the platform models best practice as per the GDPR, including making the privacy statement as clear as possible.

Figure 13. Sample of downloadable output from the BetterBeliefs platform (authors' names withheld)

The platform can use algorithmic means to identify online behaviours lacking value, such as:

Careless: a user endorses hypotheses or pieces of evidence without paying attention.

Conformity: a user is more likely to upvote hypotheses that already have a high degree of belief (DoB) and to give high ranks to evidence on hypotheses with a high weight of evidence (WoE).

Authorship: a user downvotes or gives low ranks to refuting evidence on hypotheses they entered and endorsed, and is inclined to downvote or give low ranks to evidence for hypotheses contrary to their own.

Group bias and manager-fear bias: users tend to favour evidence or hypotheses from their own area, or added by their direct managers (or anyone higher in the hierarchy).

Political coup: a group of individuals acting cooperatively to achieve political ends. This may not be problematic if good and balanced evidence is added, but detecting such a bias could allow for early intervention.

Once alerted to poor behaviours, moderators can intervene upon or remove users who are not conforming to community guidelines for online behaviours. There is still much work to be done to ensure moderators have appropriate checks on their own power to influence data production, manipulation and use. Being transparent about how data is generated and used to make decisions is critical to building and maintaining community trust.
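As a flavour of what such algorithmic detection could look like, here are two toy heuristics for the 'careless' and 'conformity' behaviours listed above. The time and proportion thresholds are invented for illustration; the platform's actual detectors are not public.

```python
# Toy heuristics for two of the behaviours above; thresholds are invented.
from typing import List, Tuple

def flag_careless(vote_gap_seconds: List[float], min_seconds: float = 2.0) -> bool:
    """Flag a user who mostly endorses items faster than anyone could read them."""
    snap_votes = sum(1 for t in vote_gap_seconds if t < min_seconds)
    return snap_votes > len(vote_gap_seconds) / 2

def flag_conformity(votes: List[Tuple[bool, float]], threshold: float = 0.9) -> bool:
    """votes: (voted_up, crowd_degree_of_belief) pairs. Flag users who almost
    always vote with whichever side is already popular."""
    with_crowd = sum(1 for up, dob in votes if up == (dob > 0.5))
    return with_crowd / len(votes) > threshold

print(flag_careless([0.8, 1.1, 0.5, 3.0]))            # True: mostly snap votes
print(flag_conformity([(True, 0.9), (True, 0.8),
                       (False, 0.2), (True, 0.7)]))   # True: always with the crowd
```

Any such flag should trigger human review rather than automatic sanction, for the reasons about moderator power and transparency given above.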
To date, the BetterBeliefs platform has been used in organisational contexts where corporate, university or government ethics and decision-making are bound by explicit codes of conduct, human resource policy and legislative obligations. Virtuous online digital communities seem like a great improvement over apathetic ones, so what could go wrong? In this section we outline some of the issues faced by online content providers and the obligations they have to maintain a just and fair society, as well as a knowledge-producing and truth-disseminating one. Key concerns include the tendency of platforms to exploit user attention and data for financial gain (particularly from advertising) to the detriment of user wellbeing (Zuboff, 2019); the opaque use of surveillance and censorship (Lee and Scott-Baumann, 2020); and the lack of responsibility taken for damaging content posted to and disseminated on platforms, in addition to a lack of regulatory oversight. We go through some of these issues in turn. Digital platforms have responsibility for both their function and their content. This means that they must have governance structures to evaluate and act on content shared on them if that content is misleading or false, or is causing or could cause harm. Facebook's Oversight Board is beginning to rule on and have impacts on how Facebook manages content, such as the move to remove vaccine misinformation from the platform (Isaac, 2021). From responsibility also comes advocacy. Social platforms ought to take a stance on issues (such as public health) and justify behaviours based on this stance. We argue that supporting verifiable content and rejecting demonstrable falsehoods is a critical obligation of social platforms. However, content removal decisions ought to be scrutinised and held to a high standard, lest unwarranted censorship occur. Online platforms ought to encourage the free expression of ideas. Mark Zuckerberg has defended the value of free speech to justify not taking down posts with problematic content, with the exception of posts that could lead to immediate, direct physical harm to people on or off the platform. Free speech remains a controversial right, as it is frequently misinterpreted as a freedom to say whatever an individual or group wishes to express. On the one hand, freedom is the founding value of the United States, where many of the biggest social platforms arose; on the other hand, free speech is misunderstood as including falsehoods and asserting harmful propositions. The Oversight Board has called for Facebook to create more concrete policies to guide its content moderation decisions.

The Board…found Facebook's misinformation and imminent harm rule…to be inappropriately vague and inconsistent with international human rights standards. A patchwork of policies found on different parts of Facebook's website makes it difficult for users to understand what content is prohibited (Facebook Oversight Board, 2021).

Social platforms must abide by the legal obligations of the sovereign nation within which they are based, and by international legal frameworks that seek to minimise harms to others. Freedom of expression ought to be endorsed in so far as it maintains authenticity, safety, privacy, dignity and the ability of others to also express themselves. Online platforms ought to provide privacy to individuals and their content to the degree that users express a preference (Bernal, 2014). Such a view would defend a platform for allowing encryption to hide user content as well as for allowing users to publicly promote their material. It would also obligate platforms not to conduct unnecessary surveillance or censorship of users. Platforms must commit to the security of data and information and to resolving data breaches quickly on behalf of users. There are ethical concerns with encryption, such as the wide dissemination of child pornography on communication apps that use encryption. Material that might not be acceptable to the standards of society is likely to be shared via encrypted means. However, encryption also forms a necessary method and means by which citizens can mobilise against an unjust government or fight for their rights as citizens (Daly et al., 2019). Social platforms must remain vigilant with regard to best practice in privacy and security management, and vow to continuously update their policies and actions to meet the expectations of society and to progress a just and fair society. Social platforms ought to be GDPR compliant, or compliant with emerging local governance structures that promote user data rights (European Parliament and Council, 2016). Data subjects ought to be able to request their data and to delete their data. Data activists ought to be able to access and make sense of social platform data, creating new ways of knowing the world and creating data countercultures (Milan and Van der Velden, 2016). In general, citizens ought to be more empowered to access and use data to progress their ends, particularly the most marginalised and disenfranchised (Daly et al., 2019). Social platforms can learn from the emerging consensus in ethical AI with regard to considering the potential impacts of their technology on the society they serve-see Appendix 2: Comparison of AI Ethics Principles.
To date, the BetterBeliefs platform has been used by organisations for closed groups and specific events, including workshops, hackathons, design jams and stakeholder engagement for strategic policy setting. In closed settings, moderators and the platform designers have worked side-by-side to manage the ethics of platform use and disclosure to users. In the future, the platform team will need to carefully weigh the excitement of expansion against the ethical risks such an expansion might reveal. Researchers have developed a technology that could be the first step in creating epistemic groups that use social platforms that are inclusive, responsive to evidence, limit punitive actions and allow productive discord and respectful disagreement. BetterBeliefs improves evidence-based, collective ideation: a virtuous digital platform. Our design puts the hypothesis ahead of the document as the unit of information, with evidence in the service of, or arguing against, hypotheses in accordance with the norms of Bayesian epistemology. The platform is designed to help reduce the cognitive biases that emerge when groups produce too few hypotheses, when hypotheses are too similar or conservative, when collective knowledge is ignored, lost or underutilised, or when evidence is not comprehensive or is drawn from conforming groups or contexts. Our platform encourages individuals to generate numerous and diverse hypotheses, prompts for different kinds of evidence to support or refute hypotheses, invites users to evaluate the quality of evidence, and scientifically calculates two kinds of metrics for the quality of hypotheses based on how people engage: a 'degree of belief' metric that measures how much confidence the group has in a hypothesis, and a 'weight of evidence' metric that measures how much evidence the group has considered for or against a hypothesis. The platform can be inclusive, intuitive and rewarding to use. However, while there is potential in using new types of social platforms, platform designers and providers must abide by emerging best practices in social platform governance and responsible innovation, ensuring responsibility, support of free speech, privacy by design, data rights and the opportunity for data activism.
References

Socialization in virtual groups
Facebook to censor 'stop the steal' phrase, as social media companies boot US President Donald Trump from their platforms
On the measurability of information quality
Epistemology: A contemporary introduction to the theory of knowledge
Cyber-extremism: Isis and the power of social media
Peer effects in product adoption
Netflix's The Social Dilemma highlights the problem with social media, but what's the solution? The Conversation
This article won't change your mind: The facts on why facts alone can't fight false beliefs
A pro-Trump mob stormed the halls of Congress. Photographs from inside the chaos at the Capitol
Internet privacy rights: rights to protect autonomy
Decisions 2.0: The power of collective intelligence
The dialectic of foundationalism and coherentism. The Blackwell guide to epistemology
Ethos, pathos and logos in Aristotle's Rhetoric: A re-examination. Argumentation
Thierry Breton: Capitol Hill - the 9/11 moment of social media
Lawsuits take the lead in fight against disinformation
Information as Thing
History of fake news
What can crowdsourcing do for decision support? Decision Support Systems
Disagreement as evidence: The epistemology of controversy
Selective exposure shapes the Facebook news diet
Surfing uncertainty: prediction, action, and the embodied mind
Post-truth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age. The Library Quarterly
Fake news and alternative facts: Information literacy in a post-truth era
Noting the mind: Commonplace books and the pursuit of the self in eighteenth-century Britain
GPT-3: What's it good for?
Good Data. Amsterdam: Institute for Network Cultures
Correspondance avec Elisabeth. Paris
Realism and Truth
Homeostatic epistemology: Reliability, coherence and coordination in a Bayesian virtue epistemology
Strategic decision support platform for collective ideation and evidence-based decisions incorporating Bayesian rationality
Evidentialism and its Discontents
Knowledge-worker productivity: The biggest challenge. California Management Review
California management review Bayesian epistemology and having evidence Estimating peer effects in networks with peer encouragement designs Item response theory General Data Protection Regulation (GDPR) A Bayesian social platform for inclusive and evidence-based decision making FB-XWJQBU9A: Case decision 2020-006-FB-FBR On verifying the accuracy of information: Philosophical perspectives Annual review of information science and technology Toward an epistemology of Wikipedia A theory of social comparison processes Autobiographical Memory and the Construction of a Narrative Self: Developmental and Cultural Perspectives Countering extremists on social media: challenges for strategic communication and content moderation Crowdsourcing: a review and suggestions for future research Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media Reconstructing constructivism: Causal models, Bayesian learning mechanisms, and the theory theory The strength of weak ties The virtues of Bayesian epistemology Bayesian epistemology Data-driven decisions start with these 4 questions QAnon and the information dark age Twitter plans to bring prompts to 'read before you retweet' to all users Automatic detection of generated text is easiest when humans are fooled Facebook says it plans to remove posts with false vaccine claims The global landscape of AI ethics guidelines Does media literacy help identification of fake news? Information literacy helps, but other literacies don't How high-school students find and evaluate scientific information: A basis for information literacy skills development Thinking, fast and slow Knowledge and its place in nature, Amazon Kindle Edition Paper machines: about cards & catalogs A Bayesian social platform for inclusive and evidence-based decision making Digital ecology of free speech: Authenticity, identity, and self-censorship Social media, knowledge sharing, and innovation: Toward a theory of communication visibility Enterprise social media: Definition, history, and prospects for the study of social technologies in organizations Dormant ties: The value of reconnecting Knowledge management technologies and applications-literature review from 1995 to 2002. Expert systems with applications Deploying information technologies for organizational innovation: Lessons from case studies The quality and qualities of information Why were the Capital rioters so angry? Because they're scared of losing grip on their perverse idea of democracy. The Conversation Social information processing and social networks: A test of social influence mechanisms The alternative epistemologies of data activism Data from 3.5 million employees shows how innovation really works Cognitive and Motivational Biases in Decision and Risk Analysis MyPillow Fight: Lindell Clashes With Newsmax Over Trump's 2020 Loss | The 11th Hour | MSNBC Dynamic inference and belief revision. International Conference on Thinking The role of trust in e-commerce relational exchange: A unified model Fact-checking and debunking: A best practice guide to dealing with disinformation Bayesian Models of Cognition: What's Built in After All? 
Appendix: Cognitive biases and debiasing techniques

Overconfidence bias occurs when decision makers provide estimates for a given parameter that are above actual performance (overestimation), or when the range of variation they provide is too narrow (overprecision). Found frequently in quantitative estimates, such as defence, legal, financial and engineering decisions, and also present in judgments about the completeness of a hypothesis set. Ways to debias include probability training, starting with extreme statistics (low and high), avoiding central-tendency anchors, using counterfactuals to challenge extremes, and using fixed-value instead of fixed-probability elicitations.

Anchoring bias occurs when the estimate of a numerical value is based on an initial value (anchor) that is then insufficiently adjusted to produce the final answer. Ways to debias include avoiding anchors, providing multiple and counter anchors, and using experts with different anchors.

Availability bias (or 'ease of recall') occurs when ease of recall dominates the assignment of probability to an event. Found in frequency estimates, estimates of the frequency of lethal events, and rare events anchored on recent examples. Ways to debias include probability training, providing counterexamples and providing statistics.

Confirmation bias occurs when there is a desire to confirm one's belief by selectively acquiring and using evidence. Found in many settings, such as information gathering, selection tasks, evidence updating and the evaluation of one's own judgment, and demonstrated in real-world contexts such as medical diagnostics, judicial reasoning and scientific thinking. Ways to debias include using multiple experts with different points of view about the hypotheses, challenging probability assessments with counterfactuals, and probing evidence for alternative hypotheses.

Myopic problem representation occurs when participants focus on a small number of alternatives, a small number of objectives, or a single future state of the world. Ways to debias include encouraging decision makers to think about more objectives, new alternatives and other possible states of the future.

A related narrowing bias is found in the definition of objectives, the identification of decision alternatives and hypothesis generation. Ways to debias include prompting for alternatives and objectives, asking for extreme or unusual scenarios, and using group elicitation techniques.

Several of these debiasing techniques lend themselves to automation on a platform, as sketched below.
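The sketch below shows one way a platform could select a debiasing nudge, prompting for more hypotheses when framing is narrow and for disconfirming evidence when the evidence pool is one-sided. The function name, thresholds and prompt wording are illustrative assumptions, not BetterBeliefs' documented behaviour.

```python
def debias_prompt(n_for: int, n_against: int, n_hypotheses: int) -> str:
    """Choose a nudge based on how narrow or one-sided the group's input is.
    The thresholds (3 hypotheses, 80% one-sidedness) are illustrative only."""
    if n_hypotheses < 3:
        # Counter myopic framing: ask for more, and more unusual, alternatives.
        return "Add an alternative hypothesis, including an extreme or unusual one."
    total = n_for + n_against
    if total and max(n_for, n_against) / total > 0.8:
        # Counter confirmation bias: request evidence for the neglected side.
        side = "against" if n_for > n_against else "for"
        return f"Most evidence points one way. Find evidence {side} this hypothesis."
    # Default nudge: improve evidence quality ratings before adding volume.
    return "Rate the quality of existing evidence before adding more."


print(debias_prompt(n_for=9, n_against=1, n_hypotheses=5))
# -> "Most evidence points one way. Find evidence against this hypothesis."
```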