authors: Papadakos, Panagiotis; Konstantakis, Giannis
title: bias goggles: Graph-Based Computation of the Bias of Web Domains Through the Eyes of Users
date: 2020-03-17
journal: Advances in Information Retrieval
DOI: 10.1007/978-3-030-45439-5_52

Ethical issues, along with transparency, disinformation, and bias, are at the center of our information society. In this work, we propose the bias goggles model for computing the bias characteristics of web domains with respect to user-defined concepts, based on the structure of the web graph. To support the model, we exploit well-known propagation models and the newly introduced Biased-PR PageRank algorithm, which models various behaviours of biased surfers. An implementation discussion, along with a preliminary evaluation over a subset of the Greek web graph, shows that the model is applicable even in real time for small graphs, and showcases rather promising and interesting results. Finally, we pinpoint important directions for future work. A constantly evolving prototype of the bias goggles system is readily available.

There is an increasing concern about the potential risks of consuming abundant biased information in online platforms like Web Search Engines (WSEs) and social networks. Terms like echo chambers and filter bubbles [26] depict the isolation of groups of people and its after-effects, which result from selective and restrictive exposure to information. This restriction can be the result of helpful personalized algorithms that suggest user connections or rank highly information relevant to the user's profile. Yet, this isolation might inhibit the growth of informed and responsible humans/citizens/consumers, and can also be the result of malicious algorithms that promote and resurrect social, religious, ethnic, and other kinds of discrimination and stereotypes. Currently, the community focus is on the transparency, fairness, and accountability of mostly machine-learning algorithms for decision making, classification, and recommendation in social platforms like Twitter. However, social platforms and WSEs mainly act as gateways to information published on the web as common web pages (e.g., blogs and news). Unfortunately, users are unaware of the bias characteristics of these pages, except for obvious facts (e.g., a page in a political party's web site will be biased towards this party). In this work, we propose the bias goggles model, where users are able to explore the bias characteristics of web domains for a specific biased concept (i.e., a bias goggle). Since there is no objective definition of what bias and biased concepts are [27], we let users define them. For these concepts, the model computes the support and the bias score of a web domain, by considering the support of this domain for each aspect (i.e., dimension) of the biased concept. These support scores are calculated by graph-based algorithms that exploit the structure of the web graph and a set of user-defined seeds representing each aspect of bias. As a running example we will use the biased concept of Greek politics, which consists of nine aspects of bias, each one representing a popular Greek party and identified by a single seed: the domain of its homepage.
In a nutshell, the main contributions of this work are:
- the bias goggles model for computing the bias characteristics of web domains for a user-defined concept, based on the notions of Biased Concepts (BCs), Aspects of Bias (ABs), and the metrics of the support of a domain for a specific AB and BC, and of its bias score for this BC,
- the introduction of the Support Flow Graph (SFG), along with graph-based algorithms for computing the AB support score of domains, which include adaptations of the Independence Cascade (IC) and Linear Threshold (LT) propagation models, and the new Biased-PageRank (Biased-PR) variation that models different behaviours of a biased surfer,
- an initial discussion of performance and implementation issues,
- some promising evaluation results that showcase the effectiveness and efficiency of the approach on a relatively small dataset of crawled pages, using the new AGBR and AGS metrics,
- a publicly accessible prototype of bias goggles.

The rest of the paper is organized as follows: the background and the related work are discussed in Sect. 2, while the proposed model, and its notions and metrics, are described in Sect. 3. The graph-based algorithms for computing the support score of a domain for a specific AB are introduced in Sect. 4. The developed prototype and related performance issues are discussed in Sect. 5, while some preliminary evaluation results over a relatively small dataset of web pages are reported in Sect. 6. Finally, Sect. 7 concludes the paper and outlines future work.

Social platforms have been found to strengthen users' existing biases [21], since most users try to access information that they agree with [18]. This behaviour leads to rating bubbles when positive social influence accumulates [24], and minimizes the exposure to different opinions [31]. This is also evident in WSEs, where personalization and filtering algorithms lead to echo chambers and filter bubbles that reinforce bias [4, 12]. Remarkably, users of search engines trust the top-ranked search results more [25], and biased search algorithms can shift the voting preferences of undecided voters by as much as 20% [8]. There is an increasingly growing number of discrimination reports regarding various protected attributes (e.g., race, gender, etc.) in various domains, like ads [7, 29] and recommendation systems [13], leading to efforts for defining principles of accountable algorithms, for auditing [28] and de-biasing [1] algorithms, and for fair classifiers [6, 14, 34]. Tools that remove discriminating information, flag fake news, make personalization algorithms more transparent, or show political biases in social networks also exist. Finally, a call for equal opportunity by design [16] has been raised regarding the risks of bias in the stages of the design, implementation, training and deployment of data-driven decision-making algorithms [3, 11, 20]. There are various efforts for measuring bias in online platforms [27]. Bias in WSEs has been measured as the deviation from the distribution of the results of a pool of search engines [23], and as the coverage of search engine result pages towards US sites [30]. Furthermore, the presence of bias in media sources has been explored through human annotations [5], by exploiting affiliations [32], the impartiality of messages [33], the content- and link-based tracking of topic bias [22], and the quantification of data and algorithmic bias [19].
However, this is the first work that provides a model that allows users to explore the available web sources based on their own definitions of biased concepts. The approach exploits the web graph structure and can annotate web sources with bias metrics on any online platform. Below we describe the notions of Biased Concepts (BCs) and Aspects of Bias (ABs), along with the support of a domain for an AB and a BC, and its bias score for a BC. Table 1 describes the used notation.

Table 1. Description of the used notation. The first part describes the notation used for the web graph, while the second part the notation for the proposed model.

The interaction with a user begins with the definition of a Biased Concept (BC), which is considered the goggles through which the user wants to explore the web domains. BCs are given by users and correspond to a concept that can range from a very abstract one (e.g., god) to a very specific one (e.g., political parties). For each BC, it is required that the users can identify at least two Aspects of Bias (ABs), representing its bias dimensions. ABs are given by the users and correspond to a non-empty set of seeds (i.e., domains) S that the user considers to fully support this bias aspect. For example, consider the homepage of a Greek political party as an aspect of bias in the biased concept of the politics in Greece. Notice that an AB can be part of more than one BC. Typically, an AB is denoted by AB_sign(S), where sign(S) is the signature of the non-empty set of seeds S. The sign(S) is the SHA1 hash of the lexicographic concatenation of the normalized Second Level Domains (SLDs) of the urls in S. We assume that all seeds in S are incomparable and support this AB with the same strength, i.e., the domains in the set of seeds S are incomparable and equally supportive of AB_sign(S).

The user-defined BC over the set of ABs A ⊆ A_U, where |A| ≥ 2 and A_U is the universe of all possible ABs over the set of domains doms(W) of the crawled pages W, is denoted by BC_A and is represented by the pair <d_A, desc_A>. The d_A is an |A|-dimensional vector, with |A| ≥ 2, holding all AB_sign(S) ∈ A of this BC in lexicographic order, while desc_A is a user-defined textual description of this BC. In this work, we assume that all ABs of any user-defined BC are orthogonal and unrelated. Using the notation, our running example is denoted as BC_R = <d_R, desc_R>, where d_R is a vector that holds lexicographically the SHA1 signatures of the nine singleton-seed ABs of Greek political parties R = { {"anexartitoiellines.gr"}, {"antidiaploki.gr"}, {"elliniki-lisi.gr"}, {"kke.gr"}, {"mera25.gr"}, {"nd.gr"}, {"syriza.gr"}, {"topotami.gr"}, {"xryshaygh.com"} }, and desc_R = "politics in Greece" is its description.

A core metric in the proposed model is the support score of a domain dom for an aspect of bias AB_sign(S), denoted as sup(AB_sign(S), dom). The support score ranges in [0, 1], where 0 denotes an unsupportive domain for the corresponding AB, and 1 a fully supportive one. We can identify three approaches for computing this support for a dataset of web pages: (a) the graph-based ones that exploit the web graph structure and the relationship of a domain with the domains in seeds(AB_sign(S)), (b) the content-based ones that consider the textual information of the respective web pages, and (c) the hybrid ones that take advantage of both the graph and the content information.
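To make the AB identification above concrete, the following is a minimal sketch of how sign(S) can be computed from a set of seeds. The exact URL normalization and the delimiter used in the concatenation are not specified here, so the lowercase, scheme-stripping, www-stripping normalization and the plain (delimiter-free) concatenation are assumptions of this sketch.

```python
import hashlib
from urllib.parse import urlparse

def normalized_sld(url: str) -> str:
    """Reduce a seed URL to its normalized SLD (assumed normalization:
    lowercase host, scheme and path stripped, leading 'www.' removed)."""
    host = urlparse(url if "//" in url else "//" + url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def sign(seeds) -> str:
    """sign(S): SHA1 of the lexicographic concatenation of the normalized
    SLDs of the seeds (no delimiter is assumed between the SLDs)."""
    slds = sorted(normalized_sld(u) for u in seeds)
    return hashlib.sha1("".join(slds).encode("utf-8")).hexdigest()

# e.g., the signature of the singleton AB of one seed from the running example
ab_id = sign({"kke.gr"})
```

Since the signature depends only on the seed set, the same AB (e.g., a singleton AB of the running example) keeps the same identifier across different BCs that reuse it.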
In this work, we focus only on graph-based approaches and study two frequently used propagation models, the Independence Cascade (IC) and Linear Threshold (LT) models, along with the newly introduced Biased-PageRank (Biased-PR), which models various behaviours of biased surfers. The details about these algorithms are given in Sect. 4. In the same spirit, we are interested in the support of a specific domain dom for a biased concept BC_A, denoted by sup(BC_A, dom). The basic intuition is that we need a metric that shows the relatedness and support to all or any of the aspects in A, which can be interpreted as the relevance of this domain with any of the aspects of the biased concept BC_A. A straightforward way to measure it is the norm of the s_dom^d_A vector, which holds the support scores of dom for each AB in A, normalized by the norm of the 1_|A| vector. This vector holds the support scores of a 'virtual' domain that fully supports all bias aspects in BC_A. Specifically, sup(BC_A, dom) = ||s_dom^d_A|| / ||1_|A||| ∈ [0, 1]. By using the above formula, two domains might have similar support scores for a specific BC, while their support scores for the respective aspects might differ greatly. For example, consider two domains dom and dom', with dom fully supporting only one aspect in A and dom' fully supporting another aspect in A. Then sup(BC_A, dom) ∼ sup(BC_A, dom'). Below we introduce the bias score of a domain regarding a specific BC, as a way to capture the leaning of a domain to specific ABs of a BC.

The bias score of a domain regarding a BC tries to capture how biased the domain is over any of its ABs, and results from the support scores that the domain has for each aspect of the BC. For example, consider a domain dom that has a rather high support for a specific AB, but rather weak ones for the rest of the ABs of a specific BC. This domain is expected to have a high bias score. On the other hand, a domain dom' that has similar support for all the available ABs of a BC can be considered unbiased regarding this specific BC. We define the bias score of a domain dom for BC_A as the distance of the s_dom^d_A vector from the 1_|A| vector. We use the cosine similarity to define the distance metric, as shown below: bias(BC_A, dom) = 1 − cos(s_dom^d_A, 1_|A|).

In this section, we discuss the graph-based algorithms that we use for computing the support score of a domain regarding a specific AB. We focus on the popular Independence Cascade (IC) and Linear Threshold (LT) propagation models, along with the newly introduced Biased-PageRank (Biased-PR) algorithm. Let W be the set of crawled web pages, doms(W) the set of normalized SLDs in W, links(W) the set of crawled links between the domains in doms(W), and G(W) the corresponding graph with doms(W) as nodes and links(W) as edges. With link_{dom,dom'} we denote a link from domain dom to dom', where dom, dom' ∈ doms(W), while inv(link_{dom,dom'}) inverses the direction of the link, and inv(links(W)) is the set of inverse links in W. Furthermore, for the links we assume that a link from dom to dom' denotes the support of dom for dom'. Although the above assumption might not be precise, since links from a web page to another are not always of supportive nature (e.g., a web page criticizing another linked one), or of the same importance (e.g., links in the homepage versus links deeply nested in a site), it suffices for the purposes of this first study of the model. Identification of the nature of links and of the importance of the pages in which they appear is left as future work. Given that the assumption holds, part or whole of the support of dom' regarding any AB can flow to dom through inv(link_{dom,dom'}).
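To make the two metrics of the model concrete, here is a minimal sketch of their computation from a per-AB support vector. The handling of an all-zero support vector and the exact numeric examples are assumptions of the sketch; s_dom is represented simply as a Python list.

```python
import math

def bc_support(s_dom):
    """sup(BC_A, dom): the norm of the per-AB support vector s_dom,
    normalized by the norm of the all-ones vector 1_|A| (the 'virtual'
    domain that fully supports every aspect)."""
    return math.sqrt(sum(x * x for x in s_dom)) / math.sqrt(len(s_dom))

def bc_bias(s_dom):
    """bias(BC_A, dom): cosine distance of s_dom from 1_|A|.
    A domain with zero support everywhere is treated as unbiased (assumption)."""
    norm = math.sqrt(sum(x * x for x in s_dom))
    if norm == 0.0:
        return 0.0
    cos = sum(s_dom) / (norm * math.sqrt(len(s_dom)))
    return 1.0 - cos

# a domain fully supporting one of three aspects vs. one supporting all equally
print(bc_support([1.0, 0.0, 0.0]), bc_bias([1.0, 0.0, 0.0]))  # ~0.577, ~0.423
print(bc_support([0.5, 0.5, 0.5]), bc_bias([0.5, 0.5, 0.5]))  # 0.5, 0.0
```

The second example has identical support across all three aspects, so its bias score is 0, matching the intuition that such a domain is unbiased for this BC even though it has non-trivial BC support.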
Specifically, we define the Support Flow Graph as follows. Support Flow Graph (SFG) Definition. The SFG of a set of web pages W is the weighted graph that is created by inversing the links in G(W) (i.e., the graph with doms(W) as nodes and inv(links(W)) as edges). The weight of each edge is w_{dom,dom'} = outInvLinks(dom, dom') / outInvLinks(dom), i.e., the fraction of the inverse links leaving dom that point to dom', so that w_{dom,dom'} ∈ [0, 1]. So, given an SFG(W) and the seeds(AB_sign(S)) of an AB, we can now describe how the support flows in the nodes of the SFG(W) graph. All algorithms described below return a map M holding sup(AB_sign(S), dom) ∀ dom ∈ doms(W).

The IC propagation model was introduced by Kempe et al. [17], and a number of variations have been proposed in the literature. Below, we describe the basic form of the model as adapted to our needs. In the IC propagation model, we run n experiments. Each run starts with a set of activated nodes, in our case the seeds(AB_sign(S)), that fully support the AB_sign(S). In each iteration there is a history-independent and non-symmetric probability, associated with each edge, of activating the neighbors of the activated nodes, flowing the support to the neighbors of the activated nodes in the SFG(W). This probability is represented by the weights of the links of an activated node to its neighbors, and each node, once activated, can then activate its neighbors. The nodes and their neighbors are selected in arbitrary order. Each experiment stops when there are no newly activated nodes. After n runs we compute the average support score of the nodes, i.e., sup(AB_sign(S), dom) ∀ dom ∈ doms(W). The algorithm is given in Algorithm 1.

The LT model is another widely used propagation model. The basic difference from the IC model is that for a node to become active we have to consider the support of all its neighbors, which must be greater than a threshold θ ∈ [0, 1], serving as the resistance of a node to its neighbors' joint support. Again, we use the support probabilities represented by the weights of the SFG links. The full algorithm, which is based on the static model introduced by Goyal et al. [10], is given in Algorithm 2. In each experiment the thresholds θ get a random value.

We introduce the Biased-PR variation of PageRank [9] that models a biased surfer. The biased surfer always starts from the biased domains (i.e., the seeds of an AB), and either visits a domain linked by the selected seeds or one of the biased domains again, with some probability that depends on the modeled behaviour. The same process is followed in the next iterations. Biased-PR differs from the original PageRank in two ways. The first one is how the score (support in our case) of the seeds is computed at any step. The support of all domains is initially 0, except for the seeds, which have the value init_seeds = 1. At any step, the support of each seed is the original PageRank value, increased by a number that depends on the behaviour of the biased surfer. We have considered three behaviours: (a) the Strongly Supportive (SS) one, where the support is increased by init_seeds and models a constantly strongly biased surfer, (b) the Decreasingly Supportive (DS) one, where the support is increased by init_seeds/iter, modeling a surfer that becomes less biased the more pages he/she visits, and (c) the Non-Supportive (NS) one, with no increment, modeling a surfer that is biased only on the initially visited pages, while afterwards the support score is computed as in the original PageRank.
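As an illustration of the adapted IC model above, the following is a minimal sketch of support propagation over the SFG. It assumes that the SFG is given as an adjacency map from a domain to (neighbor, weight) pairs, and that a node's support in a single run is 1 if it was activated and 0 otherwise, which is one possible reading of the averaging step; Algorithm 1 itself is not reproduced here.

```python
import random

def ic_support(sfg, seeds, n=100):
    """Adapted IC over the SFG. sfg maps a domain to a list of
    (neighbor, weight) pairs, where the weight acts as the activation
    probability of the edge. A node's support in a single run is taken
    to be 1 if it got activated and 0 otherwise; the final score is the
    average over n independent runs."""
    support = {dom: 0.0 for dom in set(sfg) | set(seeds)}
    for _ in range(n):
        active = set(seeds)
        frontier = set(seeds)
        while frontier:                      # stop when no new activations
            new_frontier = set()
            for dom in frontier:
                for neigh, w in sfg.get(dom, []):
                    if neigh not in active and random.random() < w:
                        active.add(neigh)
                        new_frontier.add(neigh)
            frontier = new_frontier
        for dom in active:
            support[dom] = support.get(dom, 0.0) + 1.0
    return {dom: s / n for dom, s in support.items()}
```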
Biased-PR also differs from the original PageRank in how the biased surfer is teleported to another domain when he/she reaches a sink (i.e., a domain that has no outgoing links). The surfer randomly teleports to a domain at any distance from the seeds, with the same probability assigned to each distance level and then split among the nodes of that level. If a path from a node to any of the seeds does not exist, the distance of the node is the maximum distance of a connected node increased by one. Since the number of nodes at a certain distance from the seeds increases as we move away from the seeds, the teleporting probability of a node is greater the closer the node is to the seeds. We expect slower convergence for Biased-PR than for the original PageRank, due to the initial zero scores of the non-seed nodes. The algorithm is given in Algorithm 3.

Due to size restrictions we provide a rather limited discussion of the complexities and the cost of tuning the parameters of each algorithm. The huge scale of the web graph has the biggest performance implication for the graph-based computation of the AB support scores. What is encouraging though, is that the algorithms are applied over the compact SFG graph, which contains only the SLDs of the pages and their corresponding links. The complexity of IC is in O(n · |doms(W)| · |links(W)|), where n is the number of experiments. LT is much slower though, since we have to additionally consider the joint support of the neighbors of a node. Finally, Biased-PR converges more slowly than the original PageRank, since the algorithm begins only with the seeds, spreading the support to the rest of the nodes. Also, we must consider the added cost of computing the shortest paths of the nodes from the seeds. For the relatively small SFG used in our study (see Sect. 6), the SS variation converges much faster than the DS and NS ones, which need ten times more iterations.

For newly introduced ABs though, the computation of the support scores of the domains can be considered an offline process. Users can submit ABs and BCs into the bias goggles system and get notified when they are ready for use. However, what is important is to let users explore in real time the domain space for any precomputed and commonly used BCs. This can be easily supported by providing efficient ways to store and retrieve the signatures of already known BCs, along with the computed support scores of the domains for the available ABs. Inverted files and trie-based data structures (e.g., the space-efficient burst tries [15] and the cache-conscious hybrid or pure HAT-tries [2]) over the SLDs and the signatures of the ABs and BCs can allow the fast retrieval of offsets in files where the support scores and the related metadata are stored. Given the above, the computation of the bias score and the support of a BC for a domain is lightning fast. We have implemented a prototype that allows the exploration of predefined BCs over a set of mainly Greek domains. The prototype offers a REST API for retrieving the bias scores of the domains, and exploits the open-source project crawler4j. We plan to improve the prototype by allowing users to search for and ingest BCs, ABs and domains of interest, and to develop a user-friendly browser plugin on top of it.

Evaluating such a system is a rather difficult task, since there are no formal definitions of what bias on the web is, and there are no available datasets for evaluation. As a result, we based our evaluation on BCs for which it is easy to find biased sites. We used two BCs for our experiments, Greek politics (BC1) with 9 ABs, and Greek football (BC2) with 6 ABs.
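Before turning to the evaluation details, the following sketch makes the Biased-PR description more concrete by implementing the Strongly Supportive (SS) behaviour. It is not Algorithm 3: the adjacency-map representation of the SFG, the reading of the distance-based teleportation as an even split over distance levels and then over the nodes of each level, the use of the same vector for the damping jump, and the final scaling of the support scores into [0, 1] are all assumptions of this sketch.

```python
def biased_pr_ss(sfg, seeds, d=0.85, conv=0.001, max_iter=100):
    """Sketch of the Strongly Supportive (SS) Biased-PR variant.
    sfg maps a domain to a list of (neighbor, weight) pairs (the SFG).
    The teleport vector splits the jump probability evenly over the
    distance levels from the seeds and then over the nodes of each level."""
    nodes = set(sfg) | {n for outs in sfg.values() for n, _ in outs} | set(seeds)
    # BFS distances from the seeds along the SFG edges
    dist, frontier, level = {s: 0 for s in seeds}, set(seeds), 0
    while frontier:
        level += 1
        frontier = {n for dom in frontier for n, _ in sfg.get(dom, [])
                    if n not in dist}
        for n in frontier:
            dist[n] = level
    far = max(dist.values()) + 1              # unreachable: max distance + 1
    for n in nodes:
        dist.setdefault(n, far)
    levels = {}
    for n in nodes:
        levels.setdefault(dist[n], []).append(n)
    teleport = {n: 1.0 / (len(levels) * len(levels[dist[n]])) for n in nodes}

    support = {n: (1.0 if n in seeds else 0.0) for n in nodes}  # init_seeds = 1
    for _ in range(max_iter):
        incoming = {n: 0.0 for n in nodes}
        for dom in nodes:
            outs = sfg.get(dom, [])
            if outs:
                for neigh, w in outs:
                    incoming[neigh] += support[dom] * w
            else:                             # sink: redistribute via teleport
                for n in nodes:
                    incoming[n] += support[dom] * teleport[n]
        new = {n: (1 - d) * teleport[n] + d * incoming[n] for n in nodes}
        for s in seeds:
            new[s] += 1.0                     # SS: seeds boosted by init_seeds
        done = sum(abs(new[n] - support[n]) for n in nodes) < conv
        support = new
        if done:
            break
    top = max(support.values())
    return {n: v / top for n, v in support.items()}  # scale into [0, 1] (assumed)
```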
For these two BCs, we gathered well-known domains that are generally considered fully supportive of only one of the ABs, without inspecting though their link coverage to the respective seeds, to avoid any bias towards our graph-based approach. Furthermore, we did not include the original seeds in this collection. In total, we collected 50 domains for BC1 and 65 domains for BC2, including newspapers, radio and television channels, blogs, pages of politicians, etc. This collection of domains is our gold standard. We crawled a subset of the Greek web by running four instances of the crawler: one with 383 sites related to the Greek political life, one with 89 sport-related Greek sites, one with the top-300 popular Greek sites according to Alexa, and a final one containing 127 seeds related to big Greek industries. We black-listed

Below we report the results of our experiments over an i7-5820K 3.3 GHz system, with 6 cores, 15 MB cache, 16 GB of RAM, and a 6 TB disk. For each of the two BCs and for each algorithm, we ran experiments for various numbers of iterations n and for the Biased-PR variations, over the singleton ABs of the 9 political parties and the 6 sports teams. For Biased-PR we evaluate all possible behaviours of the surfer using the parameters θ_conv = 0.001 and d = 0.85. We also provide the average number of iterations for convergence over all ABs for Biased-PR. We report the run times in seconds, along with the metrics Average Golden Bias Ratio (AGBR) and Average Golden Similarity (AGS), which we introduce in this work. The AGBR is the average bias score of the golden domains, as computed by the algorithms for a specific BC, divided by the average bias score of all domains for this BC. The higher the value, the more easily we can discriminate the golden domains from the rest. On the other hand, the AGS is the average similarity of the golden domains to their corresponding ABs. The higher the similarity value, the more biased the golden domains are found to be by our algorithms towards their aspects. A high similarity score though, does not imply high support for the golden domains or high dissimilarity for the rest. The perfect algorithm will have high values for both metrics. The results are shown in Table 2.

The difference between the BC1 and BC2 results implies a less connected graph for BC2 (higher AGBR values for BC2), where the support flows to fewer domains, but with a greater interaction between domains supporting different aspects (smaller AGS values). What is remarkable is the striking time performance of IC, suggesting that it can be used in real time and with excellent results (at least for AGBR). On the other hand, LT is a poor choice, being the slowest of all and dominated in every aspect by IC. Regarding Biased-PR, only the SS variation offers exceptional performance, especially for AGS. The DS and NS variations are more expensive and have the worst results regarding AGBR, especially the NS one that avoids bias. In most cases, the algorithms benefit from more iterations. The SS variation of Biased-PR needs only 40 iterations for BC1 and 31 for BC2 to converge, showing that fewer nodes are affected by the seeds in BC2. Generally, the IC and the SS variation of Biased-PR are the best options, with IC allowing the real-time ingestion of ABs. Still, we need to evaluate the algorithms on larger graphs and for more BCs. We also manually inspected the top domains according to the bias and support scores for each algorithm and each BC.
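For reference, the two evaluation metrics can be computed as in the following sketch. Treating the "similarity" in AGS as the cosine similarity between a golden domain's per-AB support vector and the one-hot vector of its gold aspect is an assumption, as is the dictionary-based bookkeeping.

```python
import math

def agbr(bias_scores, golden):
    """AGBR: average bias score of the golden domains for a BC, divided by
    the average bias score of all domains for that BC."""
    avg_golden = sum(bias_scores[d] for d in golden) / len(golden)
    avg_all = sum(bias_scores.values()) / len(bias_scores)
    return avg_golden / avg_all

def ags(support_vectors, golden_aspect):
    """AGS: average similarity of each golden domain to its gold aspect.
    support_vectors maps a domain to its per-AB support vector (fixed AB
    order); golden_aspect maps a golden domain to the index of its gold AB."""
    sims = []
    for dom, i in golden_aspect.items():
        v = support_vectors[dom]
        norm = math.sqrt(sum(x * x for x in v))
        # cosine similarity to the one-hot vector of the gold aspect
        sims.append(v[i] / norm if norm > 0 else 0.0)
    return sum(sims) / len(sims)
```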
Generally, the support scores of the domains were rather low, showcasing the value of other support cues, like the content and the importance of the pages in which the links appear. In the case of BC1, apart from the political parties, we found various blogs, politicians' homepages, news sites, and also the national Greek TV channel to be biased towards a specific political party. In the case of BC2 we found the sports teams, sport-related blogs, news sites, and also a political party to be highly biased towards a specific team, which is an interesting observation. In both cases we also found various domains with high support for all ABs, suggesting that these domains are good unbiased candidates. Currently, the bias goggles system is not able to pinpoint false positives (i.e., pages with non-supportive links) and false negatives (i.e., pages with content that supports a seed without linking to it), since there is no content analysis. We are certain that such results can exist, although we were not able to find such an example in the top results of our study. Furthermore, we are not able to distinguish links that can frequently appear in users' content, like in the signatures of forum members.

In this work, we introduce the bias goggles model that facilitates the important task of exploring the bias characteristics of web domains with respect to user-defined biased concepts. We focus only on graph-based approaches, using popular propagation models and the new Biased-PR PageRank variation that models the behaviours of biased surfers. We propose ways for the fast retrieval and ingestion of aspects of bias, and offer access to a developed prototype. The results show the efficiency of the approach, even in real time. A preliminary evaluation over a subset of the Greek web and a manually constructed gold standard of biased concepts and domains shows promising results and interesting insights that need further research.

In the future, we plan to explore variations of the proposed approach where our assumptions do not hold. For example, we plan to exploit the supportive, neutral, or opposing nature of the available links, as identified by sentiment analysis methods, along with the importance of the web pages in which they appear. Content-based and hybrid approaches for computing the support scores of domains are also in our focus, as well as the exploitation of other available graphs, like the graphs of friends, retweets, etc. In addition, interesting aspects include how the support and bias scores of multiple BCs can be composed, providing insights about possible correlations of different BCs, as well as how the bias scores of domains change over time. Finally, our vision is to integrate the approach into a large-scale WSE/social platform/browser, in order to study how users define bias, create a globally accepted gold standard of BCs, and explore how such tools can affect the consumption of biased information. In this way, we will be able to evaluate and tune our approach in real-life scenarios, and mitigate any performance issues.

References

[1] De-biasing user preference ratings in recommender systems
[2] HAT-trie: a cache-conscious trie-based data structure for strings
[3] Man is to computer programmer as woman is to homemaker? Debiasing word embeddings
[4] Bias in algorithmic filtering and personalization
[5] Fair and balanced? Quantifying media bias through crowdsourced content analysis
[6] Algorithmic decision making and the cost of fairness
[7] Fairness through awareness
[8] The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections
[9] PageRank beyond the web
[10] Learning influence probabilities in social networks
[11] Algorithmic bias: from discrimination discovery to fairness-aware data mining
[12] Measuring personalization of web search
[13] Measuring price discrimination and steering on e-commerce web sites
[14] Equality of opportunity in supervised learning
[15] Burst tries: a fast, efficient data structure for string keys
[16] Big data: a report on algorithmic systems, opportunity, and civil rights. Executive Office of the President
[17] Maximizing the spread of influence through a social network
[18] Events and controversies: influences of a shocking news event on information seeking
[19] Quantifying search bias: investigating sources of bias for political searches in social media
[20] The tyranny of data? The bright and dark sides of data-driven decision-making for social good
[21] Is Twitter a public sphere for online conflicts? A cross-ideological and cross-hierarchical look
[22] BiasWatch: a lightweight system for discovering and tracking topic-sensitive opinion bias in social media
[23] Measuring search engine bias
[24] Social influence bias: a randomized experiment
[25] In Google we trust: users' decisions on rank, position, and relevance
[26] The filter bubble: what the Internet is hiding from you
[27] On measuring bias in online information
[28] Auditing algorithms: research methods for detecting discrimination on internet platforms. Data and discrimination: converting critical concerns into productive inquiry
[29] Risk, race, and recidivism: predictive bias and disparate impact
[30] Search engine coverage bias: evidence and possible causes
[31] Secular vs. Islamist polarization in Egypt on Twitter
[32] Quantifying political leaning from tweets and retweets
[33] Message impartiality in social media discussions
[34] Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment