title: Dashboard Design Patterns
authors: Bach, Benjamin; Freeman, Euan; Abdul-Rahman, Alfie; Turkay, Cagatay; Khan, Saiful; Fan, Yulei; Chen, Min
date: 2022-05-02

This paper introduces design patterns for dashboards to inform their design processes. Despite a growing number of public examples, case studies, and general guidelines, there is surprisingly little design guidance for dashboards. Such guidance is necessary to inspire designs and to discuss tradeoffs in screenspace, interaction, and information shown. Based on a systematic review of 144 dashboards, we report on eight groups of design patterns that provide common solutions in dashboard design. We discuss combinations of these patterns in dashboard genres such as narrative, analytical, or embedded dashboards. We ran a two-week dashboard design workshop with 23 participants of varying expertise working on their own data and dashboards. We discuss the application of patterns in the dashboard design process, as well as general design tradeoffs and common challenges. Our work complements previous surveys and aims to support dashboard designers and researchers in co-creation, structured design decisions, and future user evaluations of dashboard design guidelines. Detailed pattern descriptions and workshop material can be found online: https://dashboarddesignpatterns.github.io

Dashboards offer a curated lens through which people can view large and complex data sets at a glance [20, 33]. They combine visual representations and other graphical embellishments to provide layers of abstraction and simplification for numerous related data points, so that viewers get an overview of the most important or relevant information in a time-efficient way.
Their ability to provide insight at a glance has led to dashboards being widely used across many application domains, such as business [20, 38], nursing and hospitals [8, 17, 30, 37, 53], public health [35], learning analytics [12], urban analytics [36], personal analytics, energy [23], and more, summarized elsewhere [20, 41, 44, 54]. These examples, designed mainly for domain experts, have since been complemented by dashboards for public health or political elections, designed for a more general audience and disseminated through news media [55] or dedicated dashboard and tracker websites [5, 14, 48]. There are many informative high-level guidelines on dashboard design, including advice on visual perception, reducing information load, the use of interaction, and visualization literacy [17, 20, 41, 54]. Despite this, we know little about effective and applicable dashboard design, and about how to support rapid dashboard design. Dashboard design is admittedly not straightforward: designers have access to numerous data streams, which they can process, abstract, or simplify as they see fit; they have a wide range of visual representations at their disposal; and they can structure and present these visualizations in numerous ways to take advantage of the large screens on which they are viewed (vs. individual plots that make more economical use of space). Such a number of choices can be overwhelming, so there is a timely need for guidance about effective dashboard design, especially as dashboards are increasingly being designed for a wider non-expert audience by a wide group of designers who may not have a background in visualization or interface design. In this work, we analyze the visual design of 144 dashboards to better understand the patterns and design practices used by dashboard designers. By coding these dashboards, we formalized 42 design patterns (Fig. 1) that describe common solutions to design decisions (Sect. 3).
We group the patterns into two high-level groups: content patterns that describe what information is shown (data information, meta information, visual representation) and composition patterns that describe how components are laid out across one or many dashboard pages (page layout, screenspace, structure, interaction, color). We then describe six 'genres' of dashboard (Sect. 4) with shared characteristics and common design patterns. These can be related to other genres of information visualization, such as Multiple Coordinated Views systems [42] or infographics. Our review and pattern collection help us discuss design tradeoffs (Sect. 5) and possible design frameworks (Sect. 7). To better understand the role these design patterns and genres play in the dashboard design process, we ran a two-week dashboard design workshop (Sect. 6) with 23 participants. Participants were a mixture of advanced and novice designers and spanned a variety of backgrounds across the academic, public, and private sectors. Participants came with their own data and dashboard challenges, which we used for group discussions. During the workshop and regular drop-in sessions, most participants worked on dashboard design mockups in the collaborative design platform Figma, while others designed directly in tools such as Tableau or PowerBI. The workshop showed that our pattern collection and the associated terminology provided a useful framework to streamline dashboard design for both novice and advanced designers, during both individual design and co-design. Discussions revealed challenges in balancing the amount of information shown, designing for a specific audience, and how best to contextualize the information shown. Our findings extend prior knowledge about, e.g., dashboard data characteristics [51] and the intended audience and use of dashboards [44].
We provide valuable insight into dashboard design, leading to applicable design knowledge that can inform and inspire the creation of future dashboards and dashboard creation tools, help teach dashboard design, and potentially guide structured evaluations of the effectiveness of individual dashboards. A detailed description of all our design patterns, alongside design guidance and other materials from the workshop, is available online at https://dashboarddesignpatterns.github.io; we invite the community to contribute, use, discuss, and extend dashboard design patterns. There are many definitions that describe the essential characteristics of dashboards, often exposing opposing views and design guidelines [20, 21, 44]. At a high level, dashboards have been described as the 'tip of the iceberg', providing a bird's-eye view of what a user needs to know [54]. Few [20] highlights four key aspects in describing dashboards as "a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so that the information can be monitored at a glance" (emphasis ours). Similarly, Kitchin [33] echoes that dashboards do not simply reflect data but are a purposefully created lens through which data must be seen and can be engaged with: "a dashboard seeks to act as a translator, not simply a mirror, setting the forms and parameters for how data are communicated and thus what the user can see and engage with." Other definitions include the provision of metrics resulting from data analysis, the display of dynamic information, and the ability to provide drill-down capabilities for data exploration [41]. Elsewhere, Few [21] distinguishes between dashboards for monitoring (usually static) and dashboards for analytical tasks (Faceted Analytical Displays), which are most similar to Multiple Coordinated Views systems in that they combine multiple interactive charts and tables.
This differentiation is essential because it implies a tradeoff between (i) the amount and level of detail of information that can comfortably fit a single screen view and (ii) the effort required to access and explore these data through interaction. Dashboards have been reported to serve a range of purposes. They can be designed to support decision making at an executive level (strategic), summarize data about departments (tactical), and provide information for front-line workers (operational) [40]. They can also provide consistency with respect to key performance indicators within an organization, help monitor performance, facilitate planning, and support communication [39]. Through an extensive survey of 83 dashboards, Sarikaya et al. [44] grouped dashboards into interactive dashboards (mostly from BI), static dashboards, dashboards for motivation, and dashboards for learning and personal analytics. This diverse set of usage goals suggests different solutions that suit the audience, context, and tasks. Although there is a lack of consensus over what exactly should be considered a dashboard (versus, e.g., a Faceted Analytical Display [21]), it is clear that dashboards play many roles in their viewers' personal or professional lives. In this work, we are open-minded about what a dashboard can be, to give wider insight into the range of design possibilities.
There is agreement among many case studies and scholars that a dashboard: should not overwhelm users [54]; should avoid visual clutter [20]; should avoid poor visual design and choose KPIs carefully [40]; should align with existing workflows [19]; should not show too much data [27]; should have both functional features (i.e., what the dashboard can do) and visual features (i.e., how information is presented) [54]; should provide consistency and interaction affordances and manage complexity [44]; and should organize charts symmetrically, group charts by attribute, clearly separate these groups of charts, and order charts according to time [8]. Such guidelines can help to inform design at a high level and draw heavily from general knowledge on perception, visualization, and information architecture. For other design decisions there is less consensus, as they require tradeoffs. For example: when do we show number values and tables, and when do we show visualizations? [54] How much interaction do we include in a dashboard? [4] How much information should we include on a single page? How do we personalize a dashboard? [50] Does all the content of a dashboard have to fit a single screen size and, if so, how do we deal with different screen sizes? We could not find these questions covered in the literature, and our collection of design patterns aims to complement these high-level guidelines by providing an actionable overview of solutions used in the wild. Critically for HCI, dashboards have reportedly [40] been rejected by executives on the basis that they had not been involved in the design process, highlighting the need for a user-centered, iterative design process with a shared understanding of concrete design options. Closest to our work is a set of visual features identified by Sarikaya et al. [44], describing solutions such as multipage dashboards, annotations, and interactive features.
While the authors explicitly did not aim for an extensive report on these features, they report on a range of dashboard designs that take inspiration and features from infographics and other storytelling genres. Our dashboard corpus extends theirs with contemporary examples of dashboards, and we see our work as an extension of their pioneering work. A design pattern generally describes a common solution to a recurrent problem. Unlike design spaces or taxonomies, design patterns are not exclusive constructs; they can be combined and exist independently of each other. Design pattern collections are often used in classrooms and for education [1-3, 18]. Pattern collections have been created for visualization, including those by Card and Mackinlay [11], Chen [13], He et al. [24], Schulz et al. [45], and Sedig et al. [46]. These collections show the breadth of visualization design options and can support designers in making deliberate choices, e.g., about tasks [45]. Design patterns have also been identified for more specialized forms of visualization, including graphical abstracts [26], data comics [7], and sketchnotes [56]. Our design patterns for dashboards complement these existing collections and are specific in that they are derived from an analysis of 144 dashboards. Our patterns are purely descriptive in that they capture existing solutions; they can serve as speculation tools in the design process and discussion, as well as providing a resource of concepts that can inform the structured evaluation of dashboards, a current gap in the literature. We gathered a corpus of 144 dashboards by starting with the 83 dashboards collected by Sarikaya et al. [44], discarding three due to lack of clarity or context. We then added 64 new dashboards found on news websites and personal applications; 36 of these were related to Covid-19 due to recent public interest.
We also included a wide variety of applications (e.g., health and fitness, personal informatics, transport, energy, finance), but given the sheer number and contexts of dashboards, any sampling method is necessarily limited. Three of the authors then created independent coding schemes by qualitatively analyzing the dashboards in the corpus using the constant comparative method [22]. Where available, we coded the interactive version of the dashboard (61%). These schemes were refined and consolidated with the help of three additional coders not involved in creating the initial schemes. Our focus was on coding the structure, visual design, and interactivity of the dashboards; a key distinction from prior work, which, e.g., focused on the intentions of the dashboards [44]. Also, our goal was not to create an exhaustive taxonomy of dashboard design options, but to describe, for the first time, the user interface building blocks used in dashboard design. Our final coding describes 42 design patterns, grouped into eight categories. The following section describes these groups of design patterns (Fig. 1). Detailed descriptions for each pattern can be found on our website: https://dashboarddesignpatterns.github.io. Note that percentages in the following sections indicate the percentage of dashboards where each pattern has been observed; these may not add up to 100% since the patterns are not mutually exclusive. The CONTENT of a dashboard is made up of individual dashboard elements, the crucial 'ingredients' relating to the data and its presentation. We identified three groups of design patterns relating to content: data information, meta information, and visual representation of data. We disregard visual components used purely for decoration or embellishment, e.g., illustrative pictures, dividers, and borders. This group of patterns identifies the types of information presented and the extent of abstraction used.
We found that the information presented roughly ranged from detailed datasets that offer a more complete view of the data, to more abstracted forms that simplify and reduce the amount of information shown (e.g., individual aggregated values, trends). Starting with more complete data, we found detailed data sets (94%), which provide the most complete view. Aggregation (67%), filtering (42%), and derived values (69%) provide purposeful summaries of data through summarization or analysis, e.g., to calculate trends and other measures. Thresholds (21%) are a particular type of judgment based on data values. Finally, a single value (88%) of a larger data set can be shown, e.g., the most recent value from a time series. These patterns capture additional information used to provide context and explanation. In some cases, this is implicitly understood from the context the dashboard is used in, e.g., the current date, or data released by a specific organization. We report the number of dashboards with explicit meta information, followed by a total including implicit meta information. We found that 9% of dashboards showed no meta information. We found indications of: data sources (41%/69%); disclaimers (38%) informing about data processing and context; data descriptions (44%/57%) explaining what the dashboard shows; update information (64%/73%) with timestamps; and annotations (10%), which include extra graphical embellishments added by the designer to highlight specific points, changes, developments, etc. This group describes common solutions for presenting data in dashboards. We found a wide range of visual representations that, like data information, represent varying levels of abstraction. For example, tables (42%, Fig. 2(d)-middle), lists (9%), and detailed visualizations (88%) can provide very detailed information and allow viewers to read precise values.
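The data information patterns above, from a detailed data set down to a single value, can be sketched with a small, self-contained example. The data, threshold, and function names below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of the data-information patterns, applied to a
# hypothetical daily time series of case counts.

def aggregate_weekly(series):
    """Aggregation: reduce daily values to weekly sums."""
    return [sum(series[i:i + 7]) for i in range(0, len(series), 7)]

def filter_recent(series, days=28):
    """Filtering: keep only the most recent values."""
    return series[-days:]

def derive_trend(series):
    """Derived value: direction of change between the last two values."""
    if series[-1] > series[-2]:
        return "up"
    return "down" if series[-1] < series[-2] else "flat"

def exceeds_threshold(series, limit):
    """Threshold: a judgment based on the latest data value."""
    return series[-1] > limit

def single_value(series):
    """Single value: the most recent data point."""
    return series[-1]

daily_cases = [120, 135, 150, 160, 155, 170, 180,
               200, 210, 190, 220, 230, 240, 260]
print(aggregate_weekly(daily_cases))  # [1070, 1550]
print(derive_trend(daily_cases))      # "up"
print(single_value(daily_cases))      # 260
```

Each function trades completeness for conciseness in a different way, mirroring the abstraction spectrum observed in the corpus.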
Detailed visualizations act as standalone components with proper axis labeling, legends, and resolution to show details in the data. They often span between one third and the full width or height of a dashboard and can be accompanied by other representations, such as numbers and trend arrows. Miniature charts (21%), by contrast, are small and concise visualizations without axis descriptions, labels, or tickmarks. The idea is to give a quick understanding of a trend, akin to sparklines [49], rather than allowing the reading of precise values. Fig. 2(c) shows examples of miniature charts, all reduced versions of common visual representations. More abstract visual encodings include gauges & progress bars (17%), which visualize a single value within its context (e.g., a percentage). Specific solutions include semi-circular gauges, linear progress bars, and thermometers (Fig. 2(b)). Some gauges come with an indication of 'critical' ranges, i.e., a threshold indicating whether values are positive/negative. Pictograms (19%) are abstract representations or symbols that illustrate concepts on the dashboard. They can (i) represent data (pictogram-as-data), such as the existence of a value (Fig. 2(a)-left) or a quantity through "filled pictograms" (Fig. 2(a)-middle); or (ii) act as indices (pictograms-as-index) that designate the type of a data value found close to the pictogram (Fig. 2(a)-right), without conveying specific data information. Trend arrows (13%) are small arrows pointing up/down, used to indicate the direction of change in a data value. They can be binary or include variations in slope. Finally, numbers (62%) are numerical representations of individual values, placed prominently on a dashboard and mostly used to indicate single key values (e.g., Fig. 2). From our analysis, we see that classical data visualization, i.e., detailed visualizations, miniature charts, and gauges, is just a small part of the diverse visual language used to present data.
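The idea behind miniature charts, conveying a trend without axes, labels, or tickmarks, can be approximated even in plain text. The following minimal sketch (our illustration, not a technique from the paper) maps values onto Unicode block characters in the spirit of a sparkline:

```python
# Text sparkline: a miniature chart showing only the shape of a series.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Map each value to one of eight block characters by relative height."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant series
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

print(sparkline([1, 3, 5, 4, 8, 6, 2]))  # "▁▃▅▄█▆▂"
```

As with the miniature charts in the corpus, precise values are unreadable here; only the overall trend survives the abstraction.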
The COMPOSITION of a dashboard determines how its individual content components are combined and shown on screen. Dashboards show multiple information elements, and their structure and layout on the page are meaningful design decisions. We identified five aspects of composition: the page layout of components, the methods used to fit the dashboard to the available screenspace, the structure of content across multiple pages, the range of interactions supported by the dashboard, and the color scheme used by each dashboard. Page layout patterns describe how widgets are laid out and, often, implicitly grouped in a dashboard. There are many layout classifications for information graphics, e.g., for graphical abstracts [26] (linear, circular, zig-zag, forking, nesting, parallel, orthogonal, centric, free), infographics [7] (large panel, annotated, tiled, grouped, grid, parallel, network, branched, linear), or sketchnotes [56] (freeform, grid, radial, linear). Here we identify the most prevalent high-level page layout patterns found in our dashboard corpus. When describing page layout patterns, we define a widget as the smallest unit we consider. Many dashboards group information and their visual representations (Sect. 3.1.3) hierarchically, and it is notoriously hard to create a clear definition of grouping in any visual composition. For example, the first dashboard in Fig. 3 combines numbers, gauges, and pictograms to form a specific form of complementary information grouping; the pictogram, number, and gauge all relate to the same piece of information, or at least can be understood as a semantic-visual unit aiming at a specific task. With our page layout patterns, we consider how these higher-level widgets are organized on a single page, identifying the prominent layout decision used to group a potentially large set of content components.
Open layouts (22%) place widgets (possibly of different sizes and aspect ratios) in an open way, without apparent specific rules. Often, widgets are aligned on a grid (Fig. 3-#1) following classical design guidelines. There are no strong semantics associated with the location and adjacency of widgets, and each widget seems to have equal importance. Stratified layouts (49%) present widgets in a top-down ordering. A stratified layout can be used to emphasize information at the top (Fig. 3-#2) over other information. Table layouts (19%) align widgets into semantically meaningful columns and rows. They can be used to repeat information and visual encodings, e.g., across different facets or data items (Fig. 3-#3). Table layouts make it easy to retrieve and relate information. Grouped layouts (33%) group two or more widgets with a specific relation, in many cases labeled with a common title. Grouping can be achieved through, e.g., the Gestalt laws of proximity or closure. Finally, schematic layouts (1%) place widgets in some schematic relationship, such as a physical layout (#59), a network (#42), or possibly a process workflow. Such dashboards could leverage a user's domain knowledge to help them orient themselves in the dashboard. We emphasize that none of these page layout patterns are exclusive, and combinations are common. For example, the second dashboard in Fig. 3 shows a stratified layout (pictograms on the top, visualizations on the bottom) combined with an open layout. Similarly, the fourth dashboard in the same figure combines a stratified layout with a grouped layout, emphasizing the key indicators on top. Screenspace patterns describe solutions used to fit content onto a single screen. We call these screens pages, as content can be split across multiple pages or overflow a screen. At any given time, a single page is visible to the viewer. Screenfit (44%) means that all content of a page fits the screen without the need for interactions like scrolling or tooltips.
This is the standard solution for concise static dashboards. Overflow (22%) allows a page to be larger than the available screenspace. Overflows are usually explored through vertical scrolling, and there is potentially no limit to the size of the overflow. Detail on demand (47%) shows extra content on, e.g., mouseover through tooltips, or opens pop-ups on clicking buttons or widgets, giving access to more data without it needing to fit alongside other content. Parameterization (52%) is a further means of showing more content by allowing content to be specified through a range of parameters. These parameters can be set through sliders, checkboxes, or drop-down menus. Unlike overflow and detail-on-demand, parameterization can show potentially very large and multifaceted data sets, but it requires manual specification and can typically only show one state at a time. Multiple pages (42%) splits content across multiple separate pages, accessible through navigation patterns. Structural patterns capture relations between multiple pages of a dashboard. In the simplest case, a dashboard has a single page (61%), which can imply the need for screenspace patterns such as overflow, detail-on-demand, or parameterization to fit the desired content into a single page or screen. Multiple pages can be related hierarchically, in parallel, semantically, or openly. A parallel structure (16%) can imply repetition of the layout, data, and visual representations, e.g., across different courses in a university or departments in a company. A parallel structure can be combined with a hierarchical one, e.g., in the case of geographic regions that have a political hierarchy but otherwise show similar information. Hierarchical structures (19%) are used for drill-down and can result in a series of pages, each gradually showing more detail than the previous one. Open structures (8%) can capture any other kind of structural relationship.
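The multiple pages screenspace pattern can be sketched as a simple split of a widget list into fixed-size pages, each shown one at a time and reached through navigation. The widget names and page size below are hypothetical:

```python
# Sketch of the "multiple pages" pattern: content that would overflow a
# single screen is split into fixed-size pages navigated one at a time.

def paginate(widgets, per_page):
    """Split a flat widget list into pages of at most per_page widgets."""
    return [widgets[i:i + per_page] for i in range(0, len(widgets), per_page)]

widgets = ["kpi-cases", "kpi-deaths", "map", "timeline", "table", "notes"]
pages = paginate(widgets, per_page=4)
print(len(pages))   # 2
print(pages[0])     # ['kpi-cases', 'kpi-deaths', 'map', 'timeline']
```

A hierarchical structure would instead nest such pages, with each drill-down step producing a more detailed page for a subset of the data.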
This pattern group describes common interaction approaches found within the dashboards. Interaction routines manifest themselves through interactive data entities (i.e., data as the interface), user interface elements, and window-level interactions. The patterns we highlight in this group refer to common roles that interaction can play in dashboard use, expressed through specific user interface components. These are defined broadly, to identify general usage patterns and how they can be implemented through dashboard designs. Only 4% of dashboards in our corpus had no interaction. We found four major uses of interaction in dashboards, supported through an often overlapping range of UI components, such as tabs, sliders, drop-down menus, etc. Exploration (89%) interactions allow users to explore data elements and the relations between them. They can take many forms, including brushing + linking (18%) [10] interactions that link data representations across different views. Details of individual or groups of items can also be revealed through interactive features, e.g., pop-ups with details, a table listing data features, etc. (71%). Navigation (76%) enables movement between pages of a dashboard or screen. Navigation is supported through scrollbars, navigation buttons, or page tab links. Interactions for personalization (23%) allow viewers to redefine and reconfigure the information shown within a dashboard. These interactions can add new representations, e.g., by choosing a new data feature to be visualized, or resize and reorder the existing encodings within the dashboard, leading to more bespoke dashboard configurations. Finally, Filter & Focus (55%) interactions allow viewers to find or focus on specific data, e.g., by searching for particular data values or applying filtering criteria so that only relevant information remains. These interactions are typically facilitated by user interface elements like text fields, drop-down menus, radio buttons, and checkboxes.
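The Filter & Focus pattern can be sketched as criteria collected from UI elements (e.g., a search field and a checkbox) being applied to the records a dashboard shows. The data, field names, and function below are illustrative assumptions:

```python
# Sketch of Filter & Focus: a text search plus a checkbox-style flag
# reduce the set of records shown on the dashboard.

records = [
    {"region": "Lothian", "cases": 120, "rising": True},
    {"region": "Fife", "cases": 45, "rising": False},
    {"region": "Highland", "cases": 80, "rising": True},
]

def filter_records(records, search="", rising_only=False):
    """Keep records whose region matches the search text and, optionally,
    whose values are currently rising."""
    out = [r for r in records if search.lower() in r["region"].lower()]
    if rising_only:
        out = [r for r in out if r["rising"]]
    return out

print([r["region"] for r in filter_records(records, rising_only=True)])
# ['Lothian', 'Highland']
```

In a real dashboard, the `search` and `rising_only` arguments would be bound to interactive widgets, and the filtered result would drive every linked view.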
Color is an important visual variable in visualization. While it can be used for different purposes in dashboards and comes with cultural implications, we examined the use of color at the dashboard level, i.e., across multiple widgets and views. Shared color schemes (35%) give unique, recognizable colors to groups or facets in the data. This helps maintain consistency and familiarity throughout the entire dashboard (e.g., Fig. 4). Data encoding (80%) color schemes use colors primarily as a visual variable to encode categories or scales within the data, e.g., displaying values with color on a choropleth map (#5 in Fig. 3). Semantic (26%) colors indicate specific good-bad semantics, such as traffic-light schemes indicating the status of patients in health-related dashboards. Semantic coloring is often used in conjunction with gauges and progress bars to indicate multiple meaningful thresholds. Emotive (6%) color schemes can add aesthetic strength and evoke an emotive response in viewers [28]. This use of color does not seem common in dashboards, though it is more popular in those that resemble infographics (e.g., #19, #60). During our analysis of the dashboard corpus, we identified six dashboard types that can be seen as design genres, because they share characteristics, combinations of design patterns, contexts, or specific goals. After describing an initial set of genres and creating a respective codebook, we again applied structured coding to cover all the dashboards in our collection. Coding was done by two independent coders and yielded strong agreement. These genres invite discussion about how a specific pattern has been put into practice through dashboard design, can be used in design exploration, and can inform discussion about the 'right' dashboard design for a given context. Iconic figures summarize common design patterns found in each genre.
Static dashboards (21%): By static dashboard, we refer to the traditional notion of a dashboard as a static (no-interaction) and screenfit display of information; for example, Fig. 3-left (#19) shows a classic dashboard that does not require interaction. We did not find many examples of classic static dashboards, which we attribute to the fact that contemporary dashboards are digital, making it easy to support interaction and drill-down tasks through more complex structures. Another reason might be that the range of display sizes on desktop computers, tablets, and mobiles encourages adaptive solutions (e.g., use of overflow to utilize screenspace, or a multi-page structure). Analytic dashboards (73%): This genre is what Few would call a Faceted Analytical Display [21]. We see strong parallels to the concept of Multiple Coordinated Views [42]. This genre generally uses complete visualizations (rather than just miniature charts and trend arrows). Many of these views are fully interactive, providing pan+zoom, focus+context, tooltips, brushing+linking, and other exploration and navigation strategies. These dashboards can also provide parameterization, and use tabs or linking to switch between multiple pages of the dashboard. Importantly, these dashboards generally do not use overflow pagination, since scrolling complicates comparing visualizations. Magazine dashboards (2%): Many dashboards relating to Covid-19, climate change, politics, etc. were created by news agencies and similar media outlets. These dashboards are found as an integral part of journalistic articles and resemble visualizations of the magazine genre [47]. The text goes beyond basic meta information to provide additional commentary and storytelling about the data. These dashboards are often broken into several pages and make overflow use of screenspace on a single page, with visualizations positioned at appropriate points in the text to tell a story about what the data shows.
As an example, The Economist's Covid-19 tracker (#131) provides viewers with a snapshot of Covid-19 cases and deaths across Europe, with tables, time series, trend lines, and spike maps interleaved with narrative text. In addition to regular visualization updates, written content is also frequently updated as the 'story' changes, e.g., responding to emerging trends, the effects of vaccination, etc. These dashboards naturally require more effort to design and maintain; whilst visualizations may update automatically as the data changes, editorial oversight is necessary to ensure the story remains consistent with the changing data and its visual representation. Infographic dashboards (6%): We also found examples of dashboards similar to infographics, including decorative graphical elements and other non-data ink shown alongside data representations. Similar to narrative dashboards, they use non-data media to annotate and embellish data. For example, Fig. 5 shows an infographic-style dashboard that uses text, annotations, and other embellishments to enhance data presentation and, in turn, help the data convey a story. Other examples contain less artistic content and are presented more like posters (e.g., #33, #34). Infographic dashboards were mostly used to represent static datasets, e.g., presenting snapshots of key data on a monthly or yearly basis. Often these infographics exceeded the vertical screenspace and could be explored through scrolling (overflow). The artistic content of infographic dashboards may require additional design time, and the chosen annotations and embellishments will be tailored to particular data points, so they are less suited for dynamic dashboard use where data changes often. These dashboards may thus have a different intended use, with an audience expected to discover them over a longer period of time rather than checking in frequently for updates.
Repository Dashboards (17%): We found several examples that list a multitude of charts on a single website, with an overflow structure that impedes proper analytics, i.e., comparing views. Their charts often lack textual or other narrative explanations, except for metadata, which is often extensive and provided for transparency. Charts may offer some interaction and usually provide links to explore, filter, and eventually download open data. Data and visualizations are updated regularly, and these dashboards tend to rely on very common visualizations and numbers to present the data. Examples include repositories from governmental and academic institutions, like Our World in Data [43] or the Public Health Scotland Covid-19 dashboard (#109). Embedded Mini Dashboards (4%): Some dashboards were found to be embedded into other applications such as news websites. These concise dashboards occupied only a small area on screen and usually came with a range of interactive features for navigation and for parameterizing the content. Fig. 4 shows an example of a mini dashboard embedded into a news page (#83); like similar mini dashboards, it is linked to a more in-depth narrative dashboard that invites further exploration beyond the initial data at-a-glance. There are many decisions to be considered in a dashboard design process, from high-level (e.g., selecting data and display devices) to more low-level decisions (e.g., color palettes, plot size, pagination structure). High-level decisions will almost always rely on sources over which the designer has little to no agency: the intended audience, the devices and scenario in which a dashboard is used, or the wider team of data analysts and developers supporting the dashboard's creation. 
By providing specific solutions that seem to have worked well in the past, our design patterns and dashboard genres can support the lower-level design decisions that the dashboard designer actually has agency over and which they must solve to satisfy their requirements. These decisions include the use of screenspace, dashboard structure, page layout, color schemes, visual representations, etc. In this section, we reflect on our own design process in creating over 7000 dashboards for Covid-19-related data in the UK [31], as well as our discussions from both preparing and running a dashboard design workshop (Sect. 6). From an information-theoretic perspective, a dashboard encodes a data space that is smaller than the data space of the data to be displayed. This requires a designer to decide which information not to show on screen, and how a user can then obtain the remaining information, if at all. One solution is to reduce information by abstracting data and its visual encodings. For example, consider a time series representing the daily number of positive test cases for Covid-19 during a period of 500 days. Fig. 4 shows a design with four visual encodings (number, trend-arrow, miniature chart, and detailed visualization) for the same time series data. They all lose information, but in different ways: the line chart visualization may lose information through its limited height, and the vertical pixel resolution limits the range of data visible without scrolling. The large number shows the latest figure (a single-value), while omitting all other data points in the time series dataset. The trend arrow (a derived value) and miniature chart are different levels of abstraction between the number and the detailed visualization, representing the full range of abstraction in one dashboard view. However, information loss may also cause confusion, misinterpretation, or erroneous judgment. 
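The four abstraction levels in this example can be sketched in code. The following is a minimal, hypothetical illustration, not taken from any dashboard discussed here: the function names, the 7-day trend window, and the synthetic case series are our own assumptions.

```python
# Hypothetical sketch of abstraction levels for a daily time series:
# a single value, a derived trend, and a downsampled "signature chart".

def single_value(series):
    """Highest abstraction: show only the latest figure."""
    return series[-1]

def derived_trend(series, window=7):
    """Derived value: direction of change between the last two window averages."""
    recent = sum(series[-window:]) / window
    previous = sum(series[-2 * window:-window]) / window
    return "up" if recent > previous else ("down" if recent < previous else "flat")

def miniature(series, bins=20):
    """Signature chart: downsample to a fixed number of bin averages."""
    step = max(1, len(series) // bins)
    return [sum(series[i:i + step]) / len(series[i:i + step])
            for i in range(0, len(series), step)]

# 500 days of synthetic counts (illustrative data only)
cases = [i % 50 + i // 10 for i in range(500)]
print(single_value(cases), derived_trend(cases), len(miniature(cases)))
```

Each function discards progressively less information, mirroring the number, trend-arrow, and miniature-chart encodings; the detailed visualization would render the full series.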
At this point, the dashboard designer needs to counterbalance this information loss through other means, e.g., adding interactions such as tooltips or scrolling, or spreading content across multiple pages. It is clear from this example that there are costs and benefits associated with different levels of abstraction and their visual encodings. However, the tradeoff here is the excessive cost of screenspace when displaying several redundant visual encodings, and the increased cognitive cost of interpreting different levels of abstraction over the same data. It is therefore often necessary to display fewer numbers and less visual information, to reduce the visual complexity of the dashboard. Designers thus need to find an optimal balance between abstraction and cost, e.g., to encode the latest data value(s) as numbers, and rely on perceptually less precise plots to present overviews while reminding users of their past observations. With careful reasoning, exploring our dashboard design space does not necessarily involve exclusively considering many design options in a combinatory manner; rather, a considered combination can guide dashboard design. In other words, a designer must counterbalance any decrease in one parameter (e.g., abstraction) through an increase in one or more of the other parameters. Fig. 6 illustrates this process for the example parameters abstraction, screenspace, number of pages, and interactivity. In a design process, the goal is to minimize each of these parameters: for example, to fit as much information as possible (low abstraction) into the least amount of screenspace, with no interaction, and showing only a single page. Such a solution would arguably be the gold standard in dashboard design: showing all important information at a glance, without the need for costly interaction. The model suggests that negotiating tradeoffs is best represented through a stress function between the parameters at hand. 
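One way to make this stress-function idea concrete is to score each candidate design by a weighted sum of normalized parameter costs and pick the candidate that minimizes it. The sketch below is purely illustrative: the candidate designs, cost values, and weights are hypothetical assumptions of ours, not figures from the paper.

```python
# Minimal sketch of a stress function over the four example parameters.
# Costs are normalized per parameter: 0 = ideal, 1 = worst. All values hypothetical.

def stress(design, weights):
    """Weighted sum of a design's normalized parameter costs."""
    return sum(weights[param] * design[param] for param in weights)

candidates = {
    "static screenfit": {"abstraction": 0.8, "screenspace": 0.2, "pages": 0.0, "interactivity": 0.0},
    "overflow single":  {"abstraction": 0.3, "screenspace": 0.9, "pages": 0.0, "interactivity": 0.2},
    "tabbed multipage": {"abstraction": 0.3, "screenspace": 0.3, "pages": 0.7, "interactivity": 0.5},
}

# A designer's priorities, e.g., abstraction matters most in this scenario.
weights = {"abstraction": 1.0, "screenspace": 0.5, "pages": 0.5, "interactivity": 0.8}

best = min(candidates, key=lambda name: stress(candidates[name], weights))
print(best, round(stress(candidates[best], weights), 2))
```

Changing the weights shifts which genre "wins", which is one way to read the negotiation described above: the stress function does not identify a universally best design, only the best design for a given prioritization.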
In the rest of this section, we discuss tradeoffs that minimize this stress with the help of design patterns. Increasing the screen size is an obvious way to deal with potential information loss, as there is less need to abstract and reduce the amount of visible content. Where large screens are available, information loss can be minimized: e.g., dashboard #42 spans the width of a room using multiple screens. However, large screens are not always available or practical, especially as dashboards find a more general audience who browse via personal and mobile devices. Displaying a lot of information on a 'typical' screen size requires careful page layout and structuring of components and information; i.e., deciding how many pages are necessary, what to show on each page, and how to lay out those components. A good layout and structure needs to consider possible facets in the data (e.g., vaccinations, cases, hospitalizations in the case of a Covid-19 dashboard), as well as the tasks that require information across these facets (e.g., comparing cases and vaccinations). At the same time, a layout needs to prioritize information, e.g., showing information in different sizes or places on each page, perhaps using stratification to put the most important information at the beginning. When multiple facets and similar/repeated information need to be shown, table layouts may be ideal. Likewise, a layout can use repetition within each component, like that shown in Fig. 3-right, which uses a number, trend-arrow, and signature chart to convey data. Repetition like this guides viewers' eyes and helps the interpretation and retrieval of information. In contrast to structuring information across pages, dashboard designers can opt for simpler static dashboards, which show data concisely on a screenfit page without requiring user interaction. Static dashboards are ideal when interaction is not possible, desired, or necessary, and are also suited to print media. 
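Such structural decisions (pagination, stratification, per-facet repetition) can be thought of as a declarative specification that a rendering layer would interpret. The sketch below is purely illustrative; the spec format, keys, and facet names are hypothetical assumptions, not part of any tool or pattern described here.

```python
# Hypothetical declarative spec capturing structure decisions:
# pages, a stratified layout, and repeated widget templates per data facet.

dashboard_spec = {
    "pages": [
        {
            "title": "Overview",
            "layout": "stratified",  # most important facet listed first
            "components": [
                # repetition: the same widget template for each facet
                {"facet": facet, "widgets": ["number", "trend-arrow", "signature-chart"]}
                for facet in ["cases", "vaccinations", "hospitalizations"]
            ],
        },
        {"title": "Details", "layout": "table", "components": []},
    ]
}

# Enumerate every widget on the first page, in stratified order.
first_page = dashboard_spec["pages"][0]
widgets = [(c["facet"], w) for c in first_page["components"] for w in c["widgets"]]
print(len(widgets))  # 3 facets x 3 repeated widgets
```

Separating such a structural description from the rendering makes it easier to iterate on layout and pagination decisions without touching individual visual encodings.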
However, the tradeoff associated with screen size is that static dashboards require carefully chosen abstraction to make the best use of available space (as discussed in the previous section). Interactive parameterization can provide a balance between static and paginated dashboards: reducing the amount of information shown so that it fits on screen, but requiring the user to indicate what information is most relevant to their needs/tasks. Our design space encapsulates many interactive components that support exploration (e.g., tooltips, filtering, search) and navigation through multiple pages of information (e.g., search, tabs, links). Interaction gives viewers the ability to personalize, and to use dashboards in a way that suits their needs. Contrast this with static dashboards that are framed by designers, revealing strong parallels to author- vs. reader-driven storytelling [47]. Especially for narrative examples of dashboards (magazine dashboards, infographic dashboards), interaction can help set the narrative structure and pace. Interaction can be utilized to streamline the volume/velocity of information communicated, i.e., slowly revealing parts of the information/data based on interaction. The simplest interactions for dealing with more information than can fit on screen are scrolling (in an overflow layout) or navigation buttons. These interactions do not interfere with the individual visual encodings (i.e., visual encodings can be static images) and they allow designers to create dashboards that are easily responsive across different screen sizes. Other simple options that do not interfere with the implementation of visual encodings include the use of tabs and links. Detail-on-demand and parameterization may require a more specific implementation, but can be effective, as discussed earlier. The six dashboard genres presented in Sect. 
4 represent different ways that dashboard design decisions and patterns can be combined to create usable dashboards, typically oriented towards specific contexts, tasks, and users. In this section, we discuss differences and tradeoffs between these, to inform their choice in future dashboard design. We see a distinction between curated dashboards (static dashboard , magazine dashboard , infographic dashboard ) that are highly selective of data and visual representation towards a specific goal, and data collections (analytic dashboard , repository dashboard ) that aim to transmit large volumes of information so that viewers can seek the information most relevant to them. Curated dashboards can be described as author-driven storytelling, while collection dashboards can be described as reader-driven storytelling in the terminology of Segel and Heer [47] . Curated dashboards will typically fit on a single screen with a screenfit or overflow layout, and offer limited ability to change the data or its visual encodings (i.e., limited parameterization ). Creating such dashboards requires a greater extent of curation, design, and selection, i.e., representing the right information using the right visual encodings. These dashboards will have well-defined use cases, and the designer's role is to 'translate' the data for that purpose [33] . Data collections are dashboards that provide access to lots of data, represented in different ways for different tasks (e.g., for analytical use or for sharing open data). Although a degree of curation is required (i.e., the designer has made deliberate decisions about how to show the data), there is less need to reduce the amount of visible content. These dashboards typically use pagination and/or overflow , as the goal is not to make the most economic use of screen space but to maximize the amount of information available to users, so that they can find what they are most interested in. 
The use cases for these dashboards are more open-ended, and they are often aimed at a broader audience. Data collections invite interaction and may even require it to meet the user's information needs. We ran an online dashboard design workshop to help us scrutinize and complete the design pattern collection, as well as to refine our discussion of design tradeoffs. The workshop ran for 2 weeks and was open to everyone with a dashboard design task at hand. It aimed to guide people through the main stages and decisions of a design process while using the design patterns for ideation and discussion. Rather than implementing a dashboard in a tool such as Tableau or PowerBI, the course intended to finish with interactive mockups in Figma. Detailed information as well as all material from the workshop can be found online: https://dashboarddesignpatterns.github.io. Outline: A (1) 3h kickoff session gave a fast-forward through the design process and introduced the design patterns. The session started with an (1.1) overview of example dashboards from our collection, asking participants to describe what they see and how these dashboards appear to them upon first sight. Then, we (1.2) introduced dashboard design challenges and discussed high-level guidelines such as those mentioned in Section 2. Then, (1.3) the workshop introduced the framework in Figure 6. Finally, (1.4) the session introduced a possible design process, one stage for each group of design patterns: (i) data information (what information about the data do I need to show?), (ii) dashboard structure (which pages do I need, and how are they related?), (iii) page design (page fit and layout, visual representation), and eventually (iv) interactivity and (v) color. The full process is outlined on our website. Following the kickoff session, we had (2) six asynchronous check-in sessions over two weeks. Each session lasted 1h and was facilitated by one or two of the co-authors. 
During these sessions, we provided individual help to participants, discussing their dashboard examples, giving advice about pattern usage, and discussing related data visualization questions. Each session was attended by six participants on average, discussing 2-4 dashboards during that hour. Finally, we had a (3) debriefing meeting in which we asked participants to show their mockups and reflect on their design process, choices, and challenges. A post-hoc questionnaire collected feedback about the workshop outline and design patterns. The entire workshop was held on Zoom, encouraging participants to work on their design mockups asynchronously and share their designs through Figma. Participants: We had 23 participants in total from diverse backgrounds, including data science, medicine, psychology, economics, bioinformatics, design, etc., with a mix of industry and academic partners. Each participant worked on their own dashboard, within a context they defined themselves and which was relevant to their work. Example dashboards created during the course included an analytic dashboard for browsing learning analytics of hundreds of university courses and student performance data; an infographic dashboard to inform midwives about the risks of Covid-19 and vaccination during pregnancy; an analytic dashboard to track dynamic user interaction log data; a repository dashboard tracking live data about conflicts across the world; an analytical dashboard to help modelers assess and track the performance of their models over various sessions of training and parameter adjustments; a repository dashboard about open government data to allow journalists and politicians to retrieve relevant data; and finally a personal analytical dashboard to monitor energy consumption at home. About half of the participants had prior experience of one to several years in dashboard design with PowerBI and Tableau. 
For almost all participants, this workshop was the first time they deliberately engaged in dashboard design in a structured and guided form. In the drop-in sessions, discussions were sparked by participants sharing their designs. Participants also shared questions, possible answers, and reflections, voiced in a participatory manner. In the following, we summarize key topics. Information overload was a frequent topic, related to what information to include in a dashboard and how much. Participants complained about "too much info", data, and information, and general confusion about where to look. Some commented on small charts and maps, and unclear relations between components: "not sure how the linechart relates to the number". Color was sometimes considered overused and its particular function was not always clear. On the other hand, people liked text and descriptions on dashboards. Optimizing screenspace and reducing the number of pages was a common topic in the discussions. While discussing techniques to make visualizations simple and concise and to make the best use of the screenspace, one participant asked: "can a dashboard ever contain too much information?" This comment sparked a long discussion around how to best use screenspace patterns, layouts, and compressed visual encodings, and resulted in some agreement that there are many techniques to visually compress dashboards. Another participant raised the question of how to design for mobile and desktop screens, and how many versions of a single dashboard are required. These issues led to discussions about novice vs. expert audiences as well as casual vs. frequent users. Novice or casual users will likely need more guidance in the form of clear layout and titles, little to no interaction, and generally less information. Expert or frequent users will be more data literate and require more data for their decisions. 
At the same time, dashboards for experts and frequent users will also likely include more bespoke features, compared to dashboards for novices and casual users, who might be driven by general concerns and more general tasks. Novice and casual users will likely appreciate more message-heavy dashboards (e.g., infographic dashboards or magazine dashboards), while experts might appreciate more "self-serving" dashboards, e.g., analytical dashboards or repositories. Many examples discussed in the drop-in sessions were found to lack context for their information. For example, individual values and numbers require additional data to be meaningful. Such context can be provided through a temporal frame, e.g., data from the past, or through other comparisons. Experts and frequent users might have an implicit understanding of this context, but novices and casual users might not. Color and accessibility were frequent topics in our discussions. Color was heavily used in the dashboard designs, either originating from the participants' own decisions or imposed by the dashboard tool without consulting the designers. Color consistency was a big issue in most designs, and most advice pointed towards removing color where not necessary, e.g., on most single-page dashboards. Accessibility was raised as an often neglected topic with implications for titles, colors, and sizes. While many guidelines exist for accessibility in visualization, we are not aware of any accessibility guidelines that take into account the density, tasks, and scenarios of dashboards. Those participants who had some experience with dashboard design tools commented that these tools made design iterations and prototype building easy. On the downside, the tools offered only complementary support for dashboard creation, with no single tool clearly outpacing the others or supporting all design demands. 
Moreover, many tools lacked support for specific design decisions (e.g., optimal use of space, interactions) and patterns (e.g., shared color scheme). We obtained many quotes from our questionnaire which nicely capture participants' reflections and which do not need much commenting:

• "Start with simple designs that spotlight just the most important data, then add to this as you go; this will help avoid just putting everything you can on the dashboard, keeping it focused. (This might not apply to repository style dashboards)." [P4]
• "I learned not to try ask too many questions for one graph. One graph can give several different answers, but the question should just be one question." [P3]
• "Simpler is often better when it comes to charts: bar charts, line charts, and tables are often clearer and easier to read than their more fancy looking counterparts." [P4]
• "If you try to simplify [your design] too much, you risk imposing your own story." [P3]
• "Use filters, menus, buttons, and parameters to reduce the number of static visuals shown at any one time."

Supporting effective dashboard design requires (1) design knowledge; (2) tradeoffs that require discussion and decision making; (3) design processes; and (4) tool support. Some of these aspects can be drawn from the literature (e.g., knowledge as discussed in Sect. 2); some are presented in this paper (e.g., design patterns, annotated examples, tradeoffs, design processes); others are compelling topics for future work and can be informed by our findings here (e.g., tool support for effective dashboard creation). Design knowledge includes anything that can be learned and read from the literature, or taught in classes in the form of theory. We can think of this as the 'raw material' in any design process: the ideas, theories, models, and examples that compose the syntactic and structural rules and components of any design. Design knowledge can come from empirical observation in studies, and is generally highly formalized; it includes guidelines, examples, and design patterns. 
Design patterns for dashboards are, for the first time, described in this paper. We also provide a set of dashboard genres that illustrate common combinations of design patterns and abstract many of the real-world examples examined in our work. These genres can be put into action as part of the dashboard design process. However, we believe more design patterns could be found, and our collection paves the way to a more comprehensive one. These patterns help establish a terminology for design knowledge about dashboards. They might even point to specific characteristics of dashboards, helping to see dashboards as just one instance among the forms and genres of the wider visualization landscape, such as multiple coordinated views or infographics. At the same time, these patterns help relate dashboards to these other forms and genres, while keeping the delineation of dashboards deliberately blurred to allow for transfer and cross-fertilization of design ideas. An additional contribution is our dashboard corpus. We extended the dashboard collection started by Sarikaya et al. [44] to a set of 144 examples. Due to the sheer number of dashboards online, any exhaustive survey is impossible, but a common corpus can inform future research, design, and discussion, similar to example collections for physical data visualization [15], storytelling [6], or data comics [7] that include plenty of examples from outside academia. Dashboard design guidelines were discussed in Sect. 2.2 and applied and scrutinized through our workshop. Guidelines can often be generic and widely applicable, but most are limited to specific use cases, and some guidelines may contradict each other. Based on our design patterns and discussion of tradeoffs, we are able to formulate additional guidelines, listed on our website: https://dashboarddesignpatterns.github.io/processguidelines.html. 
Every design process and design problem is unique, in that several parameters must be considered: users, tasks, contexts, devices, etc. Design tradeoffs are inevitable when no solution is optimal, i.e., when the knowledge applicable to the specific parameters of a design problem (e.g., guidelines, heuristics, solutions) is contradictory. Dashboard designers can use this knowledge to inform their approach, but other activities (deliberate or otherwise) will be necessary: e.g., reasoning and logic, experimentation and prototyping, user-centered design and evaluation. Decisions will likely influence or conflict with other decisions, making further design tradeoffs necessary and requiring constant iteration towards an effective and usable dashboard design. We discussed a framework for design tradeoffs (Sect. 5), strongly informed by our design patterns and by our experience in designing dashboards [31, 32]. We think of this discussion as a first formal treatment, partially based on information theory, of design decisions for dashboards. This discourse will evolve with the set of design patterns and future empirical studies. Future empirical studies should challenge our reflections and address the specific design tensions described in this paper. There are also many questions left open regarding the costs and benefits of specific solutions and decisions. For example: to what extent do users engage with interactive dashboards? (some similar work exists in the context of interactivity and storytelling on the web [9]); does interaction help users solve tasks effectively and efficiently, or does it just add complexity?; what are 'good' approaches to creating dashboards with multiple pages?; how can paginated dashboards effectively support analytical tasks that require comparison of multiple facets?; how much data information and visual encoding is too much? 
There is a key need for dashboard design processes that structure both knowledge and decisions in tradeoffs, to guide designers towards effective dashboard designs. As part of our workshop, we described a generalized design process, available on our website, that aims to describe mid-level decisions and complement the discussion in Sect. 5. The process assumes that a requirement analysis has defined users, tasks, and datasets. It is intended to kickstart the dashboard design process and introduce our design patterns one group at a time, and it implies many iterations to negotiate design tradeoffs. While there are existing tools for dashboard creation (e.g., Tableau, PowerBI, Exploration Views [16], QualDash [17]), there is a need for greater support to guide design choices. For example, dashboard-authoring tools could offer support for a range of design patterns and genre templates. This could streamline dashboard design and allow designers to create different dashboard versions based on the same widgets. There might be further potential for automating dashboard creation [29] through recommender systems, e.g., as used with visualizations [34] and infographics [52]. Recommender systems should take into account levels of abstraction and composition. However, these are non-trivial challenges that require formal design rules, which in turn require further study. Dashboard users could be provided with options for personalization, sharing, bookmarking, or annotation. Personalization could allow a user to specify data they consider important and adapt the layout and size of components accordingly. Only a small number of dashboards support personalization through adding or moving/resizing widgets. A related problem is a lack of responsiveness to different screen sizes. 
There has been only a handful of studies and suggestions on how to make visualizations responsive [25], but responsiveness in dashboards should also include the levels of abstraction for data and visual encodings, eventually turning a static dashboard into an analytic dashboard. First, our research is limited by the set of dashboards we have curated. To ensure consistency with existing research, we started our collection from the existing dashboards used by Sarikaya et al. [44]. We open our dashboard collection and coding scheme for future research on our website. Our design patterns reflect close agreement among six coders and have been scrutinized through application in the workshop. We highlight that any design pattern collection (Sect. 2.3) is highly qualitative, in that it aims to describe solutions for reuse. This implies that patterns bear meaning and capture ideas that can be applied. We see our patterns as independent from each other, allowing hybrids, e.g., parallel structures in hierarchical structures, or hybrids of grouped, stratified, and open layouts. Our patterns and their frequencies (reported as % in Sect. 3) are representative of the dashboards we analyzed. We invite the community to extend our pattern collections and to ideate new patterns not currently present in dashboards. The scope of this paper is not to suggest 'good designs', since this requires empirical evidence; our patterns can only support iterating and analyzing design options. Dashboards have their distinct place in the visualization landscape, sharing many characteristics with other formats, such as infographics, multiple coordinated views, small multiples, and potentially data comics. 
We consider dashboards special, since they combine concerns captured by our design patterns: abstraction of data (i.e., extracting single values, derived values, or threshold judgments from a dataset), organization of information (layouts, hierarchy), grouping of elements and showing relations (hierarchy, grouping, color), and general design principles. Moreover, our work shows that the term 'dashboard' is used very widely, often including self-proclaimed dashboards that do not specifically fit any of the definitions or design patterns described in this paper. For example, #139 calls itself a dashboard but shows only one full-screen visualization at a time, requiring sequential navigation. However, we argue against a strict definition of the term 'dashboard', and rather see it as a way to communicate specific affordances of a visualization user interface in a specific context: the need for overview, control, and conciseness for decision making. The definitions presented by Stephen Few aim to support the design of 'good' dashboards, e.g., by emphasizing that information should be readable at a glance. It was beyond the scope of this work to determine what makes for a 'good' dashboard, and this will depend on many subjective and domain-specific factors. The diversity in dashboard design found here suggests a compelling need for more research into the strengths and weaknesses of different dashboard genres, echoing Sarikaya et al.'s call-to-action for more dashboard visualization research [44].

References
- Challenges, strategies and adaptations on interactive dashboards
- An interactive tracker for ceasefires in the time of covid-19
- Narrative design patterns for data-driven storytelling
- Design patterns for data comics
- Using dashboard networks to visualize multiple patient histories: a design study on post-operative prostate cancer
- Storytelling in information visualizations: Does it engage users to explore data?
- Interactive data visualization using focusing and linking
- The structure of the information visualization design space
- Creating effective learning analytics dashboards: Lessons learnt
- Toward design patterns for dynamic analytical data visualization
- An interactive web-based dashboard to track covid-19 in real time. The Lancet Infectious Diseases
- List of physical visualizations. www.dataphys.org/list, 2012
- Exploration views: understanding dashboard creation and customization for visualization novices
- Qualdash: Adaptable generation of visualisation dashboards for healthcare quality improvement
- Oblique Strategies
- Supporting clinical cognition: a human-centered approach to a novel ICU information visualization dashboard
- Information dashboard design: The effective visual communication of data
- Dashboard confusion revisited. Perceptual Edge
- The discovery of grounded theory: Strategies for qualitative research. Routledge
- Informed dashboard designs for microgrid electricity market operators
- Vizitcards: A card-based toolkit for infovis design education
- Techniques for flexible responsive visualization design
- Picturing science: Design patterns in graphical abstracts
- Effective dashboard design
- The feeling of numbers: Emotions in everyday engagements with data and their visualisation
- Vizdeck: self-organizing dashboards for visual analytics
- The impact of visualization dashboards on quality of care and clinician satisfaction: integrative literature review
- Propagating visual designs to numerous plots and dashboards
- Rapid development of a data visualization service in an emergency response
- Knowing and governing cities through urban indicators, city benchmarking and real-time dashboards
- Characterizing automated data insights
- Towards public health dashboard design guidelines
- Cityeye: Real-time visual dashboard for managing urban services and citizen feedback loops
- User-centered collaborative design and development of an inpatient safety dashboard
- A framework for analyzing and developing dashboard templates for small and medium enterprises
- Dashboards as a service: why, what, how, and what research is needed
- Review on dashboard application from managerial perspective
- Business dashboards: a visual catalog for design and deployment
- State of the art: Coordinated & multiple views in exploratory visualization
- Coronavirus pandemic (covid-19). Our World in Data
- What do we talk about when we talk about dashboards?
- A design space of visualization tasks
- Design of visualizations for human-information interaction: A pattern-based framework
- Narrative visualization: Telling stories with data
- The political dashboard: A tool for online political transparency
- Beautiful evidence
- Tailored information dashboards: A systematic mapping of the literature
- Connecting domain-specific features to source code: Towards the automatization of dashboard generation
- Datashot: Automatic generation of fact sheets from tabular data
- A review of dashboards for data analytics in nursing
- A review of dashboards in performance management: Implications for design and research
- Mapping the landscape of covid-19 crisis visualizations
- Sketchnote components, design space dimensions, and strategies for effective visual note taking