title: Artificial Intelligence, Accessible and Assistive Technologies: Introduction to the Special Thematic Session
authors: Draffan, E. A.; Heumader, Peter
date: 2020-08-10
journal: Computers Helping People with Special Needs
DOI: 10.1007/978-3-030-58796-3_7

Artificial intelligence (AI) has been around for at least 70 years, as have digital technologies, and yet the hype around AI in recent years has made some people wary of its impact on their daily lives. However, in this special thematic session authors illustrate how increased speed of data processing and the use of complex algorithms have boosted the potential for systems to help in unexpected ways, particularly where assistive technologies are concerned. The black-box nature of AI, with its apparent lack of transparency, may be alarming, but it has enormous potential to make digital content, services and systems more accessible and helpful for people with disabilities. The following series of papers proposes new and innovative ways of overcoming these concerns, with positive approaches to reducing barriers for those with disabilities.

The fact that companies and organisations can collect ever-increasing amounts of data about the digital lives of individuals, as well as about activities within our built environment, has proven to be exciting as well as disturbing. Tailoring experiences both offline and online sounds like a positive way to make everyday life easier. Often it is, especially when shopping for and ordering regularly required items is considered a chore. However, there are times when personalisation based on our online interactions crosses platforms and devices, affecting our reading choices and our understanding of what is available.
Targeted advertisements for products drawn from recent website browsing history can alarm those who do not understand the systems involved. The items offered may be unwanted and unhelpful, perhaps even causing distress where there are concerns around diversity, equity, and inclusion [1]. These issues have been shown to be particularly worrying where personal customisation requirements and individual preferences are overruled, or where data misrepresents minorities, including those with disabilities [2]. There have been examples of biased image recognition [3] and chatbots that fail to offer the expected help [4]. On the other hand, taking a positive stance, there are times when data can be used with algorithms that offer more accurate navigation around buildings for those who are blind, and helpful text summarisation for those with cognitive impairments coping with complex information. Natural Language Processing (NLP) and prediction models are being used for language translation and speech recognition [5], helping not only those who speak another language but also those with complex communication needs and specific learning difficulties such as dyslexia. Automatic video and audio transcripts and captions have improved and can offer accessibility for those with hearing impairments. Accuracy remains an issue in some fields [6] and with some translations [7], but we can expect that scientists applying machine learning, deep learning and neural networks, given the right data and algorithms, will eventually improve the output. Some of the challenges and positive aspects of AI in relation to access for those with disabilities are discussed under the following themes: improving access to digital content, recommender systems for enhancing access, and navigation systems to aid wayfinding.
For many years, individuals have worked to make digital content accessible for those with physical, sensory and cognitive impairments. The process requires not only adaptations to online and uploaded documentation, but also skills in understanding the needs of assistive technology users. Individuals come with a range of skills and linguistic abilities, and online content varies from that which is read for interest to essential information. Several reports offer guidelines for 'Easy Reading' [8], including the use of text simplification for those with intellectual disabilities. John Rochford, in his paper on the subject, has taken the six guidelines most appropriate for his participants and adapted online text using those rules together with neural machine translation, as if the original English needed translating into simplified English [9]. This approach has huge potential, as it would allow the same to happen in other languages, although it is accepted that machine translation differs from text simplification. Examples given included the fact that not all words translate into easier words, and many do not have a one-to-one correspondence. This is also true when looking at symbol-label-to-concept linking, as mentioned by Ding et al. in their paper on AI and Global AAC Symbol Communication. Symbol labels often have to be 'cleaned', with characters or word combinations removed, to allow for more accurate semantic linking. This is achieved using ConceptNet combined with Natural Language Processing (NLP) to build on the base from which the ISO standard (used by Blissymbolics) and a Concept Coding Framework can increase interoperability between symbol sets [10].
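The kind of label 'cleaning' described above can be sketched in a few lines. This is a hypothetical illustration only, not the pipeline used by Ding et al.: the function name `clean_symbol_label` and the specific normalisation rules (case folding, dropping bracketed qualifiers, underscores and leading function words) are assumptions about what such a step might look like before a ConceptNet lookup.

```python
import re

def clean_symbol_label(label: str) -> str:
    """Normalise an AAC symbol label before concept linking.

    A hypothetical sketch: strip punctuation, bracketed qualifiers and
    case so that labels such as 'to_eat!' and 'Eat (verb)' map onto the
    same lookup term.
    """
    label = label.lower()
    label = re.sub(r"\(.*?\)", " ", label)        # drop bracketed qualifiers
    label = re.sub(r"[_\-]+", " ", label)         # underscores/hyphens -> spaces
    label = re.sub(r"[^a-z\s]", "", label)        # remove stray characters
    label = re.sub(r"^(to|a|the)\s+", "", label)  # drop a leading function word
    return " ".join(label.split())

print(clean_symbol_label("to_eat!"))     # eat
print(clean_symbol_label("Eat (verb)"))  # eat
```

In a real system the cleaned term would then be queried against ConceptNet, and the returned concept identifier used to bridge between symbol sets.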
Those with literacy skill difficulties and/or cognitive impairments, as well as AAC users, would benefit from the combination of text simplification and text-to-symbol translation for web content, yet without accessible websites this cannot occur. There remains, therefore, the need for web accessibility checkers to help speed the process of ensuring ease of use and access to digital content. The addition of AI in the form of NLP and image recognition, as suggested by Draffan et al. in their paper, allows for warnings and visual representations showing aspects of web pages that cannot be checked automatically via the code. This includes overlaps on text enlargement, or alternative text that fails to represent what is actually in the picture. Assistive technology users depend on digital accessibility, but it is clear that barriers can be lowered further by adding AI-enhanced personalisation strategies to increase usability, such as text simplification and text-to-symbol translation, both of which have been mentioned in recent Web Content Accessibility Guidelines (WCAG), from which the statements or success criteria for web accessibility checks originate.

Recommender systems make use of personalised, context-sensitive data, or hybrid versions of this idea, with the help of machine learning: they compare one person's preferences to another set of similar preferences. In the case of the "Easy to Read Methodology" (E2R), which is designed to provide guidelines for easier-to-read documents, the comparison is made between the adaptations applied to the text and the content of the guidelines. Mari Carmen Suárez-Figueroa suggests that using NLP and machine learning to perform the analysis and transformation of documents in a (semi-)automatic fashion will reduce the need for time-consuming manual checks and allow for automated recommendations that will improve documents designed for those with cognitive impairments.
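The preference comparison at the heart of such recommender systems can be sketched with cosine similarity over rating vectors. This is a minimal illustration of the general technique, not any specific system from the session papers; the users, items and ratings are invented.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two preference vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical user ratings over the same five items (0 = unrated).
preferences = {
    "alice": [5, 3, 0, 1, 4],
    "bob":   [4, 3, 0, 1, 5],
    "carol": [0, 5, 4, 2, 0],
}

def most_similar(target, prefs):
    """Return the other user whose preferences are closest to the target's."""
    others = {u: v for u, v in prefs.items() if u != target}
    return max(others, key=lambda u: cosine(prefs[target], others[u]))

print(most_similar("alice", preferences))  # bob
```

Items rated highly by the most similar user but not yet seen by the target user would then become the recommendations.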
This process could well support the work of John Rochford mentioned above, once digital accessibility checks have been undertaken. As a recommendation system it could also guide users, building on the research carried out by Yu et al. on book recommendations for those with visual impairments, for instance as a filter indicating whether a book has easy-to-read content. However, Yu et al. found that because their users could not scan web pages when searching online libraries, they had to depend on listening to content read aloud in a linear fashion, which did not help them filter out books of no interest. This meant their preferences were often missed and recommender or prediction models failed. Once again, the research plan included a context-aware recommendation algorithm. In this case, the strategy was based on fusion preferences (combining data from the user's behaviour with content data) and user attention. Predictions were based on users' interests and the content of available books that matched those preferences, in order to reduce the number of random books found. The research succeeded in speeding up the discovery of books of interest, and once again showed how AI can enhance outcomes. The same principles also work in a learning situation where user behaviours are gathered. A user plays a game; when mistakes are made they may not move to the next level, but are asked to practise the tasks again. When they are successful they jump to another level, gain points, and are motivated to continue playing. Karaton: An Example of AI Integration within a Literacy App for Education has been designed with these ideas in mind, using a decision tree based on expert knowledge of reading development. The Karaton mini-games aim to encourage children with poor literacy skills to keep challenging themselves. In the past, teachers have adapted the games to suit individual skills, basing their changes on the data already collected.
Now plans have been made to incorporate AI prediction models that will recommend which mini-game to play next and feed the results back to the knowledge base. This will not only allow teachers to monitor progress but also enable machine learning to take place, continually improving the app's ability to provide accurate steps and motivational comments. Meaningful feedback is paramount with any recommendation system, as inaccurate or poorly defined results tend to mean the user leaves a website after an unsuccessful search, or gives up on a game. Having discussed the use of AI for improving digital content and how recommendation systems can provide helpful feedback to users, the two final papers present the practical aspects of finding one's way around indoor premises with audio feedback in a 3D environment. Wayfinding outdoors for those who are blind or have visual impairments has been supported by Global Positioning Systems (GPS) and smartphones with speech synthesis. However, these systems do not help in enclosed spaces, and two alternative systems have been presented. Haoye Chen proposes an indoor semantic visual positioning system using 3D reconstruction and semantic segmentation of RGB-D images captured from a pair of wearable smart glasses. The Red, Green, and Blue (RGB) bands of light caught by the camera, with simultaneous depth (D) sensing, provide machine learning-based 3D information with audio for those who cannot see. The user has more real-time information about objects in the vicinity, with sound prompts to ensure avoidance tactics are possible. This can be reassuring when navigating in an unknown space, although these cameras may not work with all materials or in certain lighting conditions.
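The final step of such a system, turning detected objects and their depths into spoken prompts, can be sketched simply. This is an invented illustration, not Chen's implementation: the function `audio_prompts`, the tuple format and the two-metre warning threshold are all assumptions, and a real system would stream detections continuously rather than process a static list.

```python
def audio_prompts(detections, warn_distance=2.0):
    """Order detected objects nearest-first and phrase a spoken prompt.

    `detections` is a list of (label, distance_in_metres, bearing) tuples,
    a hypothetical stand-in for the output of semantic segmentation plus
    depth sensing.
    """
    prompts = []
    for label, distance, bearing in sorted(detections, key=lambda d: d[1]):
        urgency = "warning, " if distance <= warn_distance else ""
        prompts.append(f"{urgency}{label} {distance:.1f} metres {bearing}")
    return prompts

detections = [("door", 4.0, "ahead"), ("chair", 1.2, "left"), ("table", 2.5, "right")]
for p in audio_prompts(detections):
    print(p)
# warning, chair 1.2 metres left
# table 2.5 metres right
# door 4.0 metres ahead
```

Prioritising the nearest obstacle first reflects the avoidance goal described above: the prompt the user hears first is the one they must act on soonest.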
Nevertheless, this system requires no pre-installed fixtures or fittings such as location-based Radio-Frequency Identification (RFID) tags or Bluetooth-based beacons that connect to smartphones. Vinod Namboodiri points out that planning the placement of a GuideBeacon system is important in order to ensure smooth navigation around an area once a user locks onto the system. IBeaconMap provides an automated indoor space representation for beacon-based wayfinding through the use of floor plans, avoiding expensive mapping of the areas in the building. The provision of real-time, location-specific information for those who are blind, or for anyone confused by the complex layout of a large indoor space, is invaluable, and the options available with the use of AI have enhanced access to the built environment in recent years.

There have been enormous changes in the way developers have adapted their assistive and access technologies to encompass AI. Although concerns remain about the way data has been used, often with a bias that clearly has an impact on those with disabilities, the papers discussed in this special thematic session highlight the potentially constructive areas for its use. From AAC to web content, and from literacy skill support to wayfinding, the research has shown that it is possible to innovate in ways that increase access to both digital and built environments. There may still be challenges in the way we evaluate the outcomes from AI systems, and yet more prospects for future work. There may even be the concern that technology-driven systems still lack real conceptual understanding when issues arise about the barriers that remain. However, we have to keep removing those barriers with support from AI, accessible and assistive technologies.

References
1. Artificial intelligence, advertising, and disinformation
2. Plug and Pray? A disability perspective on artificial intelligence, automated decision-making and emerging technologies
3. AI bias in gender recognition of face images: study on the impact of the IBM AI Fairness 360 toolkit
4. Chatbots: an interactive technology for personalized communication, transactions and services
5. A survey of the usages of deep learning for natural language processing
6. Automatic transcription software: good enough for accessibility? A case study from built environment education
7. Speech recognition and synthesis technologies in the translation workflow
8. IFLA: Guidelines for easy-to-read materials. International Federation of Library Associations and Institutions, IFLA Professional Reports 120. Revision by
9. Text simplification using neural machine translation. Association for the Advancement of Artificial Intelligence (AAAI)
10. AAC vocabulary standardisation and harmonisation