authors: Sen, Taylan; Haut, Kurtis; Lomakin, Denis; Hoque, Ehsan
title: A Mental Trespass? Unveiling Truth, Exposing Thoughts and Threatening Civil Liberties with Non-Invasive AI Lie Detection
date: 2021-02-16

Imagine an app on your phone or computer that can tell if you are being dishonest, just by processing affective features of your facial expressions, body movements, and voice. People could ask about your political preferences, your sexual orientation, and immediately determine which of your responses are honest and which are not. In this paper we argue why artificial intelligence-based, non-invasive lie detection technologies are likely to experience a rapid advancement in the coming years, and that it would be irresponsible to wait any longer before discussing their implications. Legal and popular perspectives are reviewed to evaluate the potential for these technologies to cause societal harm. To understand the perspective of a reasonable person, we conducted a survey of 129 individuals, and identified consent and accuracy as the major factors in their decision-making process regarding the use of these technologies. In our analysis, we distinguish two types of lie detection technology, accurate truth metering and accurate thought exposing. We generally find that truth metering is already largely within the scope of existing US federal and state laws, albeit with some notable exceptions. In contrast, we find that current regulation of thought exposing technologies is ambiguous and inadequate to safeguard civil liberties. In order to rectify these shortcomings, we introduce the legal concept of mental trespass and use this concept as the basis for proposed regulation.

"Beyond my expectation, thru uncontrollable factors, this scientific investigation became for practical purposes a Frankenstein's monster, which I have spent over 40 years in combating." These are the words of the first U.S. policeman with a PhD in science, John Larson, reflecting on his invention of the contemporary polygraph shortly before his death (Alder 2007). Larson was troubled by the improper use of and the unreasonable level of trust placed in the polygraph, and by the harm caused to the many who were unfairly accused of dishonesty. Over the years since the first practical application of the polygraph in 1921, numerous job applicants were denied employment and government employees lost their jobs (Grubin and Madsen 2005). Over two million Americans were being tested each year by the 1980s (Goldzband 1990). It was not until the introduction of the Employee Polygraph Protection Act in 1988 that use of the polygraph and similar devices was banned from most employment settings. While it took the U.S. legislature 67 years to formally regulate the polygraph, the Federal judicial system barred polygraph-like devices in their first application in the courtroom in 1923 (US Court of Appeals D.C. 1923). The court in Frye v. United States established that in order to "admit expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs". The D.C.
Circuit court denied defendant Alfonso Frye's attempt to use the blood pressure deception test (the precursor to Larson's polygraph, based on periodic blood pressure readings alone), administered by Harvard psychologist Dr. William Marston, to establish his innocence of the murder charges he faced. The "Frye standard" was rapidly adopted by most states' judicial systems (Jensen 2002). In later analysis in this paper, we show that a legal dichotomy still exists between areas of the law which are well defined and others which are largely ambiguous with regard to lie detection technologies, and that this ambiguity is likely to cause harm. Larson's invocation of Dr. Frankenstein's creation of an artificially intelligent creature which becomes a murderous monster is particularly prescient in light of the advances AI systems have made in recent years. While we benefit from ever more surprising contributions to our daily lives - AI-powered thermal camera systems are actively being used to screen passengers for fevers associated with coronavirus (Hochreutiner 2020), and systems can even extract heart rate from a common video stream or WiFi signals for health monitoring (Wang, Pun, and Chanel 2018), (Lee et al. 2018) - AI systems, like Frankenstein's monster, also bring the potential to cause substantial harm. With the increasing powers of noninvasive AI also come new methods for invasion of privacy and circumvention of our rights against unreasonable search. For example, a recent AI system purports to be able to predict one's sexual orientation from one's facial features (Wang and Kosinski 2018). It is easy to foresee the harm that can result from exposing one's private sexuality, considering the case of the Rutgers University student who died of suicide after his roommate set up a webcam and publicly broadcast a private homosexual encounter with another student (Pilkington 2010). Similarly daunting is the recent growth in the use of AI systems in police surveillance. The Chinese government has been accused of oppression against the Uighur minority in the Xinjiang province as AI facial recognition systems have been extensively deployed (BBC 2020). Chinese authorities state that use of such technologies is necessary to fight terrorism and that similar surveillance systems were instrumental in enforcing the quarantines that helped halt the progression of coronavirus throughout China (Fong, Dey, and Chaki 2020). How can we fix ambiguities in the law to allow us to benefit from AI sensing technology advances while ensuring that they are not abused? Reynolds and Picard were among the first to examine the related question of "Would it be ethical for a computer to sense a user's emotions?" (Reynolds and Picard 2004). Through an online survey (N=125) they found that respondents were more likely to consider a system that collects and exposes one's emotions as ethical if there is an explicit contract to which users consent. Calls have been made for "design contractualism," in which systems capable of reading users' emotions are designed to obtain consent (Pitt 2012). In addition to supporting this contractual premise, Reynolds and Picard also promote an analysis of the social dimensions of how a product will be used in determining whether such an application is ethical (Reynolds and Picard 2005), (Reynolds and Picard 2004).
In situations where consent is not provided, analysis of AI sensing systems in a civil liberties context is largely limited to facial recognition (Raji et al. 2020), (Brey 2004), and has even been the focus of a recent congressional oversight committee hearing (Congress 2019). In this paper we examine the progression of lie detection technologies and consider to what extent US law currently regulates coming technologies. To aid our analysis, we define two types of lie detection technology: truth metering, which involves using a device to evaluate an individual's level of belief in one of their statements, and thought exposing, which involves using a device to predict an individual's inner thoughts. In summary, we generally find that truth metering is already largely within the scope of existing US federal and state laws, albeit with some notable exceptions. In contrast, we find that current regulation of thought exposing technologies is ambiguous and inadequate to safeguard privacy and civil liberties. In order to rectify these shortcomings, we introduce the legal concept of "mental trespass" and use this concept as the basis for proposed legislation. More specifically, in this paper we argue that:
• Development of non-invasive, AI-based lie detection technologies is likely to progress rapidly in the near future, and no law or government effort is likely to halt their production, distribution, and use (in many cases the government is investing heavily in the advancement of such technologies).
• Lie detection technologies carry with them much potential for individual harm in terms of loss of privacy, wrongful criminal conviction, and unfair bias.
• While the current legal environment generally already regulates truth metering technologies, it is largely ambiguous with regard to the legality of uses of thought exposing technologies.
• In order to mitigate the potential harms such technologies may bring, we recommend the introduction of a regulatory federal "Mental Trespass Act", which calls for a general ban on non-consensual use of thought exposing technology, but allows for non-offensive uses of truth metering devices.
Lie detection was essential enough to human civilizations that it appears in the Code of Hammurabi, one of the very first instances of written law, from circa 1754 BC (Roth 1995). Translations of preserved tablets of the Code state that questions of honesty were to be resolved through what has been termed trial by ordeal: "If anyone bring an accusation against a man, and the accused ... jump into the river ... if he sink in the river his accuser shall take possession of his house..." (Roth 1995). From what started out as random chance and religious belief, the state of the art in lie detection and related law has progressed to include increasingly powerful scientific techniques, including advanced sensing tools and more refined questioning techniques, as depicted in the timeline in Fig. 2. It took approximately 800 more years after Hammurabi before the first glimmer of scientific legitimacy in lie detection appeared, found in the ancient Hindu text the Vedas. Loosely based on the involuntary fight or flight response (which causes individuals to go white as blood is diverted from body extremities to the heart and lungs), the Vedas describes how to spot a murderer: "[The poisoner] . . . does not answer questions, or they are evasive answers; he speaks nonsense . . . his face is discolored . . . " (Trovillo 1939).
The scientific progression of lie detection continued in the 3rd century BC, as renowned physician Erasistratus used pulse, skin temperature, and skin pallor to correctly detect the lies of Prince Antiochus, as the prince tried to conceal his passionate love for his father's new wife (Trovillo 1938), (Amsel 2019). An underlying premise regarding lie detection began to be recognized. Charles Darwin wrote in his 1872 book, The Expression of Emotions in Man and Animals, that "...actions become habitual in association with certain states of the mind, and are performed whether or not of service in each particular case..." (Darwin 1873). We recognize a fundamental premise of lie detection: a person's internal state of mind uncontrollably leaks out into the externally observable world when appropriately probed. Indeed, this premise must hold for a given lie detection technique to work. By appropriately probed we recognize that specialized questioning techniques may be necessary to cause honest and dishonest subjects to exhibit detectable differences in observable behavior. This definition additionally brings attention to the fact that advanced tools may be useful in observing these subtle differences. The use of tools and specialized questioning techniques in lie detection is demonstrated by perhaps the most well-known and widely used lie detection device, the contemporary polygraph. Like Erasistratus's technique, the common polygraph tracks the subject's heart rate and respiration. The modern polygraph, however, has two notable improvements over Erasistratus's approach: 1) additional sensors for blood pressure, skin conductivity, and respiration rate; and 2) a formal questioning technique, known as the control question test. The polygraph sensors collectively provide a measure of the subject's physiological arousal. Crucially, the control question test begins with questions unrelated to the matter for which the lie detector is being applied, including baseline questions and control questions. Baseline questions are trivial questions used to indicate the subject's arousal at rest. Alternatively, control questions are designed to create a strong physiological response in most people, for example, "Have you ever stolen office supplies from work?" or "Have you ever cheated on your taxes?" The unrelated questions are followed with relevant questions, which are pertinent to whatever is being investigated. The underlying theory of the control question test is that someone who is lying is more likely to be nervous during the relevant questions than during the control questions, compared to an honest subject who is expected to have a stronger level of arousal during the control questions (Raskin and Honts 2002), (Bradley and Klohn 1987); a simplified numerical sketch of this comparison is given below. Other questioning techniques such as the guilty knowledge test (GKT), which relies on strategic use of information only a guilty person would have, have been developed and compared with the control question test (Myers and Arbuthnot 1997). The GKT has been particularly used in systems which rely on advanced sensing, including electroencephalograms (EEG) and functional MRI (fMRI).
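To make the control question test comparison described above concrete, the following is a minimal, illustrative sketch. It assumes per-question arousal scores (for example, normalized skin conductance readings) are already available as numbers; the variable names, example values, and simple mean-difference rule are hypothetical and do not reflect standardized polygraph scoring, which is considerably more involved.

```python
# Illustrative control question test (CQT) comparison: relevant vs. control arousal.
# Arousal values below are hypothetical placeholders for one subject.
from statistics import mean

def cqt_score(control_arousal, relevant_arousal):
    """Positive: more arousal on relevant questions (pattern associated with deception).
    Negative: more arousal on control questions (pattern associated with honesty)."""
    return mean(relevant_arousal) - mean(control_arousal)

control = [0.62, 0.71, 0.58]    # e.g., "Have you ever stolen office supplies from work?"
relevant = [0.55, 0.60, 0.52]   # questions about the matter under investigation
print(cqt_score(control, relevant))  # negative here, i.e., the pattern expected of an honest subject
```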
In addition to the advances in questioning techniques and sensors, perhaps the most revolutionary recent advance has been in data analysis, including both raw computing power and advanced machine learning techniques. The world's first real supercomputer was Control Data Corporation's CDC 6600, developed in 1964 (Hosch 2018). The computer was enormous, the size of multiple people, and state of the art - miles ahead of the competition. Three times as fast as its predecessor, it could run 3 million floating point operations per second (3 megaFLOPS). It cost the equivalent of $60 million in 2018 (Spicer 2000; Bureau of Labor Stat. 2018). The CDC 6600 was so powerful the word "supercomputer" was coined to describe it. If someone were to tell its creator, Seymour Cray, that in 50 years' time a processor the size of his forearm would cost 50,000 times less and be 2 million times faster, he might not believe them. But the NVIDIA GeForce GTX Titan X, released in 2015, was exactly that (NVIDIA 2018; Hruska 2016). Piggybacking on the recent hardware advances, several algorithmic successes have been made in the field of computer vision. In 2012, "AlexNet" astounded researchers with its accuracy in image classification and demonstrated the power of convolutional neural networks for the task (Krizhevsky, Sutskever, and Hinton 2012). In 2014, the invention of generative adversarial networks utilized deep learning to generate realistic images (Goodfellow et al. 2014), which recently became embroiled in controversy with their application in deepfakes. Researchers and software engineers working with computer vision have an incredible array of tools with which to develop new technologies in the coming years. We highlight the progress in computer vision specifically because these advances, being inherently non-invasive, enable lie detection to be performed at a distance. Additionally, one of the major factors limiting progress in deception detection is the lack of good data. However, with recent advances in Internet technologies, techniques are available to gather data on deception. For example, Sen et al. developed a system for gathering video deception data via crowdsourced individuals (Sen et al. 2018). In addition, US government entities have very recently expressed the desire to gather data sets on "credibility assessment", which could be used to develop deception detection technologies. In fact, during 2019, the Intelligence Advanced Research Projects Activity (IARPA) put out a grand challenge concerning the collection of deception data. The Credibility Assessments Standardized Evaluation (CASE) Challenge formally called for a protocol to standardize how these datasets are gathered and accessed. Additionally, governments have started pouring vast amounts of funding into projects which expand their powers of surveillance. Backed by the Chinese and Russian governments, AI startup Megvii raised $460 million for the development of facial recognition technology. We emphasize these specific examples to illustrate the non-invasive nature of these developing technologies, the advancements in data collection procedures and capabilities, and government interest. Because of these qualities, it is inevitable that accurate AI-based lie detection will soon be upon us. We next analyze current laws regarding the public use of non-invasive deception detection technology without an observed party's consent. While our focus is on U.S. law, it is worth noting that the 1948 Universal Declaration of Human Rights has explicit language regarding human rights to "privacy". More specifically, Article 12 of the declaration states "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence...".
While what constitutes "arbitrary interference" and "privacy" is left undefined, it is noteworthy that any notion of such a privacy right was considered so important as to be codified in the Universal Declaration. In the United States, perhaps the most relevant legal issue with regard to public deception detection arises under the 4th amendment. The 4th amendment establishes the "right of the people to be secure in their persons...against unreasonable searches" and has been interpreted to prohibit searches when there is a reasonable expectation of privacy (Amar 1994). Several cases have established that in general there is no reasonable expectation of privacy for things which are in plain view in a public area. For example, the U.S. Supreme Court held that garbage that is left out on the curb can be searched without a warrant in California v. Greenwood (Herdrich 1988), (Simpson 1989), (Cunis 1988). This has been extended to include use of some specialized equipment, particularly the use of a plane for aerial observation of someone's backyard in California v. Ciraolo (Falcone 1986), and observation of an open field in an industrial complex with a high definition camera in Dow Chemical Co. v. United States (Joyce 1982), (Ruzi 1988). The court seemed to indicate the relevance of whether the equipment was available to the public, in one case finding that the EPA did not violate the 4th amendment when it "was not employing some unique sensory device not available to the public". An analysis of the smells during a routine traffic stop by a specialized drug-sniffing dog was also found not to constitute an unreasonable search in Illinois v. Caballes (Dery III 2005), (Froh 2002). However, the Supreme Court held in Kyllo v. United States that viewing a person's home from outside with a thermal imaging camera (to determine if high-temperature drug-growing lights were used) was indeed a violation of one's "reasonable expectation of privacy". In light of these Supreme Court cases regarding 4th amendment rights, how would we expect the use of a video-based lie detection apparatus to play out? One perspective is that an individual's facial expressions are in plain view and thus do not carry a reasonable expectation of privacy, as in California v. Ciraolo regarding a person's backyard, or someone's garbage on the curb in California v. Greenwood. It is likely that the camera used for deception detection need not be more advanced than the high resolution camera deemed acceptable in Dow Chemical v. United States. However, lie detection does involve use of state of the art AI-driven algorithms and computer vision techniques. It seems conceivable that a court could find that such algorithms uncover someone's internal state in an invasive way. Additionally, we may expect a court to consider, as it did in Dow Chemical Co. v. United States, whether the equipment used is publicly accessible. Thus, whether such lie detection technology is made public or not will possibly affect whether its use constitutes a 4th amendment search (e.g., if it is made available to the public, the Government would not be using "specialized technology"). However, it should also be noted that 4th amendment issues are limited to the government (or people working on behalf of the government) and do not apply to the public at large.
In addition to the potential 4th amendment issues, the use of lie detection technology in a court of law by the prosecution may bring up Constitutional Law issues with regard to the 5th amendment protection against self-incrimination. The 5th amendment provides that "[n]o person shall be ... compelled ... to be a witness against himself" (Amar and Lettow 1995), (Rubin 2003). We foresee that use of a lie detection technology without a subject's consent may be interpreted as compelled testimony. However, the courts have interpreted the 5th amendment narrowly, giving the prosecution the right to compel the accused to provide a password to encrypted computer data (Wachtel 2013), (Cauthen 2017). Additionally, the courts have determined that a suspect may be compelled to produce fingerprints, blood, and fingernail scrapings without violating the 5th amendment (Inbau 1998). Further, courts have even found that compelling a witness to provide a voice sample for identification does not trigger 5th amendment protections (Weintraub 1956). Thus, we believe that it is unlikely that a court would find use of an AI-driven lie detection technology to be a violation of one's 5th amendment rights. However, in certain contexts perhaps this is not the case. For example, Thompson argues that highly invasive lie detection technology, such as unconsented application of fMRI, is likely to violate 5th amendment due process law as it "shocks the conscience" (Thompson 2007). Therefore, we take the stance that the degree of invasiveness is what fundamentally defines this question of violating the 5th amendment. We bring to light in this paper that highly accurate non-invasive lie detection technologies are not only imminent, but that their risk of infringing upon our civil liberties is much greater. This is due to the fact that non-invasive lie detection devices are able to perform lie detection on non-consenting individuals from a distance. Furthermore, it is not clear whether these noninvasive methods would shock one's conscience enough to violate the 5th amendment, using Thompson's terminology. The Employee Polygraph Protection Act (EPPA) prevents private employers from requiring job applicants or current employees to submit to a lie detector of any kind, but allows polygraphs to be used by certain sectors, namely government and security positions. However, according to its website, the fMRI-based lie detection company No Lie MRI "measures the central nervous system directly and such is not subject to restriction by these laws". As noted by Greely and Illes, the language used in the provision of the legislation is broad enough that loopholes like this are possible (Greely and Illes 2007). Without an explicit amendment or judicial review, No Lie MRI could continue to offer its services to employers, violating the intention of the EPPA, but not the text of the law. The EPPA is limited to employer-employee relationships, and is silent with regard to public use. The strongest limitations on the public use of a non-invasive lie detection technology appear to arise from state law. While it is difficult to analyze each state's laws individually, a concise restatement of the preferred rules used by a majority of the states is available in the "Restatement of Law", written by the American Law Institute (ALI).
The Restatement provides the law of Intrusion Upon Seclusion, commonly referred to as "invasion of privacy", which makes liable one who "intentionally intrudes, physically or otherwise, upon the solitude or seclusion of another or his private affairs or concerns...if the intrusion would be highly offensive to a reasonable person". This liability extends even when there is no publication or use of the information obtained in violation. In general, surveillance from a public place is not intrusion upon seclusion; however, exceptions to this rule exist. In summary, it is unclear whether, when accurate noninvasive lie detection arrives, it will be legal to use on non-consenting individuals caught unawares. In the federal court system, it is unclear whether even a highly accurate lie detector would be admitted as evidence. Currently, polygraph tests and their results are almost entirely inadmissible in a federal court under evidentiary rules. Polygraph results are what is known as "highly prejudicial," meaning that regardless of the test's accuracy or even its relevance to the case at hand, hearing about it will bias the jury. If the polygraph indicates that the defendant has lied, despite its questionable accuracy, a jury may treat that as definitive proof that the defendant lied. Additionally, if jurors believe the defendant lied about material facts related to the case, that may indicate proof of guilt to them, no matter how relevant or irrelevant those facts are to the defendant's guilt or innocence. For these reasons, it is possible that even a 99% accurate lie detector could be excluded from evidence, due to a judge fearing the jury will treat it as 100% accurate. There are currently two standards by which scientific evidence can gain admission into the courtroom, depending upon the jurisdiction, known as the Frye standard (introduced in the Introduction) and the Daubert standard. The Frye standard provides that in order to be admitted, the scientific basis for the evidence "must be sufficiently established to have gained general acceptance in the particular field in which it belongs". Per this standard, AI-based lie detection would almost certainly not be admitted in its current state, as the underlying technology is still developing and the accuracy of this method of lie detection has not been well established. The Daubert standard, which has largely superseded the Frye standard in both federal court and most state courts, sets stricter guidelines for evidence being admitted, but leaves the decision up to the court rather than the scientific community. This includes the Frye standard of general acceptance as well as whether the scientific evidence has been tested, whether it has been peer reviewed, and whether it has a high rate of error. AI-based lie detection would also likely be kept out of courtrooms according to this standard due to its current lack of widespread testing and peer review. These two standards ensure that the courts are well equipped to keep potentially inaccurate scientific evidence out of the courtroom. Whether or not a technology is admitted into the courtroom is of the utmost importance for the following reason. Historical review shows that once a technology is deemed legitimate (e.g., fingerprint analysis) or questionable (such as the polygraph), such characterizations are unlikely to be changed (Thompson 2007). A major issue with the polygraph was raised in case law, with the Oregon Supreme Ct.
finding "the use of the polygraph ha[s] the potential to dehumanize parties and witnesses, treating them or as 'electrochemical systems to be certified as truthful or mendacious by a machine.'" (Peterson 1989). The Daubert standard, similar to the Frye standard has almost entirely prevented polygraph admissibility (McCall 1996) . Given that recent legal analyses argue that fMRI should not be allowed at this time (Langleben Prior analysis has also come to the conclusion that looking at technologies like fMRI through analogy with blood test and/or forced testimony is inappropriate, arguing that "the implicit assumption of mind-body dualism, which underlies this thinking, is dated and, most likely, no longer tenable" (Thompson 2007). Scholars have argued the importance of considering legal implications of an advancing technology before it becomes ubiquitous, with Thompson stating "if the existing scientific literature is indeed a harbinger of an important new technology, it will be to society's benefit that some thought have been put into its implications before its wide scale deployment." (Thompson 2007) . All in all, the topic of advanced lie detection has received recent attention in ethical and legal contexts (Langleben and Moriarty 2013), (Farah et al. 2014) , (Greely and Illes 2007) , (Tennison and Moreno 2012), (Moreno 2009). However, precise definitions of the technologies in question and proposed legal doctrines offering a solution have yet to be fleshed with enough granularity. With the legal doctrine and case history being classified as ambiguous at best, there is a clearly a strong societal need to formally define what should be allowed regarding these evolving technologies. To understand public opinion on the usage of these lie detection technologies, we sampled the population by conducting a survey using the crowdsourcing platform Amazon Mechanical Turk. We included demographic information on the survey and based on the responses, launched multiple surveys with participation requirements to ensure that the demographic distribution of our respondents resembled the demographic distribution of the United States. (See appendices for side by side comparisons of race, gender, political affiliation, education level and age.) We believe matching the distribution is essential for obtaining not only a reliable sample, but one where we can reasonably extrapolate to claim any kind of generalizable opinion. We monitored the quality of the survey responses by implementing a control question and eliminated all survey responses where the desired condition did not hold (e.g. "answer strongly agree to this question"). We also incorporated a free response question to our survey and removed responses where the length of the response was less than 10 characters in length. After all unsatisfactory data points were removed, we were left with N = 129 quality responses. We set out to investigate whether public opinion was in favor of or opposed to the legality of using these technologies on an individual without first obtaining their consent (crucially important for noninvasive lie detection technologies as they can be used on an individual caught blissfully unaware). The results from the survey responses were strongly indicative of opposition to unconsented usage. We combined the agree and strongly agree responses to find the number in favor as well as the disagree and strongly disagree responses to find the number opposed. 
33 people were in favor, 69 people were opposed, and 27 were neither for nor against this usage case. We conducted a proportions statistical test with our null hypothesis being that there is no difference in public opinion regarding this question (i.e., the number in favor is equal to the number opposed). After running the statistical test, we obtained p=0.0001, or 1/100th of a percent (0.01%), the probability of observing a split at least this lopsided if there were truly no difference in public opinion. We thus reject the null hypothesis of there being no difference in public opinion and accept the alternative hypothesis that there likely is a difference (meaning a majority of people are opposed to unconsented use of these lie detection technologies); one way such a test could be computed is sketched below. Based on public opinion, there is certainly concern over unregulated lie detection technology being used maliciously, and we have an obligation as a society to mitigate that outcome. We are hopeful that our proposed "Mental Trespass Act" and recommendations for updating the language in the EPPA to reflect our technology definitions would greatly aid this communal effort.
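For concreteness, the following is a minimal sketch of one way such a two-sided proportion comparison could be computed from the counts above. The authors do not specify the exact test or software they used, so the value produced here (an exact binomial test) may differ slightly from the reported p=0.0001.

```python
# Two-sided test of whether the in-favor/opposed split could plausibly be 50/50.
# Counts are those reported above; neutral responses are excluded, as in the text.
from scipy.stats import binomtest

in_favor, opposed = 33, 69
n = in_favor + opposed
result = binomtest(in_favor, n, p=0.5, alternative="two-sided")
print(f"p-value = {result.pvalue:.5f}")  # far below 0.05, so the 50/50 null is rejected
```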
In providing legal recommendations on how to mitigate the potential harms and ambiguity in the field, we first define two types of relevant technology as well as the different categories of their usage. We distinguish two major classes of lie detection tools: (1) accurate truth metering, and (2) accurate thought exposing (depicted in Figure 1). Accurate truth metering is defined as the use of a device to measure an individual's level of belief in an intentional statement made by the individual, with the device usage having an accuracy exceeding typical human performance. A statement broadly includes spoken utterances, written text and drawings, bodily gestures, and other forms of communication. An intentional statement requires that the statement maker has the mental intent to make the statement. Thus, a spontaneous gasp of surprise, or unconscious blushing after hearing a question, is not an intentional statement. In defining accurate, we use an excedat-hominem standard, i.e., a level of accuracy which clearly exceeds typical human ability. Thus, in defining an accurate truth meter, we consider the numerous studies on human performance regarding lie detection and note that this level of accuracy has been found to be approximately 54% (Bond Jr and DePaulo 2006), even amongst expert judges (Bond Jr and DePaulo 2008). Accurate thought exposing is defined as the use of a device to expose an individual's thoughts, without the individual's consent, with the device usage having an accuracy exceeding typical human performance. Accurate thought exposure specifically includes instances of questioning a suspect without consent and accurately measuring the suspect's physiological response to the questions. As with the definition of truth metering, the definition of accurate thought exposure requires a level of accuracy which clearly surpasses human ability. A primary distinction between truth metering and accurate thought exposing is that truth metering requires an overt/intentional statement by the individual regarding the issue being observed (the top portion of Fig. 2 illustrates this distinction). For example, asking an individual what time it is and evaluating whether they are being honest about the time involves only truth metering. However, if one then uses a system to gauge the level of anger in their voice, the technology has crossed the boundary into the realm of thought exposure, because the overt response to the question being asked doesn't pertain to anger. Similarly, if an individual is talking out loud to others in a public area of his/her own accord, and we evaluate the honesty of each of his/her overt statements, we are truth metering. However, if the individual's statements do not directly involve their emotions, and we determine that the individual feels high levels of arousal, we are thought exposing (noting that a human observer would typically not be able to discern that information). We concur with Greely and Illes that lie detection technologies and services must be regulated to prevent harm. Specifically, we believe that a federal Mental Trespass Act should be passed which:
1) Provides a general ban on the use of "accurate thought exposing" on an individual without the individual's consent;
2) Makes an exception to this ban for use of "accurate truth metering" on individuals in a public space, as long as the particular usage would not be found offensive by a reasonable person;
3) Updates the Employee Polygraph Protection Act to explicitly include "accurate thought exposing" and "accurate truth metering", even when such devices are noninvasive.
As much as these lie detection-related technologies have the potential to infringe upon the civil liberties of the people in a malevolent way, there is a plethora of instances where the proper use of sophisticated, AI-driven sensing technologies and their associated algorithms can provide many benefits. Consider the infamous picture of the "MAGA teen" Nicholas Sandmann (Figure 3) that caused a recent social media and news firestorm as news commentators, actors, and numerous others joined in for wave after wave of vilification over Nicholas Sandmann's alleged harassment of Native American Nathan Phillips (Grinberg 2019).
Figure 3: Infamous MAGA "smirking teenager" (Noland 2020). a) Nicholas Sandmann's single-frame expression taken out of context helped fuel a firestorm of rebuke even though automated facial expression analysis detects no contempt; b) analysis of another image from the event suggests that Sandmann was experiencing more fear and surprise.
Attention was drawn to Sandmann's expression, one CNN commentator tweeting "Have you seen a more punchable face", as others issued Sandmann and his classmates death threats (Richardson 2020). Yet, an analysis of Sandmann's face with facial expression analysis tools suggests that he was experiencing amusement rather than contempt. Indeed, later reports validated Sandmann's assertions that he and his fellow classmates were not harassing Native American Nathan Phillips, but rather that the students were victims of harassment themselves (Soave 2020) (partially evidenced by Figure 3b, indicating that Sandmann was feeling more afraid and caught off-guard). Had the news media used a noninvasive AI facial expression evaluation tool, this debacle (and other false news propagation like it) could perhaps have been avoided (a minimal sketch of such an analysis is given below).
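As a hedged illustration of the kind of facial expression analysis referred to above, the sketch below uses the open-source DeepFace library to score the emotions in a single photograph. The file name is a placeholder, and DeepFace's return format varies across versions, so the result is accessed defensively; this is not necessarily the specific tool used to produce the analysis in Figure 3.

```python
# Emotion-score estimate for a single news photo using the open-source DeepFace library
# (pip install deepface). "frame_from_event.jpg" is a hypothetical placeholder file name.
from deepface import DeepFace

result = DeepFace.analyze(img_path="frame_from_event.jpg", actions=["emotion"])
faces = result if isinstance(result, list) else [result]  # return type differs by version
for face in faces:
    scores = face.get("emotion", {})  # e.g., {"fear": 12.3, "surprise": 40.1, ...}
    top3 = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]
    print(face.get("dominant_emotion"), top3)
```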
Although the Frye and Daubert standards would certainly keep accurate truth metering technologies out of the courtrooms, there are approaches to circumvent the issues those standards pose and to maximize the probability of obtaining justice in our Nation's court proceedings. We demonstrate these aspects using three components: an interview with a judge, essential design primitives for developing accurate truth metering technologies, and an example illustrating the ramifications of respecting an individual's cultural background.
Interview with Judge
In order to get an expert opinion on the potential impact advances in lie detection technologies could have on the courts, we interviewed standing County Judge Dennis Cohen of Livingston County, New York, who has twelve years of experience on the bench. On the topic of emerging technologies in lie detection, Judge Cohen said "I think it is a big area of advancement in law, and could help us resolve cases and work through investigations ... just looking at what high resolution cameras have done for us with law shows that we can often identify the right culprit or prove that something happened or didn't happen." Judge Cohen went on to say "Our whole society is changing because of technology. If it could be determined to be reliable ... then it could open up a whole new phase of things". When asked about relating the polygraph to these developing technologies and their associated threats of unreasonable searches, Cohen remarked "Polygraphs are voluntary. This [referring to these developing technologies] would also be a voluntary procedure as well, at least for the foreseeable future. Therefore it would not ever reach the bounds of an unreasonable search." Here we see an important point: when consent has been unquestionably obtained from an individual, usage of the polygraph or of technologies that replace the polygraph never constitutes an unreasonable search. However, the utility of technologies designed in this way is vastly if not completely diminished, because their less than 100% accuracy gives rise to their "highly prejudicial" nature. Thus, it is prudent that in developing these technologies, an entirely different approach be taken in their design primitives, development, and deployment.
Essential Design Primitives
If accurate truth metering and/or thought exposure is used by law enforcement, it should be equally effective across all races and genders. Therefore, it is the responsibility of us and others who are researching and developing this technology to collect diverse data. We believe this could even be enforced by federal funding guidelines for those who are studying deception detection using Artificial Intelligence. In order to receive federal grants for this purpose, labs could be required to meet certain diversity standards in the data they collect and use in their deception detection algorithms. Additionally, the performance of said lie detection technologies should be standardized across all law enforcement entities. Another relevant issue is how to maximize accuracy (as well as the ability to deploy such devices in the courtroom) while preserving investigator autonomy. One solution proposed by Kleinberg et al., in their prediction framework for whether judges should jail or release criminal defendants, is to integrate the machine into the existing procedure, creating a human-machine symbiosis (Kleinberg et al. 2018). Instead of having the algorithm make all the decisions, the algorithm should give the people using it more information so that they can make informed decisions themselves. In this vein, it is our suggestion for researchers to create an output that is nuanced and detailed, rather than a binary 1 for "lying" and a 0 for "not lying." The lie detection device should detect and display indicators of deception when they appear. This fundamentally changes the role of the device: instead of performing the evaluation based on an arbitrary decision boundary, it acts as a tool to assist people in doing the evaluation themselves (a minimal sketch of such an output is given below).
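The following is a minimal sketch of what such a nuanced, indicator-level output could look like. The indicator names, weights, and logistic combination are hypothetical placeholders chosen for illustration; they are not a validated deception model.

```python
# Illustrative "nuanced output": per-indicator evidence plus a probability, with no
# hard lying / not-lying threshold applied by the tool itself. All names and weights
# below are hypothetical placeholders, not a validated model.
import math

def deception_report(indicators, weights):
    score = sum(weights.get(name, 0.0) * value for name, value in indicators.items())
    probability = 1.0 / (1.0 + math.exp(-score))  # logistic link keeps the output in (0, 1)
    return {
        "indicators": indicators,              # shown to the trained human operator
        "estimated_probability": probability,  # reported, not thresholded, by the tool
    }

report = deception_report(
    indicators={"response_latency": 0.8, "gaze_aversion": 0.2, "pitch_variability": 0.5},
    weights={"response_latency": 1.1, "gaze_aversion": 0.4, "pitch_variability": 0.7},
)
print(report)
```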
To interpret these more nuanced results, trained human operators should be employed. The use of such operators could even be required for the technology to be used. These operators should understand how to interpret the output and convey that information to investigators, while also understanding and conveying potential biases in certain questions as well as the potential for inaccuracy in the technology. The US can easily be viewed as a conglomeration of diversity, given that most of the population can trace their family roots back to a family that immigrated to the US. This creates a melting pot of different cultural backgrounds that inevitably find their way into the courtrooms. For the challenge of integrating all these cultures successfully and fairly into the legal system, the AI-driven algorithms behind sensing technologies could provide valuable, novel solutions. Currently, the US legal framework does not support the wearing of masks in the courtroom. However, given the circumstances brought on by the COVID-19 pandemic, this restriction has temporarily been lifted. This raises the question: should it ever have been a stipulation in the first place? Take for example a woman with a Muslim background who wishes to uphold her cultural traditions and wear a hijab during a court proceeding (e.g., she is called as a witness to bear testimony to the actions of another person). With advanced sensing technology, many options exist. First and foremost, the identity of the witness can be unquestionably established. This is perhaps the most important aspect to uphold. In the rarest of cases, let us for argument's sake assume that the wearing of a hijab interferes with one or two jury members' interpretation of, or the perceived credibility of, the witness's testimony. In such a situation, human operators interpreting the results of the technology employed in the courtroom could be trained to identify this bias and address it. This is just one small example of where these advanced AI-driven sensing algorithms (which are also the backbone of facial recognition technology, creating deepfakes, and filtering videos in general) can be used to treat every person who comes into the legal system with respect and fairness of the highest standard. While dishonesty might frequently be harmful to people and society as a whole, we do believe that people have the right to exercise their ability to lie non-maliciously. Non-malicious lies are frequently altruistic or told by people to protect themselves or others, and allowing lie detection to remain unrestricted would prevent these kinds of lies. We believe the harm caused by this would outweigh the benefit of allowing malicious lies to be detected, and therefore we believe that accurate thought exposing technologies should be regulated for the general public. Through establishing these regulations, we not only prevent potentially malicious uses but also offer further protections for the people against unreasonable searches of their mental sanctuaries. Recall that in the case of Dow Chemical Co. v.
United States, because the observation of the industrial complex was done through a high resolution camera and the general public had access to that technology, the court ruled that this did not constitute an unreasonable search. With the public not having direct access to these emerging accurate thought exposing technologies, we prevent this legal precedent from being carried forward in a way that would enable Government entities to take advantage of individuals in a variety of contexts. It is our position that truth metering devices could remain available to the general public, as long as they were limited to uses that would be deemed non-offensive to a reasonable person. This would allow them to be used for lie detection in contexts such as navigating a foreign environment and dealing more fairly and justly with children. We formally take the stance that thought exposure systems must be regulated more strictly, as they can reveal more private information about a person (recall the unfortunate circumstances that led to the death of Tyler Clementi). We recommend that accurate thought exposing technologies be regulated for the general public (potentially by using a permit schema that is externally audited by multiple third parties relatively frequently), and that their unconsented use be codified as an illegal mental trespass. Accurate deception detection might not be available in the immediate future, although it is likely closer than most of us think. The technology's ambiguous legal status makes it necessary to establish guidelines before it is fully developed and available. The introduction of AI-driven advanced sensing technologies for this task raises new concerns regarding privacy and consent due to their noninvasive nature. Defining the technologies precisely as "accurate thought exposing" and "accurate truth metering" technologies is essential for proposing legal doctrine that is as comprehensive and airtight as possible to safeguard civil liberties. Otherwise, potential loopholes could emerge in the future, causing harm to society and bypassing the intentions of the law and the protections that it offers (as is likely the case currently with No Lie MRI and the EPPA). Emerging lie detection technology will be a powerful tool, benefiting the criminal justice system, the medical community, and many others. In order to utilize it to its fullest potential, however, it must be developed and used responsibly with the necessary restrictions - or it may end up doing more harm than good. Rather than nurture and help his creation acclimatize to society, Dr. Frankenstein rejected him. If we continue to ignore the legal status of coming AI technologies, rather than nurture and regulate them, we too may end up with a Frankenstein monster, as Larson proclaimed.
References
The supreme court announces a fourth amendment general public use standard for emerging technologies but fails to define it: Kyllo v. United States. U. Dayton L. Rev.
Fourth amendment first principles
The Uighurs and the Chinese state: A long history of discord. BBC News.
The fifth amendment and compelling unencrypted data, encryption codes, and passwords
The Expression of the Emotions in Man and Animals
Who let the dogs out? The supreme court did in Illinois v. Caballes by placing absolute faith in canine sniffs
Functional MRI-based lie detection: scientific and societal challenges
Artificial intelligence for coronavirus outbreak
Rethinking canine sniffs: The impact of Kyllo v. United States
Generative adversarial nets
Neuroscience-based lie detection: The urgent need for regulation
Lie detection and the polygraph: A historical review
How smarter AI-powered cameras can mitigate the spread of Wuhan novel coronavirus (COVID-19), and what we've learned from the SARS outbreak 17 years prior
Self-incrimination: what can an accused person be compelled to do?
Frye versus Daubert: Practically the same
The EPA's use of aerial photography violates the fourth amendment: Dow Chemical Co. v. United States
Using brain imaging for lie detection: Where science, law, and policy collide
Design and implementation of monitoring system for breathing and heart rate pattern using WiFi signals
Polygraph testimony and juror judgments: A comparison of the guilty knowledge test and the control question test
Design contractualism for pervasive/affective computing
Saving face: Investigating the ethical concerns of facial recognition auditing
Affective sensors, privacy, and ethical contracts
Ex-CNN host 'likely' to be sued over now-deleted 'punchable face' tweet: Sandmann attorney. The Washington Times
Square pegs and round holes: Substantive due process, procedural due process, and the bill of rights
Reviving trespass-based search analysis under the open view doctrine: Dow Chemical Co. v. United States. NYUL Rev. 63:191
Automated dyadic data recorder (ADDR) framework and analysis of facial cues in deceptive communication
A year ago, the media mangled the Covington Catholic story. What happened next was even worse
History of lie detection
Give me your password because congress can say so: An analysis of fifth amendment protection afforded individuals regarding compelled production of encrypted data and possible solutions to the problem of getting data from someone's mind
A comparative survey of methods for remote heart rate detection from frontal face videos. Frontiers in Bioengineering and Biotechnology 6:33