
Deceiving Themselves: How Cops’ False Belief in Their Ability to Detect Deception From Nonverbal Cues Leads to Miscarriages of Justice

by Casey J. Bastian

“The mistakes of lie detection are costly to society and people victimized by misjudgments. The stakes are really high.” — Maria Hartwig, John Jay College of Criminal Justice

For as long as human beings have communicated, many have practiced the art of deception. That people can lie is a fact of everyday life, and lie they will. Research suggests that an average person will tell two lies per day. Research also shows that during a typical 10-minute conversation, 60 percent of people will tell a lie. Obviously, some lie much more frequently than others. The motives are as varied as the actual lies.

The great majority of lies are low-stakes. These are the “little white lies” – about personal attitudes, feelings, and opinions – told to preserve and support social cohesiveness. And while these lies can cause some damage, they are generally harmless.

The darker side of deception involves high-stakes lies – lies people consider serious, often told to hide significant transgressions such as cheating on a test or infidelity to a spouse. The most serious of these are told to hide criminal acts and are told for the purpose of self-preservation.

The pertinent question is: How can lies be detected as they are being told? Identifying a liar isn’t obvious or easy. If you believe it is, you’re likely deceiving (lying to) yourself. And this has been a problem for millennia. Human beings simply have a hard time detecting deceit based on nonverbal cues.

Several decades of research reveal that even “experts” struggle to accurately detect deception through nonverbal behavioral cues. The psychological folklore that body language and physiological reactions reveal deceit just isn’t true. Innocent people can display the very same behaviors as a guilty person in high-stress circumstances – like a criminal interrogation. To make matters worse, nearly 70 percent of everything we “say” or convey is communicated nonverbally. According to Judee Burgoon, Ph.D., professor of communication at the University of Arizona, “There really is no Pinocchio’s nose.”

Despite decades of research to the contrary, members of law enforcement and the criminal justice system hold a persistent and unshakeable belief that they possess some innate ability, honed through years of experience, to detect when a suspect is being deceptive based on nonverbal cues. Yet data and research have established that, despite their unjustifiably inflated sense of their abilities, their true accuracy rate is little better than “chance,” even for them. That is, flipping a coin is just as accurate as they are at consistently identifying when someone is being deceptive by scrutinizing nonverbal cues.

That is a genuine and alarming problem. These professionals hold people’s liberty and lives in their hands. Deceiving themselves about the accuracy of their abilities has led to far too many wrongful arrests and convictions. When capital punishment is at stake, it’s literally “an issue of life or death.”

An infamous example is the case of 14-year-old Canadian Steven Truscott, who was wrongfully convicted of raping and murdering Lynn Harper in 1959. The inspector was convinced Truscott was guilty after an initial interview because Truscott was observed acting “nervously.” The inspector’s belief that Truscott was a “lying, sexual deviant” led to the boy’s conviction and death sentence – overturned only decades later, after Truscott had already endured the trauma of the ordeal.

Psychologists Bella DePaulo, of the University of California, Santa Barbara, and Charles F. Bond, Jr., of Texas Christian University, reviewed 206 previous studies on deception detection in 2006. These studies “involved 24,483 observers judging the veracity of 6,651 communications by 4,435 individuals.” Neither student volunteers nor law enforcement experts distinguished true from false communications at a rate higher than 54 percent. Other studies reveal the same results. Across individual experiments, accuracy ranged from 31 to 73 percent – a 52 percent average. “The impact of luck is apparent in small studies,” said Bond.
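
To see why a 54 percent hit rate is “little better than chance,” consider a back-of-the-envelope simulation. The sketch below is purely illustrative and not drawn from the studies themselves; it assumes an observer who makes 100 truth/lie judgments by flipping a coin and asks how often pure guessing matches or beats the meta-analytic average.

```python
# Illustrative only: compare coin-flip guessing to the reported 54 percent accuracy.
import random

def guessing_accuracy(num_judgments=100):
    """Accuracy of an observer who flips a coin for each truth/lie judgment."""
    correct = sum(1 for _ in range(num_judgments) if random.random() < 0.5)
    return correct / num_judgments

random.seed(1)
runs = [guessing_accuracy() for _ in range(10_000)]
mean_accuracy = sum(runs) / len(runs)
beats_experts = sum(acc >= 0.54 for acc in runs) / len(runs)

print(f"Average accuracy of pure guessing: {mean_accuracy:.1%}")   # about 50%
print(f"Guessers scoring 54% or better:    {beats_experts:.1%}")   # roughly one in four
```

In other words, under these toy assumptions, a substantial share of pure coin-flippers would look just as “accurate” as the trained observers in those studies.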

The belief that a person’s deceit can be correctly inferred from nonverbal cues – e.g., speech errors, nervous fidgeting, or gaze aversion – is a continuing mythology that endangers the legitimacy of the criminal justice system, people’s freedom, and even life. These pervasive misconceptions are found globally, across many cultures. “One of the problems we face as scholars of lying is that everyone thinks they know how lying works,” said Hartwig, who is a psychologist and deception researcher at John Jay College of Criminal Justice.

The persistence of nonverbal lie detection myths is intriguing. It is also demonstrably dangerous. Most importantly, it is considered a covert threat to criminal justice systems worldwide, and America is not exempt. The detrimental effects of these discredited, pseudoscientific, and unfounded beliefs are not easy to measure. While we may never know “how many innocent people have suffered unjust punishment” because of wrongful convictions based on these myths, we can be confident “this problem is substantial.”

What Exactly Is ‘Deception’?

Deception is obviously a pervasive, even necessary, phenomenon in human communication. As a result, precisely what counts as “deception” is a subject of debate. It really is not straightforward. You can “deceive” someone without “lying.” For example, someone tells you that it’s not going to rain, so you don’t take an umbrella when you leave the house. But that person had misinterpreted the weather report. Now you’re wet. You were deceived but not lied to.

As there is a spectrum of what can be called deceptions, the idea has been studied within multiple disciplines. These include linguistics, philosophy, psychiatry, and social psychology. For our purposes, we focus on finding methods of detecting deceptions that are “lies,” and specifically, those considered high-stakes. A sufficient definition of deception in this context is: “a successful or unsuccessful attempt, without forewarning, to create in another a belief which the communicator considers to be untrue.” With that definition in mind, the terms deception and lie can be used interchangeably.

The Need to Detect Deception

As societies evolved, becoming more cooperative and structured, standards of conduct (laws, regulations, policies, customs) were established. For these standards to be effective, they must be adhered to by each member of the society. It is the only way to ensure stability and effectiveness in established social constructs. Those who violate these standards must be identified, and the violation rectified. This is how mitigating the effects of deception came to be viewed as a legal challenge.

To protect individuals within society and society as a whole, citizens rely on the rule of law. A functioning legal system is the foundation of every developed society. For people to trust such systems, only the culpable can be sanctioned. And to do so, those individuals must be correctly identified. When the system gets it wrong – especially in the sphere of criminal law – injustice follows. Unfortunately, the American criminal justice system gets the wrong person far too often. This frequently happens because, in the effort to detect the deceptions of the guilty, misconceptions about nonverbal behavioral cues malign the innocent. Law enforcement officers’ false and inflated belief in their own ability to detect deception from nonverbal cues results in coercive investigations, identification of the wrong suspect, false confessions, and wrongful convictions.

Mounting evidence shows that our need to detect deception has created another problem for society – and law enforcement and the courts know it. Yet they continue to promote misconceptions during seminars and in training manuals. Officers, agents, and judges are still “sympathetic to unfounded, discredited, and pseudoscientific claims” regarding deception detection.

A History of Deception

Long before deception became a legal challenge, it was a moral issue. Some believe that a duplicitous serpent coaxed Eve into committing the original sin, enshrining deception as the ultimate source of evil. Aristotle declared that “falsehood is in itself mean and culpable.” German philosopher Immanuel Kant described truthfulness as an “unconditional duty which holds in all circumstances.” Others postulated dissimilar views. The Italian saint Thomas Aquinas believed a lie told in service of virtue was appropriate. Machiavelli “extolled deceit in the service of self.” Divergent perspectives aside, the existence and prevalence of deceit itself is acknowledged by each.

Concerning deception, people share a lot of common traits: they’ll tell lies, be told lies, and are quite hypocritical about both. People lie to appear sophisticated, acclaimed, successful, or ironically, virtuous. Lies are told to protect the feelings of the speaker or another. Some even lie for fun – what some psychologists call “duping delight.” Such lies are “little lies of little consequence or regret.” Situations calling for such deceptions are “momentary exigencies,” a necessary evil of social life that produces little guilt, anxiety, or shame. To the deceiver, such lying is innocuous.

However, lies that are anything but innocuous are told quite frequently as well, albeit in relatively smaller numbers. Deceit for the purpose of manipulation, unjust enrichment, or avoiding responsibility for immoral or criminal acts is pernicious towards society. Though the idea of what is “immoral” or “criminal” is relative to a given society, the point remains the same.

Whether superficial or quite consequential, when people lie, it can be psychologically justified by the deceiver. But when they are the victim of deceit, people become quite moralistic, indeed. Then deception becomes wrong and “reflects negatively on the deceiver.” Researchers developed the Moral Psychology Theory that proposes a “double-standard hypothesis” to explain the apparent moral ambivalence towards deception. Deception scholars are exploring this phenomenon in more detail hoping to explain studies that reveal duplicity is considered one of the “greatest moral failings” in spite of the very human tendency to lie.

Out of 555 personality traits, the trait of being a liar was rated as “least desirable.” It follows then that “social logic assumes honest people always act honestly.” Considering the apparent reality, this is a dubious assumption; but social cohesion requires this belief. To declare otherwise and label another’s statement a lie is to “imply that the person who made the statement is a liar.” Such accusations are quite serious, particularly in matters of consequence. Discovery of serious deception can have disastrous consequences for the liar’s identity, reputation, or freedom. Those being deceitful in serious matters will take advantage of this natural deference.

With all this history of interacting with deceit, people are still really bad at accurately detecting it. The problem is that the signs of deception are typically subtle and not primarily revealed in one’s body language.

The Rise of Pervasive Mythologies

A belief that lies are transparent and revealed through nonverbal behavior has been recorded as early as 1000 B.C. The Chinese believed that a suspect should be given a mouthful of dry rice. If the rice remained dry after a period of time, the suspect was guilty. This is one of the earliest known beliefs that a physiological response arising from fear or anxiety might produce an ascertainable result – in this case, decreased salivation. It is safe to conclude that many innocent people were executed based on this, the world’s first known – and flawed – deception detection model.

Records from 900 B.C. reveal it was believed “liars shiver and engage in fidgeting behaviors.” In 1908, German-born American psychologist Hugo Munsterberg postulated that observations of “posture, eye movements, and knee jerks” reveal deception. A famous quote by Austrian psychologist Sigmund Freud is often mocked by modern researchers for its now-apparent inaccuracy. Freud claimed “no mortal can keep a secret. If his lips are silent, he chatters with his finger-tips; betrayal oozes out of him at every pore.” As it turns out, sometimes a fidget is just a fidget. Unfortunately, many in the law enforcement community have not yet heard that deception detection based on nonverbal cues has been thoroughly debunked as lacking any scientific basis.

Having captivated the human imagination for millennia, deception was destined to attract psychological investigators. There has been extensive research into deception detection, and interest continues to grow. For example, between 1966 and 1986, more than 415 psychology articles were written on the subject – an average of nearly 21 per year. In 2016 alone, 206 new articles were published. Critical discussions of nonverbal lie detection had become necessary because “judgements of nonverbal behavior can be made in every social encounter,” often to someone’s detriment. In 2019, the Annual Review of Psychology published its first article about nonverbal behaviors and deception, which firmly declared that “we vastly and consistently overestimate our skills.”

“How can you tell when people are lying?” This question was posed to participants in 75 countries encompassing 43 languages in one study. In another, the Global Deception Research Team (“GDRT”) interviewed people in 58 countries. Researchers in both studies wanted to know if there are worldwide, pan-cultural stereotypes or if they are culture-specific. The most precise answer is that every culture associates lying with “actions that deviate from the local norm.” But researchers did find some pan-cultural commonalities.

Americans associate 18 different behaviors with deception. The number one stereotype identified in 11,157 responses is known as “gaze aversion,” a belief that liars cannot maintain eye contact. Similar stereotypes are identified by Western Europeans, including those from Britain, Germany, the Netherlands, Spain, and Sweden. Other stereotypical beliefs about deception are: arm, hand, and finger movements; changes in speech rate; making sigh-like sounds; you must know a person to detect deceit; tone of voice; eye-related cues beyond gaze aversion (called “spontaneous saccadic eye movements”); sweating; playing with clothes, hair, or objects; unspecified behavioral changes; and weak arguments and logic. All other cues aside, the verbal content revealed by weak arguments and logic is likely the most accurate indicator of deception, certainly more so than nonverbal cues.

The GDRT study revealed a total of 103 beliefs drawn from various cultures. The lowest prevalence of the gaze aversion stereotype is found in the United Arab Emirates (“UAE”). The gaze aversion stereotype was identified by 20 percent of respondents in the UAE, placing it eighth out of 103 on the GDRT coding system.

Researchers believe the gaze aversion myth is found within many cultures due to neural structures in the human brain. These neural structures are specialized for perceiving eye contact and “are sensitive to gaze direction from birth.” When a mother breaks natural eye contact, this can be perceived as the first sign of disapproval that infants experience. By age three, children know that adults respond with disapproval to intentional lies. This leads to a mental connection between deceit and gaze aversion. So, while this myth may be widely held, we must exercise caution.

Gaze aversion and other nonverbal behaviors are culturally mediated. Even within Western cultures, Black Americans are more prone to gaze aversion than white Americans. Turkish and Moroccan people living in the Netherlands display more gaze aversion than native Dutch people. Looking into someone’s eyes may be polite in Western cultures, but it is considered quite rude in others. Japanese and Aboriginal Canadian cultures are quintessential examples of such belief systems. Caucasian Canadians view those who avoid eye contact as “being shifty, devious, dishonest, crooked, slippery, untrustworthy, etc.” Aboriginal Canadians customarily avoid direct eye contact because it is considered “rude, hostile, and intrusive.” Imagine the problems this caused in early, everyday interactions – and even today when a Caucasian Canadian law enforcement officer questions an Aboriginal Canadian. The latter’s desire not to be rude or hostile is the very behavior the former interprets as being deceptive.

Any belief that lies can be detected from observations of nonverbal behavioral cues in a useful, systematic manner on an “individual culture-free basis” is thoroughly unreliable. As one group of researchers put it: “We may have been looking for a lawfulness in human behavior that exists only in our minds.”

The Search for Viable Detection Methods

As people realized that lying is prolific and can have harmful impacts, they tried to learn how to spot a liar. And yet, in general, people are better at lying than at detecting lies. After thousands of years, this remains true.

It turns out that “our ancestral environment did not prepare us to be astute lie catchers.” Our distant ancestors typically lived in environments lacking privacy. This reduced the prevalence of serious, high-stakes lies. Opportunities to study demeanor and to interpret behavioral cues to detect deceit were too infrequent. Most serious lies were instead “discovered by direct observation or physical evidence,” not interpretations of demeanor. Serious misdeeds rarely occurred and didn’t go unnoticed. The reputational costs to an individual would have been too great and inescapable. A reliable ability for nonverbal cue lie detection just never developed.

Other researchers argue that a “general deception-detection incompetence” must be “inconsistent with evolutionary theory.” This theory suggests that effective detection of deception was critical for the purposes of survival and reproduction as a species. Humans trying to evade discovery of their deceptions were constantly adapting, but the same was likely the case for those trying to detect deception. But this generally involved low-stakes deceptions like the location of food stores, not murder. Yet, a small number of costly mistakes should have created the wisdom necessary to detect harmful deceptions. For a variety of reasons, a permanent evolutionary ability to detect deception never materialized. Our evolutionary history has not left us “very sensitive to the behavioral cues relevant to lying.”

This inability to detect lies based on nonverbal cues has become more consequential as society evolves. The prevalence of lies has increased in modern societies. There are more opportunities to lie and fewer immediate consequences if detected. Even serious lies about conduct outside the criminal sphere don’t necessarily result in permanent reputational damage today. It is much easier to pick up, move, meet new people, and start over (this may be changing back to being more difficult in the information age, but the point remains). It is also easier to conceal evidence of activities about which one might need to lie. Because the evidence of lies is not so evident, demeanor became the primary means by which people tried to make deception detection judgements.

Not only has evolution failed to teach us accurate nonverbal detection methods, culture has left our capacity diminished as well. We’ve been taught not to identify others’ lies. If someone lies to protect their privacy, we’re okay with that. For example, as a child, if your parents said they were going to take a “nap,” whether this was true or not simply didn’t matter. If they deceived you, fine. Being trusting is also helpful in relationship development. Always being suspicious undermines the establishment of “intimacy in mating, friendships, or ongoing work relationships.” Trust makes life easier, “so we err on the side of believing the liar.”

Often, we want to be misled, so we “collude in the lie unwittingly” because it is better not to “know” the truth. This applies to many with a cheating spouse. One doesn’t want to get caught, and the other doesn’t want the marriage to end. As a result, the deceptions aren’t “detected.” Or, if your wife wants to know your opinion about another woman’s looks, even if the answer is obvious, the deceptions are exchanged, and everyone remains happy with what is technically a lie. Even though some would consider these types of lies “serious,” the need to tell them and the desire to not detect them serve a vital social purpose.

However, such rationales do not adequately explain why most members of modern criminal justice systems have such a difficult time discerning lies through demeanor. The police don’t adopt a “trusting stance” with an accused; in fact, the exact opposite is true. Law enforcement officers do not collude in their being deceived. They trust no one while investigating criminal acts, assuming everyone is lying. And that might be part of the problem. To them, everyone “acts” guilty. These are the misunderstandings and ignorance of human behavior that miscarriages of justice are born of: highly suspicious people who believe everyone is lying, who have tremendous power over life and liberty, and who can’t accurately detect deceit through demeanor but stubbornly insist they can despite conclusive scientific evidence to the contrary. Confidence does not equate to ability.

A 2003 study revealed police officers overestimate their ability to detect lying. Sixty officers were asked to assess their detection accuracy. Even when performing below chance levels, each assessed their accuracy as “high.” When provided feedback that confirmed their detections were effective, any “notion of their abilities increased.” Negative feedback caused them to “rate their lie detection abilities lower.” What this reveals is that law enforcement is susceptible to its own belief systems. In the real world, this tendency of police to “overestimate their ability to detect deception can change suspicion into certainty and increase the risk for a false confession,” as well as cause investigators to remain doggedly fixated on a particular suspect based on little more than their misplaced belief in their alleged ability to detect when a suspect is lying.

The Turn to Technology

Around the turn of the 19th century, Franz Joseph Gall devised a method called phrenology. The idea was that the shape of the skull could reveal behavioral patterns, including “the tendency to lie” and “engage in criminal behavior.” While Gall’s method was abandoned as a lie detection method, it did lead to a “medical model of criminal behavior.” This model first posited that behaviors are affected by brain malfunctions. As a result, many crimes were reevaluated, which likely saved a “multitude of mentally ill people from being unfairly sentenced.”

Jean-Hippolyte Michon introduced graphology in 1875. Designed to detect forged signatures, it led to the assumption that certain personality traits are revealed through “peculiarities of handwriting.” As a method of lie detection, it was abandoned after WWI. Italian criminologist, physician, and anthropologist Cesare Lombroso created the first modern lie detector in 1881. “Lombroso’s Glove” attempted to chart changes in blood pressure. The device was later improved by William M. Marston after WWI and designed to record changes in breathing and blood pressure during interviews. Based on this, John Larson and Leonarde Keeler designed the “Cardio-Pneumo Psychograph” – or simply, the polygraph. A polygraph records changes in blood pressure, galvanic skin response (bioelectric reactivity), and respiratory rate. Polygraph results are based on the “relationship between physiological changes which manifest when a person is not telling the truth.” The reality is that these physiological changes are varied and can be produced by states other than lying.

By the late 1990s, the polygraph was frequently used in business and law enforcement environments. The need to ensure the polygraph’s reliability increased as “growing popularity” and “recurring inaccurate results” were observed. The National Academies of Science (“NAS”) tested the polygraph in 2003. The NAS found reliability (of detecting targeted physiological changes) of between 81 and 91 percent, a finding confirmed by six independent research projects.

But while the polygraph might reliably chart physiological changes, it doesn’t “detect” lies – and that is supposed to be the entire purpose of the polygraph. The device “measures physiological responses postulated to be associated with deception.” That is a huge difference. The results of a polygraph examination demonstrate similar emotional responses by both those lying and those telling the truth. The nonverbal responses are measured by the polygraph but must be interpreted by the interviewer. This allows for bias to be introduced. When combined with a well-trained examiner and other verbal analysis techniques, it can be a useful tool. Its limitations are recognized, and polygraph results are typically not admissible in court proceedings.

In 1993, research into manufactured expressions of pain drew on the Facial Action Coding System (“FACS”), a system for measuring any facial expression a human can make. The FACS manual, first published in 1978 by Ekman and Friesen, describes how an individual can code each facial Action Unit. A training program, eventually renamed the Ekman Micro Expression Training Tool after its inventor, was developed to allow someone to become a self-instructed, certified micro-expression expert. It has never been shown to accurately detect lies.

Another technical method developed was Voice Stress Analysis (“VSA”). VSA measures “fluctuations in the physiological microtremor present in speech.” Every muscle in the human body exhibits microtremors. Microtremors in the vocal cords have a frequency of around 8-12 Hz (one hertz is one cycle per second, so 8 to 12 microtremors per second). As lying is perceived to be a stressful event and stress causes “microtremor shifts in frequency,” VSA is regarded as a potential means to detect false statements. A 2013 study found VSA can “identify emotional stress better than the polygraph.” It still doesn’t technically detect lies either, though. Further testing of VSA’s reliability in the justice system is ongoing, and the method is viewed as having potential.
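
For readers curious what “measuring an 8-12 Hz microtremor band” might look like in practice, below is a deliberately simplified, illustrative sketch – not any vendor’s actual VSA algorithm. It assumes a mono audio signal and simply asks what fraction of the signal’s slow amplitude-envelope fluctuations falls in the 8-12 Hz band; commercial systems are proprietary and far more involved.

```python
# Simplified illustration only: estimate how much of a speech signal's amplitude
# envelope fluctuates in the 8-12 Hz "microtremor" band. Not a lie detector.
import numpy as np
from scipy.signal import butter, hilbert, resample_poly, sosfiltfilt

def microtremor_band_fraction(audio: np.ndarray, fs: int, env_fs: int = 100) -> float:
    """Fraction of envelope energy in the 8-12 Hz band (toy metric, not a 'lie score')."""
    envelope = np.abs(hilbert(audio))                            # slow amplitude variations of speech
    envelope = resample_poly(envelope, up=1, down=fs // env_fs)  # envelope needs only ~100 Hz sampling
    sos = butter(4, [8.0, 12.0], btype="band", fs=env_fs, output="sos")
    tremor = sosfiltfilt(sos, envelope)                          # keep only 8-12 Hz fluctuations
    return float(np.sum(tremor ** 2) / np.sum(envelope ** 2))

# Synthetic two-second "recording" of noise, sampled at 8 kHz, just to exercise the code.
fs = 8000
audio = np.random.default_rng(0).standard_normal(2 * fs)
print(f"Share of envelope energy at 8-12 Hz: {microtremor_band_fraction(audio, fs):.1%}")
```

Even a perfect measurement of that band would, of course, only quantify stress-related tremor – not truthfulness.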

While technological methods dependent on physiological responses waxed and waned, the field of neuroscience was developing a variety of detection methods at the “highest levels of mental processes.” To measure brain activity, several methods were developed: transcranial magnetic stimulation (“TMS”), functional magnetic resonance imaging (“fMRI”), positron emission tomography (“PET”), and Brain Fingerprinting (“EEG wave”).

The first EEG wave (electroencephalograph) method was devised in 1924 by Hans Berger. Theoretically, the brain processes unknown or irrelevant information and known or relevant information differently. If details of the crime were present in the brain of a suspect, this should be “revealed by a specific pattern in the EEG wave.” Brain Fingerprinting uses P300 brain response to detect recognition of known information.

In 1995, one of the inventors of EEG wave detection discovered P300-MERMER (Memory and Encoding Related Multifaceted Electroencephalographic Response). P300-MERMER provides a “higher level of accuracy and statistical confidence than the P300 alone.” Peer-reviewed publications report “less than 1% error rate in laboratory research.”
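
The core logic of these P300-based “concealed information” protocols is simple averaging: EEG epochs time-locked to crime-relevant “probe” stimuli are compared with epochs for irrelevant stimuli, and a reliably larger late positive deflection to probes is read as recognition. The sketch below illustrates only that averaging step with synthetic numbers; the sampling rate, time window, and amplitudes are assumptions for demonstration, not parameters taken from Brain Fingerprinting itself.

```python
# Synthetic-data illustration of the averaging logic behind P300-style tests.
import numpy as np

FS = 250                                            # assumed EEG sampling rate (Hz)
WINDOW = slice(int(0.30 * FS), int(0.60 * FS))      # 300-600 ms post-stimulus window

def late_positive_mean(epochs: np.ndarray) -> float:
    """Mean amplitude in the late positive window of the trial-averaged waveform."""
    return float(epochs.mean(axis=0)[WINDOW].mean())

rng = np.random.default_rng(42)
n_trials, n_samples = 60, FS                                      # sixty one-second epochs per condition
irrelevant = rng.standard_normal((n_trials, n_samples)) * 5.0     # background EEG noise (microvolts)
probe = rng.standard_normal((n_trials, n_samples)) * 5.0
probe[:, WINDOW] += 8.0                                           # simulate a P300-like response to recognized details

print(f"Probe window mean:      {late_positive_mean(probe):+.2f} uV")
print(f"Irrelevant window mean: {late_positive_mean(irrelevant):+.2f} uV")
# Proponents interpret a reliably larger probe response as evidence of recognition.
```

As discussed below, the catch is not the arithmetic but the setup: the comparison is only meaningful if the probe details could not have reached the suspect any other way.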

Brain Fingerprinting does exhibit disadvantages. For this method to be reliably used in criminal investigations, investigators would need a sufficient amount of very specific information about the crime and suspect. This is the only way a suspect’s EEG wave readings could be “matched” to a “correct” determination. But if the suspect had captured knowledge of the crime’s details from another source, like the investigator, the results would be corrupted. Brain Fingerprinting requires more time and preparation, as well as being more costly, than methods such as the polygraph. This places real limitations on its availability for broad use.

PET and fMRI devices focus not on the peripheral nervous system, as the polygraph does, but on the central nervous system – that is, the brain and spinal cord. Expanding on fMRI in 2002, a study used BOLD (Blood Oxygenation Level-Dependent) fMRI to “localize changes in regional neuronal activity during deception.” The study subjected 18 students to a Guilty Knowledge Test involving playing cards. Researchers were able to identify significantly different areas of the brain that varied “between the two conditions of telling the truth and lying.”

In 2003, Harvard researchers used BOLD fMRI to study localized brain changes in three scenarios: memorized lies, spontaneous lies, and the truth. The researchers observed that each type of lie created brain activity in the “anterior prefrontal cortices bilaterally,” areas believed to be involved in retrieving memory. The 2002 and 2003 studies each found that the anterior cingulate cortex is activated by spontaneous lies. Researchers suggest that this brain activity “may be related to the conflict associated with inhibiting truth.” Lying takes deliberation and intent, and this causes particular, measurable brain activity. Further studies identified up to seven areas of the brain that predominantly exhibit activity when a lie is being told. Early detection studies using this information reported a 90 percent accuracy rate.

But again, researchers urge caution. Other studies published findings that these methods are “not sufficiently precise” and “lack strong empirical foundation.” A 2008 review identifies the following issues: “problems with replication, large individual brain differences, and unspecified brain regions associated with truth telling.” Several other limitations have become apparent. fMRI lie detection experiments typically involve young, healthy adults, but BOLD activity changes with age. And these experiments do not specifically answer the “lie or truth” question either. Instead, these methods simply reveal which parts of the brain are activated when lies are told in an experimental setting.

The problem is that in each experiment, contrived lies told by different subjects produced similar activity in different parts of the brain. In other words, the method might reveal possible deception in one person but fail to do so in another. Those detecting deceptions would technically need to know how each individual brain functions to accurately determine deception.

And that might be the most significant limitation on lie detection techniques: the human brain itself. Any accusation, right or wrong, will activate parts of the brain. The brain of every individual is so unique that it might be impossible to precisely predict deception. It is also possible for some to “hide” activity by thinking of complex, different activities like mathematical operations, etc. Researchers call this “self-defense.”

Neither direct nor indirect observation of behavioral, physiological, or neurological nonverbal cues has been found to be a wholly and independently accurate method of deception detection – not even with the most advanced, modern technological assistance. Each is just a different way of interpreting nonverbal cues.

Awareness of the deficiencies in nonverbal methods has not sufficiently diminished their popularity or use in lie detection. Organizations like the ACLU argue that even if these technologies could reliably detect deception, their use should still be opposed. The ACLU views “techniques for peering inside the human mind as a violation of the Fourth and Fifth Amendments, as well as a fundamental affront to human dignity.”

A Demand for ‘Reliable’ Truth Detection Methods

New and various methods to detect lies in people’s personal and professional lives are very popular. An Internet search related to “ways to catch a liar” yields nearly 10 million references to much-heralded methods. These range from the clearly exaggerated but plausible to some that seem, well, deceptive, with costs ranging from $19.99 - $109.99.

For the right price, the following courses are available: Never Be Lied to Again: Advanced Lie Detection Course; How to Get the Truth in 5 Minutes or Less; Award-winning Lie Detection Course: Taught by FBI Trainer; Learn How to Spot the Lie in ANY Speech; The Complete Catch-the-Liar Masterclass: Become a Human Lie Detector; How to Detect Deception: Secrets of Human Lie Detectors; Signs of Lying – Is He Really Mr. Right?

You can even find those that attempt to draw from equally unreliable law enforcement training methods. Just buy the “Detective’s Guide to Lie Detection and Exposing the Truth” for only $29.99. It’s quite evident that popular culture reflects an adherence to these false belief systems. The relationship between nonverbal behavior and deception has become big business, but criminal justice professionals need to do better – lives are literally at stake when law enforcement buys into and perpetuates the false belief in investigators’ ability to detect lies based on observing nonverbal cues. In fact, they should be required to use different, modern techniques. As it is, these basic mythologies have clearly stunted American police interrogation training. These myths have also become a built-in legal presumption within court proceedings today. When determining witness credibility, judges’ and jurors’ mistaken popular beliefs can distort those determinations and, in the worst cases, cause miscarriages of justice.

Popular Methods Used by Law Enforcement

If you Google “can police tell when someone is lying,” the results imply the answer is yes, when in actuality, the answer is a resounding no. One article is entitled: “Former Detective Reveals How to Tell When Suspects are Lying.” According to Stacey Dittrich, “The 911 call and initial statements are among the most important pieces of evidence should a case go to trial.” Dittrich is a former Ohio police detective, self-styled crime “expert,” and author. “The entire case can build from those few sentences,” says Dittrich, apparently without any self-awareness of how ridiculous that statement is. In other words, a whole case can be built based on unfounded, initial presumptions.

Some examples Dittrich provides are: calls to 911 that are considered “pre-emptive” (it’s “too soon” to be concerned about a missing loved one); a person is too calm or too hysterical; only innocent people answer with just a direct “yes” or “no”; providing too many details (we can presume not enough details would make Dittrich suspicious as well); lying about small stuff; saying “huh?”; helping with alternative explanations; and similar content. The only reason such belief systems might be reasonable is that they tend to imply reliance on verbal content as opposed to nonverbal behavior, but they are still subject to myth-based confirmation biases and the perceptions of the interviewer.

But the biggest and most obvious problem with the foregoing “expert” techniques for determining whether someone is lying is that they presuppose that all people will behave exactly the same way in a particular situation (violent death of a loved one, accused of a serious crime, witness to a traumatic event, called a liar by cops, etc.) or under certain conditions (extreme stress, terror, emotional trauma, etc.), and that deviations from that presupposed standard behavior are indicative of deception and lying. The underlying assumption is so preposterous as to utterly fail the so-called “giggle test,” but there’s certainly nothing amusing about the fact that so many law enforcement officers actually believe it, even if implicitly.

Marty Tankleff was 17 years old when he found his parents brutally murdered in the family’s Long Island home. Investigators claimed Tankleff was too calm about the ordeal (notice how this presupposes there’s a standard or “correct” way all people should behave in this circumstance, so his deviation from that standard is indicative of guilt). Any claim of innocence was disregarded, and Tankleff confessed, was convicted, and was sent to prison.

Jeffrey Deskovic was 16 years old when his classmate was found strangled. Detectives claimed Deskovic was “too distraught and too eager to help” (Tankleff, above, was too calm for investigators). Clearly, that made Deskovic guilty according to detectives. Deskovic confessed, was found guilty, and was sentenced to prison.

Both boys spent nearly two decades in prison, each wrongfully convicted because of scientifically unsupportable beliefs espoused and actively perpetuated by people like Dittrich. The higher number of wrongful convictions in the U.S. compared to Western European countries may be linked to differing investigative styles. In Western European countries, an “information gathering” technique is used. This technique encourages suspects to speak more, and they typically do, as opposed to the accusatory interviews used in the U.S. The Western European technique tells investigators to “solely concentrate on the speech content,” not on nonverbal cues and behavior.

In the U.S., this “accusatory interview technique” causes suspects to say less and makes the interviewer more dependent on nonverbal cues. “Confrontation is not an effective way of getting truthful information,” observed Shane Sturman, President and CEO of Wicklander-Zulawski & Associates (“WZA”). The WZA organization is one of the “country’s leading law enforcement training organizations.”

It took decades for WZA and similar groups to finally admit that what they teach is not effective. Throughout those decades, WZA taught thousands of investigators the “Reid technique,” created by John E. Reid and Associates. The Reid technique was considered the “Gold Standard” and the “granddaddy” of accusatory, confrontational interview methods. This technique is “guilt presumptive” and “begins with an accusation, a confrontation” focused on nonverbal behaviors. Reid technique training instructs interviewers to “lie about evidence linking [a suspect] to the crime.” A suspect maintaining their innocence is to be “interrupted and redirected to the idea that they’re guilty.” An investigator should convey that “resistance is futile.”

A flyer for a four-day, 36-hour training event held in 2018 at the Austin Regional Intelligence Center described the Reid technique curriculum. Interviewers are taught to consider behaviors reflecting fear or conflict to be “emotional states that would not be considered appropriate from a truthful subject.” Such behaviors include “posture changes,” “grooming,” and “eye contact.” The “interrogation process” is covered in the second half of the training. Training covers such topics as: “beginning with how to initiate the confrontation; develop the interrogational theme; stop denials; overcome objections” and work to “stimulate the admission.” The interviewer is taught to pursue the suspect through “various stages of the interrogation process including the Defiant Stage, the Neutral Stage, and the Acceptance Stage.” It’s difficult to understand how these types of interrogation techniques are designed to extract truthful information rather than a confession, regardless of whether it’s true or false.

To the training groups, there is no cause for concern in those instructions. Reid and Associates President Joseph Buckley relentlessly declared “we don’t interrogate innocent people.” What that means exactly is anyone’s guess. Except that the whole technique was built on a wrongful conviction. It would be ironic, except for the fact that the eponymous technique has been used in the years since that first wrongful conviction to produce countless more false confessions and wrongful convictions.

Chicago policeman John Reid created the technique that would eventually have a “near monopoly” on interrogation training in America. Reid interrogated Darrel Parker in 1955, believing Parker had raped and murdered his own wife. After nine hours of accusatory interrogation, Reid compelled Parker to confess. Parker was innocent. A career criminal named Wesley Peery had committed the crime. Parker was officially exonerated in the summer of 2012, but not until years after Peery had died. Peery’s confession had been given to his attorney but remained hidden due to attorney-client confidentiality. Then in his 80s, Parker said, “At least now I can die in peace.” Parker had finally achieved a legitimate “Acceptance Stage.”

WZA is moving away from traditional methods like the Reid technique. Sturman says it’s a big move for WZA, but the change has been “coming for quite some time” because research reveals “other interrogation styles to be much less risky.” The move was prompted by research done by the High-Value Detainee Interrogation Group (“HIG”), a federally-funded interagency effort created by the Obama Administration. HIG works to improve means of “advancing the science and practice of interrogation.”

A 2016 HIG report declared: “Empirical observations found that police in the U.S. regularly employ poor interview techniques” that incentivize suspects to “provide incorrect information.” The Reid technique isn’t the only method that needs to be abandoned. In 2018, the Northern California Regional Intelligence Center provided training called “Subconscious Communication for Detecting Danger.” This program was developed by former police chief Steven Rhoads, who operates two outfits called Subconscious Communication Training Institute and Institute for Lies.

Richard Leo is a professor of law and psychology at the University of San Francisco School of Law and is an interrogation expert. Leo calls the subconscious communication training “disturbing.” Leo adds, “I mean, anything can be said to be subconscious. So, the cops can just make it up.”

Jeff Kukucka is particularly concerned with subconscious danger detection. “I would be very concerned that the context of those trainings would just exacerbate the implicit, especially racial, biases that already exist,” says Kukucka. He also holds negative views of “New Tools for Detecting Deception” by Renee Ellroy. Ellroy has adopted the self-styled “Eyes for Lies” persona, claiming she is “one of just 50 people” who can spot deception “with exceptional accuracy.” “Eyes for Lies” has taught the full spectrum of law enforcement, including local, state, and federal agencies.

There is just one problem. “It’s completely bogus,” said Kukucka, an assistant professor of psychology and law at Towson University. Kukucka studies forensic confirmation bias, interrogations, and false confessions. “And what’s maybe more alarming about it ... is that this isn’t new. We’ve known for quite a while that this stuff doesn’t work, but it’s still being peddled as if it does.” Leo and Kukucka aren’t alone.

Steven Drizin is co-director of the Center on Wrongful Convictions at Northwestern University’s Pritzker School of Law. Drizin says training that is based on junk pseudoscience “just furthers the deterioration of the relationship between case officers and people in the community.” Drizin argues that the police reform movement must include science-based interrogation methods. “Part of the distrust that you see between law enforcement and minority communities stems from the way suspects, witnesses, victims, and family members are treated by detectives during the course of an investigation,” said Drizin.

The initial modern theoretical conceptualization of nonverbal behavior and deception was presented by Ekman and W.V. Friesen in 1969. Their model expanded psychoanalytical methods of early Darwinian and unconscious theories of emotions. Ekman and Friesen hypothesized that an inability to fully suppress emotions associated with deception – anxiety, fear, or delight – could cause nonverbal cues to be displayed. This was the “leakage hypothesis.” These leakage cues were thought to manifest themselves in nonverbal channels such as the arms or hands, face, and legs or feet. Ekman’s 1985 leakage theory has been “highly influential in the popular media,” spawning network television shows that perpetuate dangerous myths to the law enforcement community and general public alike.

Ekman’s theories have since been heavily criticized in the scientific community. The primary problem: what emotions, exactly, is a liar supposed to feel? And when? Others questioned why a similarly situated truth teller might not experience the same emotions. Ekman conflates emotion and deception. The idea that liars and truth tellers might experience differing cognitive processes dates back to a 1981 paper by M. Zuckerman. Theories that focus on liars’ emotions have been generally rejected since. Despite this understanding, many books, manuals, and training seminars rely on discredited pseudoscientific practices. These pervasive techniques don’t stand up to empirical realities but remain beloved by many in law enforcement.

The Behavior Analysis Interview (“BAI”) is a Reid School of Interrogation method widely linked to miscarriages of justice. BAI consists of 15 questions; the technique relies on the myth that liars and truth tellers reveal different nonverbal responses. The only laboratory experiment to test BAI revealed predicted nonverbal responses were not displayed.

Ekman also claimed that “micro expressions” reveal deceptive emotional information. Micro expressions are “fleeting but complete facial expressions” believed to reveal true emotion that cannot be concealed. This is premised on The Seven Universal Facial Expressions of Emotion – happiness, surprise, contempt, sadness, fear, disgust, and anger – said to be immediate, automatic, unconscious, and pan-cultural. Research results, however, do not validate the reading of micro expressions as a deception detection method. In the one study of its kind, participants were exposed to 700 video fragments of micro expressions. Micro expressions were identified in only 14 of the 700 fragments – 2 percent – and six of those 14 came from people who were actually telling the truth.

“When police are trained in false and misleading stuff, they become more confident, so they become more prone to error,” said Leo. “It’s just this loop, this dangerous loop.”

Victims of This Dangerous Loop

Since 1989, 12 percent of the 2,654 exonerations identified by the National Registry of Exonerations involved a false confession. According to the Innocence Project, over 60 percent of those convicted of a murder whose innocence DNA later proved had confessed. Prior to DNA testing that can provide definitive proof of innocence, wrongfully convicted persons faced profound skepticism from legal commentators and the courts for decades. The very concept raised a puzzle: How could an innocent person convincingly confess to a crime? Data reveals that of 252 people exonerated, nearly 42 percent had falsely confessed to rape or murder.

Researchers revealed in 2003 that when innocent people are mistakenly believed to be guilty, “an interrogation style that is even more coercive than those experienced by guilty suspects can occur.” Disbelieving investigators won’t accept an innocent suspect’s denials and are “inclined to double their efforts to elicit a confession.” Many of those suspects are juveniles, mentally ill, mentally disabled, or borderline mentally disabled – sometimes more than one. This is a recurring theme in serious, high-profile cases. A need for an arrest and conviction rips constitutional protections asunder.

Truscott was only 14 years old when accused of rape and murder. Deskovic was only 16 years old when he was accused of rape and murder. Tankleff was only 17 years old when he was accused of murder. Juan Rivera was 19 years old and a former special education student when he was accused of raping and murdering 11-year-old Holly Staker. Nichole Harris is a Black woman who was 23 years old when she was accused of murdering her son. Gary Gauger was 41 years old when he was accused of murdering his elderly parents.

Deskovic confessed after six hours of interrogation, three polygraph sessions (interrogators lied and told Deskovic he had failed the tests, showing there’s nothing a suspect can do to satisfy interrogators once they target the suspect as the perpetrator other than confess), and extensive questioning. That was when Deskovic began to believe he might actually be guilty. DNA evidence known before trial excluded him. The confession sealed his fate. The DNA would eventually exonerate Deskovic in 2006.

Tankleff was told by detectives that his father “had awakened at the hospital and identified” Tankleff as his attacker. It wasn’t true. Tankleff’s father never regained consciousness before dying. This false statement compelled Tankleff to produce a written “narrative,” which he refused to sign. This unsigned narrative was used to convict Tankleff. The real killer, Jerry Steuerman, had been identified to police by Tankleff prior to trial. In 2008, the charges were dismissed, and as of 2020, Tankleff is a lawyer in New York.

Rivera was interrogated for four days. At 3:00 a.m. on the fourth day, a typed confession was signed by Rivera, who was in a padded room, on the floor in a fetal position, and pulling hair from his head. The so-called confession was “so riddled with incorrect and implausible information” that the State’s Attorney forced detectives to “cure” the inconsistencies. A couple of hours later, “Rivera signed the second confession, which contained a plausible account of the crime.” DNA had excluded Rivera prior to trial, too. And Rivera had been on home confinement with an ankle monitor the day of the crime. Rivera was convicted after three separate trials. It was 20 years before Rivera was freed.

Harris was relentlessly questioned for 27 hours while being “threatened, pushed, called names, and denied food, water, and the use of a bathroom.” Harris’ son, Jacquari, was found with an elastic band around his neck. Harris confessed to strangling the boy with a telephone cord. When detectives realized that didn’t fit the evidence, Harris was prompted to provide a modified confession wherein she had used an elastic band. At trial, Harris testified that the confession was “false and the product of a lengthy and coercive interrogation.” Harris was found guilty and spent eight years in prison for what had actually been a terrible accident. Jacquari had a habit of “playing Spiderman” by wrapping the elastic band around his neck.

Gauger was interrogated all night. The police said Gauger confessed. Gauger claimed any statement was a “hypothetical,” one based on a police theory that Gauger had experienced an “alcoholic blackout.” Only then had Gauger speculated about the crime. Police also lied about a failed polygraph test. Gauger was pardoned in 2002, nine years after the crimes were committed. “Until this happened, I really believed in the criminal justice system,” said Gauger, echoing the sentiment of most who haven’t been subject to the dark arts of police interrogations in America.

Because a “confession” trumps all other evidence – even exculpatory DNA evidence – despite confessions being among the most unreliable forms of evidence, the injustice of an abusive interrogation is often not remediated in the courtroom. Even absent a confession, the psychological folklore of nonverbal cues can influence judges’ and juries’ credibility determinations. This can be significant because “credibility is an issue that pervades most trials, and at its broadest may amount to a decision of guilt or innocence.” And while judges are legally authorized to use demeanor to assess witness credibility, “evidence-based workshops or seminars to mitigate the impact of misconceptions about nonverbal cues to deception are not mandatory.”

Jurors are left to their own terribly flawed beliefs as well. Technological or expert assistance to aid in lie detection is generally barred in U.S. Courts. We require witnesses to appear in person, and the juries are the “sole judges” of a witness’ credibility. Jurors are instructed to evaluate a witness’ “demeanor upon the stand” and “manner of testifying” when judging truthfulness. According to this belief system, “lay judgement solves the legal problem of deception” because “lie detecting is what our juries do best,” which, of course, is false.

Each of the wrongful convictions discussed involved police interrogators who “knew” they had the correct suspect, and they were completely wrong. They refused pleas of innocence and used coercive interrogation methods and lies about nonexistent events or evidence to secure a false confession. Unfortunately, such techniques and methods are routinely sanctioned by the courts. These nonverbal behavior-based methods are dangerously unreliable psychological interrogation techniques, and resulting confessions are often either “coerced compliant” or “stress compliant.” The beleaguered person just wants the ordeal to end and will leap at the offer of any alternative available, including “realizing” their guilt.

Research has found that there are three primary “interrogation errors” in U.S. methods. First is the “misclassification error,” where an innocent person is presumed guilty. There is a double harm in this: an innocent person is accused, and a guilty one is free to roam and victimize. The second is a “coercion error.” This is where the investigator’s lies about evidence, failed polygraph exams, or promises of leniency are used to “stimulate the admission” during the Denial Stage.

The Supreme Court of Hawai’i considered the question of “whether a deliberate falsehood regarding polygraph results impermissibly taints a confession.” State v. Matsumoto, 452 P.3d 310 (Haw. 2019). Keith T. Matsumoto was arrested for allegedly inappropriately touching a teenage girl during a tournament at a local high school. Matsumoto denied the charges and agreed to a polygraph. The detective told Matsumoto he had failed the polygraph. Matsumoto then “confessed,” stating he might have accidentally touched the girl. At trial, the polygraph was not discussed, but the confession resulted in Matsumoto’s conviction. The Supreme Court unanimously concluded that the police’s “deliberate falsehood was an extrinsic falsehood that was coercive per se.” Matsumoto’s conviction was vacated, and the case remanded.

The third error is the “contamination error,” where police shape statements and add details to make the confession more plausible or persuasive, especially to fit the known facts of the case. Contamination errors are compounded by suggestibility and the “misinformation effect.” High-pressure interview techniques increase suggestibility – a suspect’s vulnerability to inaccurate or misleading information to which they are exposed, inadvertently or deliberately. Suspects begin to incorporate that information into their responses, wanting to appear cooperative or innocent. The young, developmentally disabled, and/or mentally ill are most susceptible to such abusive tactics.

The misinformation effect refers to the creation of false memories after being exposed to misleading information and repeated questioning. Research has shown that memory is quite malleable, even when telling the truth. This effect “can cause people to falsely believe that they saw details that were only suggested to them.” The result is that “original memory traces after exposure to misinformation” often become inaccessible. The lie becomes the truth.

So many mistakes must be made by so many people in the criminal justice system at every level to produce a wrongful conviction – implying that both active and passive acts and omissions must frequently occur. The reality is disturbing and beyond the scope of this article.

It is an incontrovertible fact that accusatory investigative techniques lead to injustice in the form of false confessions and wrongful convictions. No one should be aggressively questioned by law enforcement based on speculative assumptions and especially interrogators’ wholly unjustified, inflated, and false belief in their own ability to determine if a suspect is lying or being deceptive based on nonverbal cues.

Solving the Problem

Researchers are developing proactive strategies. “The view now is that the interaction between deceiver and observer is a strategic interplay,” said Hartwig. A growing body of evidence demonstrates that the “success of unmasking a deceptive interaction relies more on the performance of the liar than on that of the lie detector.” No single cue is diagnostic of deception on its own, but a “diagnostic pattern will arise when a combination of cues is taken into account.”

Nonverbal behavior analysis has its place, but it cannot be used in isolation. “A lot of research is flying in the face of law enforcement training and common beliefs,” says Christian Meissner, Ph.D., a professor of psychology at Iowa State University. Meissner adds, “This research has enormous potential to revolutionize law enforcement, military, and private sector investigations.”

There has been a reclassification of theories on nonverbal behavior and deception. Hartwig and fellow psychologists Aldert Vrij, of the University of Portsmouth, and Pär Anders Granhag, of the University of Gothenburg, examined these theories in “Reading Lies: Nonverbal Communication and Deception.”

Their review surveys a field that has undergone significant theoretical development, grouping current theories into “mental process” theories and “social psychology” theories. The common idea is that the most useful way to understand a liar’s overt behavior is to examine the internal processes at work while a deception is being constructed.

Researchers are seeking to understand how and why lying is more cognitively taxing than telling the truth. If the suspect is lying, that cognitive effort may be manifested in nonverbal behaviors, but what the suspect says remains the most important component of this theory. Liars aren’t simply telling a story; they must make a convincing impression. Vrij says, “If the interviewer makes the interview more difficult, it makes the already difficult task of lying even harder.”

Because they are telling the truth as it happened, truth tellers expect their innocence to become apparent. Liars, by contrast, are likely to feel their credibility is in jeopardy and will feel the need to appear believable. Researchers believe “specific interview protocols are required for clear cases to emerge.” Active deception detection requires three things: gathering information to fact-check the content of the communication, strategically prompting cues to deception, and encouraging admissions while discouraging continued deceit. That, in turn, requires fact-gathering, careful listening, asking verifiable questions, and collaboration with other involved professionals.

There are many methods and techniques outside those currently utilized in U.S. accusatory interrogation practices. These include: Comparable Truth Baseline (“CTB”); Strategic Use of Evidence (“SUE”); Cognitive Credibility Assessment (“CCA”); Assessment Criteria Indicative of Deception (“ACID”); Criteria-Based Content Analysis (“CBCA”); Reality Monitoring (“RM”); Scientific Content Analysis (“SCAN”); Strategic Questioning (“SQ”); Statement Validity Assessments (“SVA”); Voice Stress Analysis (“VSA”); and PEACE (Preparation and Planning, Engage and Explain, Account, Closure, and Evaluate).

Skilled liars can evade the detection methods in current models. Such liars embed lies within truths rather than telling blatant, entirely untruthful stories, and they provide information that cannot be verified. They are often not nervous, either, even in high-stakes interviews. Methods like PEACE are quite simple: lying and body language are recognized as having nothing to do with each other, and pressing for more and more detail will eventually cause a fabricated account to break down. PEACE helps determine which parts of an account are verifiable and which are not. There is no substitute for a thorough and competent investigation, but interrogations should incorporate these methods and techniques. Some appear more promising than others.

SVA originated in Germany and Sweden and was originally intended to assess the credibility of child witnesses in sexual offense trials. Its core phase is a CBCA, which scores a statement against 19 criteria, including: mentions of time and space, reproduction of conversation, recall of interactions, unexpected complications, and accounts of mental state. Such criteria are presumed to appear more often in truthful statements. Liars describe “fewer reproductions of conversations” and are less likely to make spontaneous corrections to a story. This resembles the RM criteria, under which lies contain fewer perceptual, spatial, or temporal details and are generally less plausible stories. Deception detection accuracy rates using these methods are found to be around 70 percent.

The SUE technique takes advantage of the “liar’s dilemma.” According to Ray Bull, Ph.D., a professor of criminal investigation, “They have to make up a story to account for the time of wrongdoing, but they can’t be sure what evidence the interviewer has against them.” Encouraging interviewees to keep talking while slowly revealing evidence allows a guilty suspect to expose his or her guilty knowledge. The guilty employ avoidance or denial strategies that truth tellers normally do not, though such observations are anecdotal: responses vary with the type of lie, the amount of time to prepare, the interviewer’s strategy, and the liar’s confidence. The technique has demonstrated reliability nearly 70 percent of the time. “We are talking significant improvements in accuracy rates,” declared Hartwig.

Information-gathering interview methods are effective because they produce a greater quantity of detail, allowing verbal cues to be analyzed. Expansive verbal and written statements also allow analysis of word counts and word choices. “If liars plan what they are going to say, they will have a larger quantity of words,” said Burgoon. “But if liars have to answer on the spot, they will say less relative to truth tellers.”

Many of the developing methods utilize Strategic Questioning. Unexpected questions surprise liars relying on prepared lies, leaving them “floundering for a response” or contradicting themselves. Vrij argues that truth tellers will provide more information if encouraged; liars can’t or won’t. “They might not have the imagination to come up with more or they may be reluctant to say more for fear they will get caught,” says Vrij. How the Fifth Amendment plays into these methods isn’t considered. Until law enforcement begins every investigation with a presumption of innocence rather than a presumption of guilt, it is unlikely any method will fully protect the innocent.

Most important is the training investigators receive, and feedback is imperative. If investigators’ beliefs are wrong, delayed or inadequate feedback on their credibility judgments perpetuates errors and hampers productive investigations. That feedback should be provided in real time or as soon as possible. Without this critical information, erroneous myths, faulty methods, and injustice will continue.

From myths to miscarriages, it is obvious that reforms are needed immediately. As one social psychologist observed, the phenomenon of “belief perseverance” may make change difficult. A perfect example is the U.S. Department of Homeland Security’s Transportation Security Administration (“TSA”). Between 2015 and 2018, 2,251 formal complaints were filed alleging screening based on improper cultural and behavioral stereotypes. Yet the TSA counts its behavioral detection program a success for having prevented three passengers from boarding airplanes in 11 years. “TSA believes behavioral detection provides a critical and effective layer of security within the nation’s transportation system,” said TSA media relations manager R. Carter Langston. Langston failed to mention that Homeland Security undercover agents successfully smuggle fake explosive devices onto airplanes 95 percent of the time. Such obstinacy on the part of any law enforcement agency should be constitutionally unacceptable.

It is well understood that these myths are not reliable. Any reliance on nonverbal cues – whether behavioral, physiological, or neurological – needs to be tempered with improved methods. The problem is compounded by abusive, accusatory interrogation practices: U.S. law enforcement is allowed to lie with impunity to coerce “confessions” and “realized” guilt because investigators believe “innocent people aren’t interrogated” – a premise that is abhorrent and evidences a complete break from reality. Judges and jurors need to be instructed that demeanor alone is not a reliable indicator of credibility. Fixing this will take an enormous focus on training, resources, policy reform, legislative action, and judicial as well as societal awareness.

Hartwig is one of many who have declared that these “fundamentally misguided” beliefs and detection methods should be abandoned. Their continued use will wreak havoc on the lives and liberty of practically everyone ensnared in the U.S. criminal justice system.

Sources: aclu.org; apa.org; assets-us-01.KC-usercontent.com; businessinsider.com; deliverypdf.ssrn.com; frontiersin.org; jcjl.pubpub.org; journals.plos.org; jstar.org; law.justia.com; law.northwestern.edu; law.umich.edu; leb.fbi.gov; maricopa.gov; ncbi.nlm.nih.gov; npr.org; policechiefmagazine.org; researchgate.net; smithsonianmag.com; supreme.justia.com; theappeal.org; theintercept.com; udemy.com  

 

 
