
Startup Surveils Communities of Color for Police Using Twitter

Dataminr’s early backers included Twitter and the CIA, and its primary customers include domestic law enforcement agencies such as the FBI. In a presentation to the FBI, the company stated that its goal is “to integrate all publicly available data signals to create the dominant information discovery platform.” In human-speak, this means surveilling social media for whatever information its customers want. When those customers are police, that means information about crime, suspected gang activity, and other “threats.”

Dataminr has stated publicly that “97% of our alerts are generated purely by AI without any human involvement,” and it “rejects in the strongest possible terms the suggestion that its news alerts are in any way related to the race or ethnicity of social media users.” But anonymous sources interviewed by The Intercept told a different story.

Sources stated that the alert stream provided to public sector clients such as police departments was assembled by “Domain Experts,” a fancy title for untrained staff who comb through thousands of tweets each day, run keyword searches to locate “threat-related” tweets, and convert them into alerts pushed to police officers’ iPhones and laptops. According to one source, the staff consisted of “white people, tasked with interpreting language from communities that we were not familiar with,” coached by predominantly white former law enforcement officers who themselves “had no experience from these communities where gangs might be prevalent.”

This becomes especially troublesome in the context of what we know about over-policing in communities of color. “Areas that were predominantly considered more white” were overlooked, while the focus fell on minority communities, said one source. “It was never targeted towards other areas in the city, it was poor, minority populated areas.” For instance, “Minneapolis was more focused on urban areas downtown, but weren’t focusing on Paisley Park,” always “downtown areas, with projects.”

“Dataminr is in a lot of ways regurgitating whatever the Domain Experts believe people want to see or hear,” one source said, those people in this case being the police. But the system isn’t merely overlooking crime in other areas; it is responding to searches by users who are looking for crime in places where, because of racial stereotypes, they already expect to find it.

“We would make keyword-based streams [for police] with biased keywords, then law enforcement would tweet about crimes, then we would pick up those tweets.” This feedback loop merely reinforces already racially motivated over-policing.

Babe Howell, a CUNY School of Law professor and criminal justice scholar, called this method of threat identification “far worse than useless.” Howell explained that “adolescents experiment with different kinds of personalities,” drawing on the “artistic expression, the musical expression, the posturing and bragging and representations of masculinities in marginalized communities.”

Howell cautions against using social media for threat analysis, not just because even trained ethnographers would be hard-pressed to distinguish such posturing from true threats, but because this information can cause serious harm to the people identified as threats, especially when they are labeled as possibly gang-affiliated.

Information has been coming to light about how police use databases of suspected gang members, which often include people added on little (or no) evidence. Reform groups have been fighting to at least make the use of these databases transparent if they cannot eliminate the databases altogether.

Howell remarked that “if someone is accused of being a gang member on the street they will be policed with heightened levels of tension, often resulting in excessive force. In the criminal justice system they’ll be denied bail, speedy trial rights, typical due process rights, because they’re seen as more of a threat.”

Information from Dataminr’s alert system has been specifically tailored for surveillance of supposed gang activity, and these “Domain Experts” feed data directly to police gang units despite lacking any training in gang identification. “There’s a great deal of latitude in determining [gang membership], it wasn’t like other kinds of content, it was far more nebulous,” said one source. Asked whether this information might target children, the source added, “We had no idea how old they were.”

That Twitter turns a blind eye toward Dataminr’s activities shows the company does not take its own Terms of Service (“TOS”) seriously. The TOS is supposed to prohibit surveillance and the derivation of information “about a Twitter user’s ... [a]lleged or actual commission of a crime.” Yet despite auditing Dataminr’s tools, Twitter officials have stated they’ve “not seen any evidence that [Dataminr is] in violation of our policies.”

Forrest Stuart, a sociologist and head of the Stanford Ethnography Lab, said the use of Twitter to infer gang affiliation is “totally terrifying.” He continued, “At a minimum, if they’re gonna continue feeding stuff to Dataminr and stuff to police, don’t they have some kind of responsibility, at least an ethical obligation, to let [users] know that ‘Hey, some of your information is going to cops’?” 

 
