So, a few weeks back, the Washington Post ran a story that had a disturbing tinge running through it. The article centered on a service called Predictim, which scans potential babysitters’ Twitter feeds, Facebook pages, and Instagram stories to provide concerned parents with automated risk ratings for things like drug use and bullying.
Less objective measures like a bad attitude or disrespectfulness were also included in the report — which, really?
It’s certainly understandable that parents want to check out the person watching their child. But this technology presents some new issues that go beyond general workplace spying.
Here’s a little background on Predictim, and a look at what’s looming on the horizon when it comes to new ways to invade people’s space.
How does Predictim work? What’s the deal?
Predictim scans the digital footprint of a prospective babysitter to assess the risk they pose to parents. The app was created by Sal Parsa and Joel Simonoff, who set out to develop an AI solution that generates personality assessments based on a candidate’s digital footprint.
The app uses natural language processing (NLP) to sort through social media platforms, scanning photos, tweets, and captions for risk factors. Parsa and Simonoff argue that traditional background checks and interviews don’t paint the full picture of who a person might be.
With Predictim, the algorithm considers billions of data points and can scan them within a matter of minutes, spitting out a report with predicted traits and behaviors and a brief summary of the candidate’s digital history.
Reports cost $24.99 a pop and come with a risk assessment score ranging from green (not risky) to red (very risky). A small price to pay for identifying potential bullies and abusers who might otherwise slip through the cracks.
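Predictim’s actual model is proprietary, so we can only guess at the mechanics. But to give a sense of how easy this kind of text-based risk scoring is to throw together (and how crude it can be), here’s a purely illustrative toy sketch using a hand-picked keyword list. Every category, keyword, and threshold below is an assumption for demonstration, not anything Predictim has disclosed.

```python
# Purely illustrative: a naive keyword-based "risk" scorer.
# None of these categories, keywords, or thresholds come from Predictim;
# the point is how little it takes to build something that spits out
# a confident-looking color rating.

RISK_KEYWORDS = {
    "bullying": {"loser", "idiot", "hate you"},
    "drug use": {"weed", "blazed", "high af"},
}

def score_posts(posts):
    """Count keyword hits per category and map the total to a color band."""
    hits = {category: 0 for category in RISK_KEYWORDS}
    for post in posts:
        text = post.lower()
        for category, words in RISK_KEYWORDS.items():
            hits[category] += sum(word in text for word in words)
    total = sum(hits.values())
    band = "green" if total == 0 else "yellow" if total <= 2 else "red"
    return {"hits": hits, "band": band}

posts = [
    "Had a great day at the park with the kids!",
    "My brother is such a loser lol",
]
print(score_posts(posts))  # the joking "loser" still counts as a bullying hit
```

Even this toy exposes the core problem: sarcasm, jokes, and context vanish entirely, yet the output still arrives dressed up as a tidy risk rating.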
What are some of the issues?
Critics say that tools like Predictim present some clear dangers, especially given that they’re being used to make decisions about someone’s ability to find work.
The first problem that comes to mind, of course, is bias. AI tools often don’t show their bias until they’ve been deployed and someone starts to pick up on an unfortunate pattern.
Gizmodo’s Brian Merchant decided to experiment with Predictim and his post-mortem highlights some major problems. Merchant says he tested the app using his actual babysitter, an African American woman, and a white male friend. The friend got a better rating than the babysitter, despite the fact that he shares a lot of vulgar content on Twitter. Merchant’s babysitter was flagged for being disrespectful.
Joel Simonoff, the CTO, says that they don’t look at things like race or ethnicity, but we’ve seen other tools develop biases, too. Amazon’s recruitment AI didn’t especially like women and software used by many of the nation’s prisons routinely rates black inmates as more likely to commit a crime in the future.
Another issue is that this tool turns up a lot of personal data. Predictim pulls up addresses, phone numbers, email addresses, and names of relatives: you know, just to ensure that the person running the scan knows, for sure, that they’ve got the right person.
Finally, these recruitment apps are classist. Increasingly, we’re developing tools that disproportionately subject low-wage workers to robot-based scrutiny. While Predictim does need to obtain the babysitter’s consent before processing the report, if the sitter opts out, surprise: they’re not going to get the job.
And Predictim is hardly the only player. Fama scans social media channels for toxic behavior and alerts the bosses, while HireVue analyzes tone and demeanor to predict on-the-job performance; candidates are encouraged to smile at the bot for best results.
Facebook and Instagram blocked the tool
Twitter says that when it became aware of the platform, it blocked the tool as well. A spokesperson said that Twitter prohibits the use of its data and APIs for surveillance purposes.
Still, Predictim isn’t giving up. The founders are currently in talks with sharing-economy companies to provide vetting for rideshare and accommodation-hosting services. We’re sure there are some big names involved.
While it’s heartening that the social media giants have shut this tool out, new vetting platforms keep cropping up. And when you’re flagging things like a bad attitude or evaluating personalities with algorithms, there’s going to be a problem in the power dynamic between employers and those vying for opportunities, from the babysitting gig down the street to gig economy sites and the traditional 9-to-5.
It looks like the media scrutiny generated more blowback than Predictim, uh, predicted. Visit the site now and you’ll see a statement at the top:
“We have been overwhelmed by the interest, press coverage, and input regarding our project. To be honest, this attention came earlier than we expected, and certainly before we had fully launched our contemplated services. We received some very helpful feedback on ways we could make Predictim even better. Clearly, people are hungry for better ways to make decisions in marketplaces where character, reputation, and trustworthiness are important. As a result, we have decided to pause our full launch and put our heads down to focus on evaluating how we offer our service and making changes to address some of the suggestions we received. While we are not offering any services at this time, please stay tuned and check back often for updates: we will be back!”
Ultimately, it doesn’t seem like Predictim and similar apps are going away anytime soon. Even if Facebook and Twitter keep blocking the platform from scraping private data, there’s always the chance the companies reach a compromise if there’s enough money on the table.
The fact that enough people are willing to use this service to spy on the neighborhood teen is a bit troubling, and it’s a sure sign that we need to start fighting for our privacy before it’s too late.