As we learned during the general election, political campaigns now routinely involve paid social advertising utilising a variety of data to identify likely supporters or swing voters.

Such ‘social scoring’, whether done manually or by an algorithm, is concerning to some. Professor John Rust of Cambridge University’s Psychometric Centre told the Guardian: “The danger of not having regulation around the sort of data you can get from Facebook and elsewhere is clear. With this, a computer can actually do psychology; it can predict and potentially control human behaviour.”

He finds it “incredibly dangerous” that people’s “attitudes are being changed behind their backs”.

User profiling is nothing new. Dynamic pricing and credit rating, for example, may strike many as similarly unfair – treating your mobile device or computer as a proxy for wealth, or your credit score as an indicator of financial strain.

Consent is becoming a thorny issue and marketers everywhere need to understand what measures their data protection officers are putting into place.

Social scoring using public social media posts has been fair game for a while too, with employers and landlords checking for anything that may set off alarm bells. Some companies will even vet private posts and messages on landlords’ behalf, taking advantage of the competitive property market to ask that prospective tenants sign up and hand over access.

Such software cannot legally use factors such as age and pregnancy to determine suitability, but, as an article in Gawker explained, it can estimate a tenant’s “extroversion, neuroticism, openness, agreeableness, and conscientiousness”.
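To make the idea concrete, here is a deliberately naive sketch of trait scoring from text. The keyword lists and scoring rule are invented for illustration only – real tools use trained statistical models, and this is not any vendor’s actual method.

```python
# Toy illustration of Big Five trait scoring from social posts.
# The keyword lists below are invented for demonstration purposes;
# production systems use trained models, not hand-written word lists.
TRAIT_KEYWORDS = {
    "extroversion": {"party", "friends", "fun", "excited"},
    "neuroticism": {"worried", "stressed", "anxious", "upset"},
    "openness": {"art", "travel", "ideas", "curious"},
    "agreeableness": {"thanks", "love", "help", "happy"},
    "conscientiousness": {"plan", "work", "finish", "schedule"},
}

def score_traits(posts):
    """Return a 0-1 score per trait: the fraction of posts
    containing at least one of that trait's keywords."""
    scores = {}
    for trait, keywords in TRAIT_KEYWORDS.items():
        hits = sum(1 for post in posts
                   if keywords & set(post.lower().split()))
        scores[trait] = hits / len(posts) if posts else 0.0
    return scores

posts = [
    "Great party with friends tonight!",
    "So stressed about the rent this month",
    "Need to plan my work schedule",
]
print(score_traits(posts))
```

Even this crude approach shows why regulators worry: a handful of innocuous posts yields a numeric personality profile the subject never agreed to, and a trained model only makes the inference more accurate, not more consensual.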

Informed consent and the ‘black box’

The use of machine learning is ramping up quickly. IBM Watson offers a suite of off-the-shelf functionality, Google provides a whole range of APIs, martech vendors are adding machine-learning features, and client-side companies are looking to employ data…