Balancing the need for privacy, security and profit is the defining debate of the internet age, and it affects all of our lives. Rather than being a single cut-and-dried issue, each change to the way privacy and technology intersect needs to be examined on its own merits. Take, for example, Apple’s recent decision to scan every image uploaded to iCloud for child sexual abuse material – on one hand it serves an obviously noble end, but it could also become a ‘backdoor into your private life.’
Those few of us who read all 14,000 words of Facebook’s terms and conditions know that we are signing up to have our data tracked: if you use the service, you can download every record Facebook holds on you, and the sheer quantity and depth of this information is likely to be shocking – multiple gigabytes for each of its 2.89 billion monthly active users. Multiply that by the dozens of services we interact with and the advertising cookies that track our every move online, and you will see how ubiquitous ‘surveillance capitalism’ is. And yet, few of us go without these services.
While many of us go about our lives without giving much thought to the information available about us, others are much more worried about the potential for abuse. VPNs, the Tor network, and ad-blockers are all common tools for taking back control of our digital lives, but they are also used by criminals who want to defraud companies and individuals. A balance needs to be struck between privacy and security – but how?
How much of a problem is cybercrime?
The World Economic Forum estimates the economic cost of cybercrime at $3 trillion worldwide. That is 30 times the $100 billion in damages inflicted each year by natural disasters, 10 times the yearly cost of climate change, and five times what the oil and gas industry earns in a year. If that amount of money were in the legitimate economy it could do an immense amount of good: stopping climate change is estimated to cost $50 trillion over three decades, and ending hunger only $330 billion.
Given its sheer scale, online fraud is a global emergency, and yet not enough is being done. A lack of understanding of the problem is pervasive: individuals are still setting their password to ‘password’ and governments have been slow to make impactful changes. To make matters worse, some software developers have taken reasonable concerns about privacy too far, to the point that they compromise safety and inadvertently create tools that criminals use.
Cybersecurity isn’t surveillance
It is easy to see how the infrastructure created to facilitate surveillance capitalism could be used for purposes other than selling advertising – with Cambridge Analytica, it has already been weaponized. However, fraud prevention is different: it is based on collecting smaller amounts of data for a limited time and using them for a very specific purpose. Anti-fraud companies are only interested in knowing whether a device is part of a fraud ring trying out different stolen cards at scale, and this is done not for commercial purposes but to protect card owners and support online businesses that want to keep their customers safe. For example, we analyse publicly available device information to help online businesses identify risky users and transactions. Any data we collect is anonymous, is not stored for more than a year, is not shared between customers, and is not used to build a global database.
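To make that concrete, here is a minimal sketch of the kind of velocity check a fraud-prevention service might run – the names, thresholds and data layout are illustrative assumptions, not a description of any real product. The idea is simply to flag a device that tries many distinct cards within a short window, while storing only hashed identifiers rather than personal data.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative settings: flag a device that tries more than a few
# distinct cards within one hour. Real systems tune these carefully.
WINDOW = timedelta(hours=1)
MAX_DISTINCT_CARDS = 3

# device_id -> list of (timestamp, hashed_card) attempts
attempts = defaultdict(list)

def record_attempt(device_id: str, hashed_card: str, now: datetime) -> bool:
    """Record a payment attempt; return True if the device looks risky.

    Only a hash of the card is kept, never the card number itself,
    and anything older than the window is discarded.
    """
    history = attempts[device_id]
    history.append((now, hashed_card))
    recent = [(t, c) for t, c in history if now - t <= WINDOW]
    attempts[device_id] = recent
    distinct_cards = {c for _, c in recent}
    return len(distinct_cards) > MAX_DISTINCT_CARDS

# A device cycling through stolen cards quickly trips the check,
# while a customer retrying their own card does not.
now = datetime.now()
risky = False
for card_hash in ("c1", "c2", "c3", "c4"):
    risky = record_attempt("device-123", card_hash, now)
print(risky)  # True: four distinct cards from one device within the window
```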
There is an enormous gulf between this and the all-encompassing surveillance that is the business model of many of the world’s biggest companies. This is why it is such a shame that some well-meaning organisations have become overzealous about protecting privacy in ways that end up helping criminals. The Brave web browser, for example, has a mission statement that we agree with wholeheartedly: “As a user, access to your web activity and data is sold to the highest bidder. Internet giants grow rich, while publishers go out of business. And the entire system is rife with ad fraud.” However, in addition to blocking the tracking used by advertisers, the browser also blocks device fingerprinting, which is one of the methods used to detect fraud. Fingerprinting can be used for mass data collection in tracking, but it can equally be used to protect security in fraud prevention. Blocking all of it is therefore bad for end users, as it can easily lead to genuine transactions being rejected by mistake.
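For readers unfamiliar with the technique, device fingerprinting at its simplest combines many small, individually harmless browser and device attributes into a single stable identifier. Below is a minimal sketch; the attribute names are assumptions for illustration, not any vendor’s actual schema.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a set of device attributes into one anonymous identifier."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The same combination of attributes always yields the same fingerprint.
# That stability is what lets advertisers track devices across sites, but
# it is also what lets fraud teams spot one device posing as many customers.
print(device_fingerprint({
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen_resolution": "1920x1080",
    "timezone": "Europe/London",
    "language": "en-GB",
}))
```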
When privacy tools are exploited by online criminals, it becomes harder for those trying to reduce or prevent online fraud, and companies and consumers around the world lose out – even though anti-fraud data collection does not affect anyone’s privacy in any real way. The key point is that a tool’s purpose should be considered before it is blocked outright.
We need a better conversation
We hear about obvious cases of overreach and outright criminality online every day, whether that is proposals to eliminate online anonymity in the UK or the use of Pegasus spyware to target journalists and activists, as revealed by the Pegasus Project. These are easy to see as unequivocally wrong, but for most of us, living digital lives means constant compromises between what we want to do and what we are willing to share. Rather than making a binary choice between ‘privacy’ and ‘freedom’, we all negotiate whether the services we use are worth the risk.
Companies that create software to protect ordinary people online need a nuanced view of what is and isn’t a breach of privacy, unless they want their software to be used by and associated with criminals. We all have to use the internet together, so it is vital that companies offering privacy protection do not adopt an absolutist position, but instead stay open to legitimate uses of solutions that protect users against fraud.