Manifest Digital Destiny: Your Data Should Be Fair Game

Harry Sauers · Jan 12

Over the past few years, we’ve been under a barrage of controversy stemming from the use of big data.

Despite the potential for abuse (see Cambridge Analytica, or as imagined in Minority Report), the availability of big data has improved our lives immeasurably and will continue to do so at an ever-accelerating rate, with misuse kept relatively contained by effective regulation and sound scientific principles.

If anything, large data sets ought to be made more widely accessible at little to no cost: doing so would spur innovation, save lives, and create economic opportunity for everyone.

Our first case: a neural network accurately predicted systemic damage in lupus patients (https://www.ncbi.nlm.nih.gov/pubmed/28329014).

This is invaluable, as chronic damage prediction and prevention is a “major goal” in lupus treatment.

This would not have been possible without access to large, reliable scientific datasets; restricting that access would jeopardize the quality of life of the nearly five million people worldwide who live with lupus.

Though its use may put freedom at risk in countries without protections like the Fourth and Fifth Amendments, law enforcement personnel in the U.S. have leveraged machine learning and big data to open doors to early intervention for at-risk individuals and more effective apprehension in dangerous, unsolved crimes.

Fortunately, artificial intelligence is generally not admissible as evidence in American courts, and these tools have been used to uphold the rule of law without trampling the rights of either the innocent or the accused.

However, we ought to remain vigilant: this restraint exists only because we have strong Constitutional protections against presumptive guilt and unreasonable search and seizure, not because governments are truly benevolent.

Perhaps not quite as selfless as the above examples, big data also enables companies (and nonprofits) to target groups of people with a certain trait, interest, or behavior in order to more efficiently promote their product or service.

For the consumer, this means better access to products you care about and that can measurably improve your life, without the barrage of untargeted ads for products that don’t even apply to you — unless you enjoy the flashy-but-worthless infomercials and Viagra disclaimers of cable television.

Opening up these datasets would reduce the monopoly power of companies like Facebook (which are prone to abusing that power), democratizing it and allowing businesses to choose the most effective medium for their message without sacrificing data quality.

This principle applies in politics as well: the entire Cambridge Analytica scandal could have been averted if data had been open to the public, rather than sold and exploited for profit and political gain.

Furthermore, easily available (but anonymous) political data could enable more prospective candidates to better understand the issues that constituents care about — reducing corruption and forcing more long-time elites like Joe Crowley out of office in favor of candidates who both understand and care about their supporters.

Even in the financial industry, big data has immense potential to improve the lives of many families and individuals: artificial intelligence can be used to analyze and make decisions on large amounts of loan data (like from Experian or Lending Club).

This can help prevent or mitigate crises (like 2008) and identify individuals who are not traditionally creditworthy, providing additional liquidity to them, lowering interest rates, and enabling many of these individuals to buy a home, pay off student debt, or pursue better medical care that they would otherwise be priced out of.
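As an illustration only: the feature names and weights below are invented, a toy stand-in for the kind of model a lender might train on open loan data to score a "thin-file" applicant whom a traditional credit bureau would overlook.

```python
import math

# Hypothetical, hand-set weights standing in for a model trained on loan data.
# Positive weights reward alternative-data signals; debt-to-income counts against.
WEIGHTS = {
    "on_time_rent_payments": 0.8,   # years of on-time rent history
    "utility_payment_streak": 0.5,  # years of on-time utility payments
    "debt_to_income": -2.0,         # fraction of income going to debt
}
BIAS = -0.5

def approval_probability(applicant: dict) -> float:
    """Logistic score over alternative-data features a traditional bureau ignores."""
    z = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# An applicant with no conventional credit file but a solid payment record.
thin_file = {"on_time_rent_payments": 2.0,
             "utility_payment_streak": 1.5,
             "debt_to_income": 0.3}
print(approval_probability(thin_file))
```

In a real system the weights would be learned from historical repayment outcomes rather than set by hand, but the shape of the decision is the same: broader data lets the model say yes to people a thin credit file would exclude.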

These are just a few of many positive potential uses of data, yet many individuals and special interest groups want to further restrict and privatize datasets, which only worsens the real problem: it encourages malicious actors to hack, breach, or deceive users in order to gain access to monopolized data.

Generally speaking, your data — when anonymized — can’t hurt you, but restrictions on it can and do hurt people across the globe.

A very basic law requiring the anonymization of big data (as the CDC’s Youth Risk Behavior Survey already practices) would all but eliminate valid concerns over abuse, while making the data even more accurate and scientifically valid, since users will be less afraid to disclose it.

It follows that it is not only incorrect but morally wrong to substantially restrict the collection and use of anonymized big data: doing so withholds medical advances, reduced crime and poverty, better products, more honesty in politics, and better access to credit from ourselves and future generations.

The abuses of such data are not only few and far between but also relatively small in their real impact on individuals and communities; damage is done only when data is not properly anonymized and sensitive or identifying information (think Social Security numbers, credit cards, or passwords) is not properly protected.

We instead ought to encourage greater responsible use of big data by everyone from university students to startups to conglomerates, and to pursue ever-better research techniques for using data to improve our lives.

I, for one, hope that more and more companies and universities open up their private data to the public with the goal of spurring new discoveries, advancements, and improvements across communities, industries, and national borders that will help us all live better lives.
