
We need new regulations to protect us from Facebook and Equifax

The theft of an estimated 143 million Americans’ personal details in the breach of consumer-credit reporting agency Equifax and the Russian manipulation of U.S. elections through Facebook had one thing in common: both were possible in part because our personal data has no legal protections. Though the U.S. Constitution provides Americans with privacy rights and freedoms, it doesn’t protect us from modern-day scavengers who obtain information about us and use it against us. Our privacy laws were designed in the days of the telegraph and are badly in need of modernization. Much damage has already been done to our finances, privacy, and democracy, and worse lies ahead.

Credit bureaus have long been gathering information about our earnings, spending habits, and loan-repayment histories in order to determine our credit-worthiness. Tech companies have taken this one step further, monitoring our web-surfing habits, emails, and phone calls. Via social media, we have volunteered information on our friends and our likes and dislikes and shared family photographs. Our smartphones know everywhere we go and can keep track of our health and emotions. Smart TVs, internet-enabled toys, and voice-controlled bots are monitoring what we do in our homes — and often are recording it.

In the land-grab for data, there were no clear regulations about who owned what, so tech companies staked claims to everything. Facebook required its users to grant it “a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content” they posted to the site. It effectively required them to give it the right to use their family photos and videos for marketing purposes and to resell them to anybody. American laws are so inadequate that such companies are not even required to tell consumers what information they are gathering and how they will use it.

Unlike manufacturers, which are liable for the safety of their products, tech companies gathering our data have practically no liability for compromising it; they can protect it as they choose and sell it to whomever they want, regardless of how the third party will use it. No wonder Equifax had such lax security, or that Russians and hate groups were able to target the susceptible with misinformation on Facebook.

The FTC could address this problem by requiring data brokers to provide industrial-strength security. University of California, Berkeley law professor Pamela Samuelson says the FTC has “statutory authority to regulate unfair and deceptive practices [and] can act on that authority by initiating claims against those who fail to maintain adequate security.” She notes that the FTC has used these powers before, nudging firms to adopt privacy and security policies. And when firms failed to comply with their own policies, the FTC treated that as an unfair and deceptive practice.

Such a move by the FTC would level the playing field by making data brokers as responsible for their actions as most product manufacturers are for theirs. We hold our car manufacturers responsible for the safety of their products; why shouldn’t the tech companies bear similar responsibility?

We could also see new legislation enacted to solve the problem. But Samuelson says data holders would vigorously fight such legislation. And though it would be a good step forward, given the pace of change we’re seeing, it would only solve yesterday’s problems.

The falling costs of DNA sequencing will soon make it as common as blood tests, and the tech companies that today ask us to upload our photos will tomorrow ask us to upload our genomic information. Technology will be able to understand our mental state and emotions. These data will encompass everything that differentiates us as human beings, including our genetics and psychology. Whereas a bad credit report might cost us a loan, corporate use of our genetic data could affect our jobs and livelihoods. We could be singled out for having genetic predispositions to crime or disease and find ourselves discriminated against in new ways.

The Genetic Information Nondiscrimination Act of 2008 prohibits the use of genetic information in health insurance and employment. But it provides no protection from discrimination in such matters as long-term care, disability, housing, and life insurance, and it places few limits on commercial use. There are no laws to stop companies from using aggregated genomic data in the same way lending companies and employers use social media data, or to prevent marketers from targeting ads at people with genetic defects.

Some states have begun passing laws to say that your DNA data is your property; but we need federal laws to stipulate that we own all of our own data, even if it takes an amendment to the Constitution. The right to decide what information we want to share and the right to know how it is being used are fundamental human rights in this era of advancing technologies.

Harvard Law School professor Lawrence Lessig has argued that privacy should be protected via property rights rather than via liability rules, which don’t prevent someone from taking your data without your consent. “When you have a property right, before someone takes your property they must negotiate with you about how much it is worth,” argues Lessig. Imagine a website that allowed you to manage all of your data, including the data generated by the devices in your house, and to charge interested companies license fees for its use. Property rights would make that possible.

Daniel J. Solove, Professor of Law at George Washington University Law School, has reservations about protecting privacy as a form of property right, because the “market approach has difficulty assigning the proper value to personal information.” He worries that an individual may not see the danger of giving out bits of information in different contexts but that the information could be aggregated and become invasive when combined with other information. “It is the totality of information about a person and how it is used that poses the greatest threat to privacy,” he says.

It isn’t going to be easy to develop the new systems for maintaining control of personal information, but it is imperative that we start discussing solutions. As Thomas Jefferson said in 1816, “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.”

Vivek Wadhwa is Distinguished Fellow at Carnegie Mellon University Engineering at Silicon Valley. His new book, The Driver in the Driverless Car: How Our Technology Choices Will Create the Future, discusses the choices we must make to build a great future.
