
The downside to mass data storage in the cloud

The ability to access Dropcam video footage in the cloud is indicative of a broader trend in cloud computing that is eating away at privacy.

The cloud can be an enormously cost-effective way to add storage and computing muscle, and also, sadly, a way to add further misery for those seeking privacy, or who just want to be left alone. It’s rare to see organizations stand up and shout, “We’ll not give your data to anyone!” or “The life of all stored data, except opt-in assets you want us to store, is always 90 days!” or “Yes, we can determine with absolute certainty that your data has been erased to protect you and your identity.”

The cloud, in some warrens, has become a storage ground for the various factories of “big data,” whose goal is generally to sell things to consumers and businesses. Correlating facts is huge: ask Target, whose insight into spotting pregnancies from shopping patterns helped it capture a nicely profitable market among expectant and new mothers. Smart, you say. But there is a downside.

Striking while the iron is hot is a great idea. In practice, that means harvesting information on your searches to be correlated into the ads you see at the next site you visit. Facebook and Amazon are famous for this, and it’s a huge part of Google’s business model. Google’s purchase of Nest last year, whose thermostats gleefully rat out your utility-use patterns, also meant the acquisition of Dropcam.
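For readers who want the mechanics, here is a minimal sketch, in plain Python, of how third-party cookie retargeting broadly works. The tracker domain, cookie name, and data structures are all invented for illustration; real ad networks are vastly more elaborate, but the basic flow (recognize the visitor on site A, target them on site B) is the same.

```python
# Illustrative simulation of third-party cookie retargeting.
# "tracker.example" and all identifiers are hypothetical; this shows
# the flow, not any real ad network's implementation.

import uuid

profiles = {}  # the ad network's server-side store: cookie ID -> interests

def tracker_pixel(cookie_jar, page_topic):
    """Runs when a page embeds the tracker: set a cookie on first
    sight, then record what the visitor was looking at."""
    visitor_id = cookie_jar.setdefault("tracker.example", str(uuid.uuid4()))
    profiles.setdefault(visitor_id, []).append(page_topic)

def choose_ad(cookie_jar):
    """Runs on any other site embedding the same tracker: the cookie
    rides along with the request, so earlier interests can be targeted."""
    interests = profiles.get(cookie_jar.get("tracker.example"), [])
    return f"ad for {interests[-1]}" if interests else "generic ad"

cookie_jar = {}                             # one user's browser cookies
tracker_pixel(cookie_jar, "running shoes")  # the user browses a shoe store
print(choose_ad(cookie_jar))                # later, on a news site: "ad for running shoes"
```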

As ace reporter Sharon Fisher reported at TechTarget, Dropcam’s users allow their cameras to send footage into Dropcam’s cloud, where it is archived seemingly indefinitely, to the delight of users, police armed with warrants, and security-monitoring staff, who can view the surveillance results at will from any reasonable IP address. Some users reportedly monitor Airbnb rentals (shouldn’t they disclose this?), and apparently users forget there’s a camera on and do, well, silly things they may not want captured on video.

Google is storing this sort of info, Amazon will be listening through Echo, and who knows what Siri knows but isn’t saying. This amounts to a heap of very personal information. It’s as though these were robots whose knowledge base lived inside the physical unit we see on premises, but it doesn’t: it lives in the cloud, where it is not only hackable but perhaps being used to analyze us, to sell us something, or, maybe worse, to refuse to sell us something or to be used against us in a court of law.

Is this data tagged so someone knows to kill it? Is there a metadata tag saying this file or this data block expires on April 19, 2017? Often it’s tied to an account. Does this data get reused somehow? Are video and audio conversations scrubbed for keywords? Much is up to the user agreement, and what happens if you’re, say, a medical provider amassing large quantities of personal medical data? Can that be used? Yes, an attorney would say, “Stop right here, and let’s disambiguate these questions.” Clear as mud.
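To make the metadata question concrete: an expiration tag could be as simple as a date field attached to each stored object, plus a scheduled job that honors it. Below is a minimal sketch under that assumption; the expires_on field and the store layout are invented, which is rather the point, since no common scheme exists.

```python
# A minimal sketch of expiry metadata on stored objects, assuming a
# simple key/value store. The "expires_on" field is hypothetical; the
# open question is whether providers attach anything like it at all.

from datetime import date

store = {
    "video/feed-2017-01-19.mp4": {"owner": "acct-123", "expires_on": date(2017, 4, 19)},
    "video/feed-2017-02-02.mp4": {"owner": "acct-123", "expires_on": None},  # kept forever
}

def reap_expired(store, today):
    """Delete (and report) every object whose expiry date has passed."""
    expired = [key for key, meta in store.items()
               if meta["expires_on"] and meta["expires_on"] < today]
    for key in expired:
        del store[key]
    return expired

print(reap_expired(store, date(2017, 4, 20)))  # ['video/feed-2017-01-19.mp4']
```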

The average civilian has no “bill of rights” common to these online personal information services, whose data accumulates in cloudy locations. Murky might be a better way to think about it. You want to trust data storage providers, and one wants to believe that data stores are somehow bulletproof, but with the huge, emblematic recent breaches of retailers, insurance providers, and university alumni databases, that’s not so easy. In reality, some have already been hacked and we just haven’t discovered it yet, because no one’s offering the information on dark markets… at least not right now.

Is there a way for the app industry to reach a common agreement about what can be shared, what a reasonable life expectancy for personal data is, how and to what extent personal data can actually be anonymized, and how data destruction can be audited to even a private detective’s satisfaction? I wish there were answers.


In search of a social site that doesn’t lie

Facebook and OKCupid experiment on users. So what’s wrong with that?

A blog post by OKCupid co-founder Christian Rudder described a few of the experiments that the dating website had carried out. In one, OKCupid told people that they would be good matches with certain other people even though the site’s algorithms had determined that they would be bad matches. That’s right: the company deliberately lied to its users. OKCupid wanted to see whether people like each other because they have the capacity to make up their own minds about whom they like, or because OKCupid tells them they should.

(The controversial post was Rudder’s first in several years; he had taken time off to write a book about experimenting on people. Due out next month, the book is called Dataclysm: Who We Are (When We Think No One’s Looking).)

The OKCupid post was in part a response to controversy over a recently discovered Facebook experiment, the results of which were published in an academic journal. Facebook wanted to see whether people would write more negative posts if their own News Feeds contained more negative posts from their friends. In the experiment, Facebook removed some posts by family and friends because they were positive; in other words, it deliberately made people sadder by censoring their friends’ more uplifting posts.

Don’t like this kind of manipulation? Here’s Rudder’s response: “Guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

What’s wrong here

Rudder’s “everyone is doing it” rationalization for experimenting on users makes it clear that he doesn’t understand the difference between what OKCupid and Facebook are doing and what other sites that run A/B tests of different options are doing.
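For contrast, a conventional A/B test shows each user one variant of the same honest interface, say two button colors, and measures which converts better. A typical deterministic assignment looks something like this sketch (the experiment and variant names are invented):

```python
# Ordinary A/B bucketing: assign each user deterministically to one
# variant of a page element. Nothing about the user's relationships
# or feelings is manipulated; only the presentation varies.

import hashlib

def assign_variant(user_id, experiment, variants=("A", "B")):
    """Hash user + experiment so the same user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-42", "signup-button-color"))  # stable "A" or "B"
```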

The difference is that OKCupid and Facebook are potentially changing, damaging or otherwise affecting the real relationships of real people. They are deliberately manipulating people’s happiness.

These companies might argue that this damage to the mood and relationships of people is small to the point of being inconsequential. But what makes them think it’s OK to deliberately do any damage at all?

The other glaring problem with these social science experiments is that the subjects don’t know they’re participating.

Yes, I’m sure company lawyers can argue in court that the Terms of Service that everyone agreed to (but almost nobody read) gives OKCupid and Facebook the right to do everything they do. And I’m sure the sites believe that they’re working so hard and investing so much to provide free services that users owe them big time, and that makes it all OK.

Imagine a splash screen that pops up each month on these sites that says: “Hi. Just wanted to make sure you’re aware that we do experiments on people, and we might do experiments on you. We might lie to you, meddle in your relationships and make you feel bad, just to see what you’ll do.”

No, you can’t imagine it. The reason is that the business models of sites like OKCupid and Facebook are based on the assumption of user ignorance.

Why OKCupid and Facebook think it’s OK to mess with people’s relationships

The OKCupid admission and the revelations about the Facebook research were shocking to the public because we weren’t aware of the evolving mindset behind social websites. No doubt the OKCupid people and the Facebook people arrived at their coldly cynical view of users as lab rats via a long and slippery evolutionary slope.

Let’s imagine the process with Facebook. Zuckerberg drops out of Harvard, moves to Silicon Valley, gets funded and starts building Facebook into a social network. Zuck and the guys want to make Facebook super appealing, but they notice a bias in human behavior that is leading heavy Facebook users to be unhappy.

You see, people want to follow and share and post a lot, and Facebook wants users to be active. But when everybody posts a lot, the incoming streams are overwhelming, and that makes Facebook users unhappy. What to do?

The solution is to use software algorithms to selectively choose which posts to let through and which to hold back. But what criteria do you use?

Facebook’s current algorithm, which is no longer called EdgeRank (I guess if you get rid of the name, people won’t talk about it), is the product of thousands of social experiments: testing and tweaking and checking and refining until everyone is happy.

The result of those experiments is that Facebook changes your relationships. For example, let’s say you follow 20 friends from high school. You feel confident that by following them, and by them following you, you have a reliable social connection to these people that replaces phone calls, emails and other forms of communication.

Let’s say you have a good friend named Brian who doesn’t post a lot of personal stuff. And you have another friend, Sophia, who is someone you don’t care about but who is very active and posts funny stuff every day. After a period of several months during which you barely interact with Brian but occasionally like and comment on Sophia’s posts, Facebook decides to cut Brian’s posts out of your News Feed while maintaining the steady stream of Sophia posts. Facebook boldly ends your relationship with Brian, someone you care about. When Brian posts an emotional item about the birth of his child, you don’t see it because Facebook has eliminated your connection to Brian.
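Public descriptions of EdgeRank boiled it down to roughly affinity times content weight times time decay. The sketch below illustrates that idea with invented weights, a made-up cutoff, and the Brian and Sophia example above; it is emphatically not Facebook’s actual code, just a demonstration of how engagement-driven scoring silently drops Brian.

```python
# Illustrative engagement-based feed filter, loosely modeled on public
# descriptions of EdgeRank (affinity * weight * time decay). All the
# numbers and the cutoff are invented for the Brian/Sophia example.

interactions = {"Brian": 1, "Sophia": 40}  # your likes/comments in recent months

def score(post):
    affinity = interactions.get(post["author"], 0)  # how often you engage
    decay = 1.0 / (1 + post["age_hours"])           # older posts fade
    return affinity * post["weight"] * decay

posts = [
    {"author": "Brian",  "text": "Our baby was born!", "weight": 2.0, "age_hours": 3},
    {"author": "Sophia", "text": "Another funny meme", "weight": 1.0, "age_hours": 3},
]

CUTOFF = 5.0
feed = [p for p in posts if score(p) >= CUTOFF]
print([p["author"] for p in feed])  # ['Sophia'] -- Brian's big news never appears
```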

And don’t get me started on OKCupid’s algorithms and how they could affect the outcome of people’s lives.

Not only do both companies experiment all the time; their experiments make huge changes to users’ relationships.

The real danger with these experiments

You might think that the real problem is that social networks that lie to people, manipulate their relationships and regularly perform experiments on their users are succeeding. For example, when Facebook issued its financial report last month, it said revenue rose 61% to $2.91 billion, up from $1.81 billion in the same quarter a year ago. The company’s stock soared after the report came out.

Twitter, which is currently a straightforward, honest, nonmanipulative social network, has apparently seen the error of its ways and is seriously considering the Facebook path to financial success. Twitter CEO Dick Costolo said in an interview this week that he “wouldn’t rule out any kind of experiment we might be running there around algorithmically curated experiences or otherwise.”

No, the real problem is that OKCupid and Facebook may take action based on the results of their research. In both cases, the companies say they’re experimenting in order to improve their service.

In the case of OKCupid, the company found that connecting people who are incompatible ends up working out better than it thought. So based on that result, in the future it may match up more people it has identified as incompatible.

In the case of Facebook, it did find that mood is contagious. So maybe it will “improve” Facebook in the future to build in a bias for positive, happy posts in order to make users happier with Facebook than they are with networks that don’t filter based on positivity.

What’s the solution?

While Twitter may follow Facebook down the rabbit hole of user manipulation, there is a category of “social network” where what you see is what you get — namely, messaging apps.

When you send a message via, say, WhatsApp or Snapchat or any of the dozens of new apps that have emerged recently, the other person gets it. WhatsApp and Snapchat don’t have algorithms that choose not to deliver most of your messages. They don’t try to make you happy or sad, or connect you with incompatible people to see what happens. They just deliver your communication.

I suspect that’s one of the reasons younger users are increasingly embracing these alternatives to the big social networks. They’re straightforward and honest and do what they appear to do, rather than manipulating everything behind the scenes.

Still, I’d love to see at least one major social site embrace honesty and respect for users as a core principle. That would mean no lying to users, no doing experiments on them without their clear knowledge, and delivering by default all of the posts of the people they follow.

In other words, I’d love to see the founders of social sites write blog posts that brag: “We DON’T experiment on human beings.”

Wouldn’t that be nice?

