Category: Drones, Smartphones and Privacy

NFC Podcast #6: The Privacy Index

The 6th NFC podcast. Today, we discuss smartphones, drones and the Serial podcast!

Effective Slippery Slopes

The quantification of a phenomenon/idea/issue awakens the inner scientist, math nerd, and analysis junkie that both of us quite evidently are. But there are limitations to even the best things in the world (except the Raps getting W’s – can never get enough of that! #WeTheNorth). When we try to quantify a concept like privacy, instead of creating a potent signal, we end up with more noise. The reason is all the grey area that gets discarded when we translate an interconnected, complex, evolving issue into a single number. Yes, these data points may be used in a discussion as a loose feeler for the current state of a given issue, but I don’t think they’re effective beyond that.

Other than trying to strut my natural fanciness, I bolded effective for a reason. I spoke earlier about slippery slopes and their relationship with issues laced with rapid change and unpredictable futures. What I meant to highlight is the question of how effective slippery slopes are as an argumentative tool for making decisions on policy issues. Earlier, I stated that discussing the end of the slippery ride (in the privacy and drone issue, for example, murderous drones running rampant over our world) is indeed an effective tool: what looks like overreacting now can save us from what would, in hindsight, have been an underreaction. And as you highlighted, the drone in this case would symbolize a ‘dead canary’ and not a ‘red herring’. However, this effectiveness isn’t always the case. The factors governing its utility came up in a discussion with the God of Never From Concentrate. I’m fairly agnostic on the whole God thing, but here we must refer to our boy, Mr. Aakash Sahney, as the God of NFC, because if it weren’t for him, we would’ve never met and NFC would’ve never been born!

So what are the factors that determine the effectiveness of a slippery slope argument in policy decision-making? Or, in terms relating directly to our conversation: when is bringing up drones in a discussion about policy a ‘red herring’, and when is it a ‘dead canary’?

One issue with slippery slopes is that we often don’t know where we are on the slope. The end of the slippery slope invoked against abolishing slavery was “maybe we’ll have a black president one day”, and that, as we now reflect on it, is the view after a majestic mountaineering expedition. The main point is to consider whether we are looking at the end of a climb or at the bottom of a fall when using this argumentative tool in practice.

The second, more pressing issue with slippery slopes is that our legal system has a natural balancing mechanism ingrained within it. When public opinion does a slow 180 on an issue, as it did in instituting Prohibition in the early 20th century, the law adapts to that change. When we realized Prohibition was a failed policy, and public opinion completed the second π of its revolution back to the original state, we changed the laws accordingly. So being experimental with new and/or radical ideas can be highly useful without considering the end of the slope, because we can rely on this self-balancing nature of our legal system.

The obvious flaw in this argument appears when a marginal dip down the slope causes irreversible damage. For example, we take fewer risks on policy changes that carry a potential risk of death. This riskiness typically gets brought up when we’re dealing with infringements on basic fundamental rights; we can use the set of rights outlined in the Canadian Charter of Rights and Freedoms as our set of essential rights. When the consequences of a policy change result in an infringement on one of these rights, we move the cost-benefit analysis from a utilitarian discussion to one that’s more categorical in nature. A stark example: our policies on animal testing are discussed from a much more utilitarian viewpoint than any issue involving a potential human death, like euthanasia (not ‘youth in Asia’, to be clear). To further narrow the thesis I mentioned in my last post: slippery slope arguments are effective tools in discussions of policy change only if the consequences of the change result in a direct infringement on our essential rights, or if a reasonable path can be drawn to such an infringement.

Whether drones fall under this umbrella or not, we shall leave to the discussion on our podcast next week!

Red Herrings and Canaries

Rachit,

Interesting analogy; I enjoyed imagining Gandhi slowly metamorphosing into a murderous tyrant. The Schelling fence idea reminds me of a ‘prenup’ that you agree upon (with yourself). I’ve read similar advice for negotiators: it’s important to set strict limits on an acceptable price before the bargaining starts, to prevent exactly the type of slippery-slope traps you brought up. In a purely numerical realm, I think this is certainly possible. But, like you said, it seems difficult to do the same with more complicated issues that can’t be reduced to a single number (e.g. privacy). How can you create an effective fence when you have no map? Perhaps, then, we need to take a shot at creating a reasonable privacy map.

What types of numerical scales can we use to quantify privacy? Can those metrics be applied to existing societies to enact laws that declare a base level of privacy a human right? In the U.S., the current ‘base’ level appears to be the Fourth Amendment to the Constitution (the prohibition of ‘unreasonable searches and seizures’). Recently, an interesting computer science research paper used a machine learning approach to try to quantify “the point at which long-term government surveillance becomes objectively unreasonable”. Their conclusion was that approximately one week of GPS tracking was enough to uniquely identify an individual. Maybe we can use this numerical limit as a starting point for privacy legislation. Can you think of any other ones?
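To make the one-week figure concrete, here is a minimal toy sketch (my own construction, not the paper’s actual method or data) of how you might measure identifiability from location traces: for observation windows of increasing length, count the fraction of people whose trace is shared by no one else in the dataset.

```python
# Toy uniqueness measure for location traces (hypothetical data format:
# each user maps to a list of (day, cell_id) observations).
from collections import Counter

def fraction_unique(traces, window_days):
    """Fraction of users uniquely identified by their first
    `window_days` days of location observations."""
    signatures = {
        user: tuple(sorted((d, c) for d, c in points if d < window_days))
        for user, points in traces.items()
    }
    counts = Counter(signatures.values())
    return sum(1 for s in signatures.values() if counts[s] == 1) / len(traces)

toy = {
    "alice": [(0, "home"), (1, "cafe")],
    "bob":   [(0, "home"), (1, "cafe")],  # indistinguishable from alice here
    "carol": [(0, "gym"),  (1, "cafe")],
}
print(fraction_unique(toy, 1))  # 0.33: only carol is unique after one day
print(fraction_unique(toy, 2))  # 0.33: alice and bob still share a trace
```

With a real GPS dataset, you would plot this fraction against the window length and look for the knee of the curve, which is roughly the point (reportedly around one week) where nearly everyone becomes unique.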

The article linked above also has a fascinating summary of a recent Supreme Court case:

“Antoine Jones, a nightclub owner in Washington, D.C., was suspected by the police of dealing drugs. The local police, working with federal agents, put a GPS tracking device on his car, without a warrant, and gathered his location data for four weeks. Mr. Jones was initially convicted of drug trafficking conspiracy… The Supreme Court [overturned the verdict and] ruled for Mr. Jones, saying his Fourth Amendment rights had been violated because [of] the GPS device on his car.”

I would argue that federal agents attaching a tracking device to your car is the privacy equivalent of an aerial drone filming you doing something revealing. Sure, it’s certainly possible, but the real problem is the more fundamental reality of location-aware devices. Most people do not have federal agents tailing them undercover, but we do carry around smartphones that can periodically send location information to a central database. As Supreme Court Justice Sotomayor did in her analysis of the case, we can imagine the potential for that data “to detect trips of an ‘indisputably private nature.’” Certainly information about “trips to a psychiatrist, abortion clinic, AIDS treatment center, strip club and mosque” should be private, shouldn’t it? If we limit how much location data can be saved without our consent, we can keep our privacy.
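What might such a limit look like in practice? Here is a hypothetical illustration (my own sketch, not any real platform’s API): a location log that silently discards points older than a fixed retention window, so that no service can accumulate the week-plus of history needed to uniquely identify its owner.

```python
# Hypothetical location log with a hard retention limit. The window
# length (three days here) is an arbitrary placeholder for whatever a
# privacy law might mandate.
import time
from collections import deque

RETENTION_SECONDS = 3 * 24 * 3600  # assumed policy: keep three days of data

class LocationLog:
    def __init__(self):
        self._points = deque()  # (timestamp, lat, lon), oldest first

    def record(self, lat, lon, now=None):
        now = time.time() if now is None else now
        self._points.append((now, lat, lon))
        # Evict everything that has aged past the retention window.
        while self._points and now - self._points[0][0] > RETENTION_SECONDS:
            self._points.popleft()

    def history(self):
        return list(self._points)
```

The interesting policy question, of course, is not the code but who gets to set the window.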

You say that ‘red herring’ drones can serve to bolster our defences against other, more subtle attacks on our privacy. While this may be true indirectly, the entire point of a red herring is that it serves as a distraction: our attention is focused on it in order to direct the conversation away from other pernicious attempts to exploit our personal information. I think the more appropriate term here is the ‘canary in the coal mine’. Let’s acknowledge the herrings but focus on the dead canaries.

I hope I didn’t drone on too much.

~V

Murder Gandhi and his MQ-9 Reaper

Valentin, no need to beat around the bush here: you’re calling me a Luddite. I can take it. I hold myself to a big boy standard. But you know what, you are probably right… to a certain degree. As you’ve highlighted, ‘drone’ is indeed a buzzword that almost cartoonishly symbolizes the growing privacy concerns people are facing. Yes, being inquisitive about how some ‘free’ products offered by companies like Facebook or Google affect our privacy is probably the better exercise in practice. But how these products will affect us in the distant future should be just as important as how they affect us now. And in this future tense, the ‘drone as red herring’, although not directly, may actually serve some real purpose. So, as good a time as any, let me introduce you to Mr. Murder Gandhi:

“Gandhi is offered a pill that will turn him into an unstoppable murderer. He refuses to take it, because in his current incarnation as a pacifist, he doesn’t want others to die, and he knows that would be a consequence of taking the pill. Even if we offered him $1 million to take the pill, his abhorrence of violence would lead him to refuse.

But suppose we offered Gandhi $1 million to take a different pill: one which would decrease his reluctance to murder by 1%. This sounds like a pretty good deal. Even a person with 1% less reluctance to murder than Gandhi is still pretty pacifist and not likely to go killing anybody. And he could donate the money to his favorite charity and perhaps save some lives. Gandhi accepts the offer.

Now we iterate the process: every time Gandhi takes the 1%-more-likely-to-murder-pill, we offer him another $1 million to take the same pill again.

Maybe original Gandhi, upon sober contemplation, would decide to accept $5 million to become 5% less reluctant to murder. Maybe 95% of his original pacifism is the only level at which he can be absolutely sure that he will still pursue his pacifist ideals.

Unfortunately, original Gandhi isn’t the one making the choice of whether or not to take the 6th pill. 95%-Gandhi is. And 95% Gandhi doesn’t care quite as much about pacifism as original Gandhi did. He still doesn’t want to become a murderer, but it wouldn’t be a disaster if he were just 90% as reluctant as original Gandhi, that stuck-up goody-goody.

What if there were a general principle that each Gandhi was comfortable with Gandhis 5% more murderous than himself, but no more? Original Gandhi would start taking the pills, hoping to get down to 95%, but 95%-Gandhi would start taking five more, hoping to get down to 90%, and so on until he’s rampaging through the streets of Delhi, killing everything in sight.”

This is what is considered a ‘slippery slope’: a small, agreed-upon trade-off at the beginning of the bartering, but eventually the initial conditions slip into a complete overhaul of the original principles. Suppose we tell Gandhi about slippery slopes before he begins bartering money for pills. As the article I quoted above describes, a possible solution for avoiding the slide into pure murderous rage would be to incorporate what is called a ‘Schelling fence’ (an extension of the Schelling point, named for Nobel Prize-winning economist Thomas Schelling). This would be a pre-decided, somewhat arbitrary fence that Gandhi agrees never to cross. He can cash out on the exchanges until the fence is reached, at which point everything comes to a halt and the rest of the slope is never descended.
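To make the mechanism explicit, here is a toy simulation (my own sketch, not from the quoted article) of the bargain. The key detail is that without a fence, each offer is judged by the current, already-shifted Gandhi, so his comfort margin never actually binds; the fence is a floor fixed once by original Gandhi.

```python
# Toy simulation of the Murder Gandhi bargain: each pill costs 1% of
# pacifism and pays $1M. A Schelling fence is a floor precommitted by
# original Gandhi; the comfort margin belongs to whoever holds the pill.
def take_pills(comfort_margin=5, schelling_fence=None):
    pacifism = 100  # percent of original Gandhi's pacifism
    earnings = 0
    while pacifism > 0:
        proposed = pacifism - 1
        if schelling_fence is not None and proposed < schelling_fence:
            break  # the precommitted fence halts the bargain
        if proposed < pacifism - comfort_margin:
            break  # never triggers: the margin resets with each new Gandhi
        pacifism = proposed
        earnings += 1_000_000
    return pacifism, earnings

print(take_pills())                    # (0, 100000000): rampage through Delhi
print(take_pills(schelling_fence=95))  # (95, 5000000): stops after 5 pills
```

Note how the no-fence run matches the quote: every Gandhi is fine with a Gandhi slightly more murderous than himself, so the slide only ends at zero.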

This slippery slope is often what gets cited in privacy debates (and in free speech debates, as another example). A small compromise in privacy now may produce a different playing field in the distant future, in which another incremental privacy compromise gets made. Rinse and repeat, until we’re in a dystopian, 1984-style hell-hole of a world.

Schelling fences are a possible solution to the slippery slope conundrum, but mostly in theory rather than in practice. They deal in precommitments, which, as human beings, we aren’t the best at keeping. As well, Schelling fences are much more difficult to coordinate, or even establish, when multiple interest groups are involved. What this does highlight, however, is that short-term policymaking with severe consequences in an unknown, technologically advanced, and culturally varied future is an especially difficult problem to solve. It is amplified further with privacy, as going backward seems almost impossible once privacy has been compromised. So here comes my thesis point: the ‘Luddite’-esque nature of ‘drones’ and similar concerns helps us overreact to short-term privacy issues, which may, in hindsight, turn out to have been an underreaction. Of course, this assumes we are on a potential downward slippery slope.

So to answer your question directly: I think it may be good to keep around some Luddite-type fears, like the ‘symbolic’ drone, specifically with regard to privacy issues, to avoid a potential fall down a slide with no bottom in sight.

.

..

If you’re going to a bar and you’re trying to get your drone on, does that mean you’re going to try to get with the hottest girl there, a.k.a. the queen bee? I mean, at the very least, we can try to work this into the mainstream lingo. Maybe create an entry on Urban Dictionary? It could possibly, if ever so slightly, ameliorate the current ‘demolition’ / ‘hell-inducing’ connotations associated with ‘drone’. Your friends at UTIAS would appreciate it.

~ R

Drones and Luddites

Rachit,

‘Drones’: so hot right now! They film your wedding and then get shipped overseas for the next airstrike in the Middle East. Can you imagine a ‘Drone’ sitcom? “Meet Buzz: by night, he’s a deadly assassin in covert ops. But in this economy, he’s gotta pay the bills. Can this hot cadet deliver the perfect shot for his next wedding gig?”

Where did the term ‘drone’ even come from? I did some research, and the word seems to originate in the 1930s, when U.S. Navy commander Delmer Fahrney was asked by an admiral to develop a remote-controlled aircraft similar to the British ‘Queen Bee’. As an homage to the British name, Fahrney called these aircraft ‘drones’, after the male honeybees whose sole purpose is to mate with the queen (and whose monotonous buzz is where the term ‘to drone on’ originally comes from).

Before we delve too deeply into this topic, it’s important to emphasize that the term ‘drone’ is about as specific as the term ‘car’ – probably even less so. When I think of ‘drones’, I think of essentially toy-like, helicopter-style flying vehicles that have some autonomous capability but are largely human-piloted, weigh less than a few kilograms, and cost no more than a few thousand dollars. Other people may think of the MQ-9 Reaper, the ‘hunter-killer drone’, which costs $16.9 million and can weigh almost 5 tons.

When we’re talking about surveillance, we’ll probably be talking about the smaller of these UAVs: things like the DJI Phantom 2 (probably the most popular commercial drone for filming). Anything significantly bigger will not be able to fly close to the ground without significant noise and safety risks, which are a concern even for these smaller drones. The Phantom 2 has an advertised flight time of 25 minutes before the battery needs to be replaced or recharged; I would bet this is actually closer to 15-20 minutes in realistic outdoor conditions. So, right away, before we can even discuss privacy concerns, the efficiency of these vehicles needs to improve greatly for them to pose a threat that other, pilot-operated vehicles do not.

In many respects, I think ‘drones’ are a red herring with respect to privacy concerns. They are flying robots that are easy to visualize and fear, but they distract from the much more subtle, pernicious technologies that pose far greater risks to our privacy. What about smartphones, email, Facebook? Surely our private emails and pictures are more important than videos of public demonstrations or images of outdoor events like the one you linked to? What about the closed-circuit cameras that already exist? George Orwell’s home nation now boasts one camera for every 11 citizens.

This brings me to the title of this post: Luddites. To be a ‘Luddite’ is now synonymous with being ignorant of some form of technology or innovation. The term stems from a group of English factory workers who rebelled against the automated machinery that threatened to take over many jobs in textile plants in the early 19th century. Interestingly, many economists believe the fundamental Luddite concern is a fallacy, the appropriately named Luddite Fallacy, which wrongly equates new forms of technology with job destruction and economic downturn. In fact, the late 19th century was an incredibly productive time in Britain precisely because of the machines the Luddites rebelled against. Technology can of course automate away many jobs, but it can also create jobs in other, often unexpected, industries (who would have thought ‘data scientist’ would be an in-demand position 30 years ago?). With that said, there are now pressing concerns about the current digital revolution and how the new wave of automation will affect our society and its widening income gap.

When we talk about privacy, the important question for me is: how do we steer away from Luddite-esque fears of conspicuous technological advances while still remaining cognizant of subtle, subversive expansions of government surveillance?

~V

Mr. Peeping Drone

Care for another topic, Valentin? WELL I’VE GOT JUST THE TOPIC FOR YOU! For only $9.99…

Let’s talk privacy, utilitarianism, and drones. Privacy is a right we are given as citizens of the developed world. Wait. Let me rephrase that. Some amount of privacy is a right we are given as citizens of the developed world. There’s that grey area we know and love. Now, I’m not planning to bore you with all the frequently debated internet privacy talk. Instead, I’m going to narrow the focus to privacy and how it relates to the recent trend of using drones (and not the friendly kind you and your buddies fly in your aerospace lab).

Our privacy rights are highlighted in section eight of the Canadian Charter of Rights and Freedoms: “everyone has the right to be secure against unreasonable search or seizure”. Of course, we give up these rights when the cause of the search is reasonable, where ‘reasonable’ is defined as common law evolves across different privacy domains. And it is here that we, as citizens, make a trade-off with our right to privacy. The utility we get in exchange for having our privacy invaded, at times unfairly, is a safer society to live in.

Now, where do drones fit into this picture, you may ask? Let’s put aside the topic of drones as death-inducing terror-monster machines and instead just think about surveillance drones. Nations are already using them quite effectively, even in local, non-warfare conflicts. And as you would imagine, the use of drones is already being challenged as an infringement on our privacy rights. As we live under a common-law system (excluding those Frenchies in Quebec!), the legal fate of surveillance drones is yet to be determined. So I ask you the following questions, Mr. P:

Are you personally comfortable with surveillance drones being used locally to serve peacekeeping efforts and increase safety?

If so, where is the line at which it becomes unacceptable?

And if you had the power, would you implement an amendment to our privacy rights to cover surveillance drones?