Our Corporate Overlords, Power and Privilege, Tech and Society

The Commodification of People

Among the many ways so-called Big Data is influencing our lives, quantification and predictive analytics are beginning to play a significant role in how people are selected for opportunities, such as jobs, homes, romance, sex, insurance, and so on, replacing the vagaries of human judgment with seemingly objective and reliable analytic scorecards and labels. The same profusion of data that flows from your interactions with the networked and surveilled world, and which results in all those “personalized” ads you routinely encounter, can also be used to evaluate and grade you as a person. Your daily experiences and interactions with websites, mobile apps, credit card processors, eBook readers, cell-phone carriers, security cameras, etc. leave data trails that are routinely and tirelessly hoovered up to supply the information economy with the raw material of user profiling (but you already knew that, right?). But beyond the now familiar goal of these activities to simply sell you stuff lies a larger information dream: Using data about you to thoroughly understand what makes you tick and using that understanding to predict your future. Opportunity gatekeepers, such as landlords and employers, find this dream very attractive. Business objectives drive gatekeepers to seek out any and all means to maximize efficiency in their operations and reduce their levels of uncertainty and risk. Quantifying people into gradable categories, like bushels of rice with consistent and predictable quality, is an intoxicating product offering for decision makers, and the data industry is prepared to meet (and create) that demand. By aggregating your prior preferences and behaviors and comparing them to the preferences, behaviors, and choices of thousands of similar people, a motivated data processor and her algorithms attempt to make a range of predictions about your life, getting out ahead of the uncertainties of evaluating people based on what they self-report or provide through their chosen references.
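
To make the mechanism a little less abstract, here is a minimal sketch, in Python, of the kind of “people like you” scoring described above: grade a new person by looking at the outcomes of previously observed people whose recorded behaviors most resemble hers. Everything here is invented for illustration; real reputation systems use thousands of signals and far more elaborate (and proprietary) models.

```python
from math import dist

# Invented training data: each known person is (behavioral features, observed outcome).
# The three numbers might stand in for signals like spending volatility, late-night
# app usage, or social media activity, normalized to a 0-1 range.
known_people = [
    ((0.9, 0.1, 0.4), "low_risk"),
    ((0.2, 0.8, 0.7), "high_risk"),
    ((0.8, 0.2, 0.5), "low_risk"),
    ((0.3, 0.9, 0.6), "high_risk"),
]

def predict_label(candidate, k=3):
    """Toy nearest-neighbor grading: label a candidate by majority vote
    among the k previously observed people most 'similar' to her."""
    neighbors = sorted(known_people, key=lambda person: dist(candidate, person[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

print(predict_label((0.85, 0.15, 0.45)))  # -> "low_risk" on this invented data
```

The point of the sketch is not the arithmetic but the substitution: the candidate is never asked anything; she is graded by her resemblance to strangers.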

But there’s a problem. Quantifying people is not nearly as easy as quantifying grain. Quantification requires standardization, but people aren’t standardized and the data collection methods we have for analyzing people aren’t perfect. So, shortcuts have to be made and proxies must be used to reduce the rich complexity of human experience into discrete buckets. The first reduction comes in the form of the data that is used. Despite the fact that our lives are increasingly observed and analyzed, the domains and methods of observation come pre-loaded with certain biases. Tracking what books you read with a Kindle (or other eBook reader) requires, first, that you own a Kindle or use the Kindle app. This already eliminates that data point from consideration for all the people who stubbornly continue to read printed books or who choose to spend their limited incomes in more practical ways. Here we see how the way one chooses to engage with the data ecology might impact her profile. The varied choices people make about participating in social media are similarly influential in profile development, as evidenced by the increasing number of data products that use social media data as inputs (see this for a chilling example).

The data industry also makes use of the open records policies of government agencies to build its profiles. Some types of public records, such as arrest records, tend to reflect negatively on people of color and the poor. For example, there is an abundance of evidence that drug and weapons laws are routinely violated by people across demographic lines, but African American men are more likely to be arrested and convicted for violations (see this, and this for examples). As a result, evaluating people based on their criminal histories doesn’t necessarily tell the kind of nuanced story that leads to complete knowledge. These two examples (and there are many others) suggest that the construction of the data regime may not be quite as objective and reliable for judging people as we think. In fact, it appears to favor people of privilege – those who can afford to participate richly in the data economy (and choose to) and those for whom readily available derogatory data is less likely to be discovered.

In addition to understanding how the formation of user profiles might be flawed and unfair, I am also interested in why economic/social gatekeepers are so keen on using analytics to make decisions about people in the first place. And this brings me to the work of Albert Borgmann, who writes about the “hyperactivity” of modern society. Borgmann describes a hyperactive society as one that is constantly “mobilized” against the perceived threat of economic ruin. This mobilization has three key features: the suspension of civility, the rule of a vanguard, and the subordination of civilians. It is in that third feature that I detect what I would label the “precarity” of the modern worker. Despite our cultural mythologies in the U.S. and elsewhere about how hard work and dedication inevitably lead to riches and success, and in spite of the tremendous wealth our society has created, we have seen in recent decades increasing social and economic inequality and the loss of stable work opportunities for ordinary people due to changes in a variety of structural economic conditions. There are many reasons for these changes, but one of the results is that those with the power to make important decisions about our lives seem to have considerably more power and incentive now to exploit what Borgmann refers to as the “disposability of the noncombatant work force.” In short, the incentives are high to reduce the work force as much as possible, and the moral precepts of capitalism do not offer much resistance to doing so. The resulting precarity of work in our society leads to increased competition among workers. In order to survive in this mobilized society, we are basically forced to compete for increasingly scarce resources rather than to join together to challenge the sources (real and imagined) of the scarcity.

While Borgmann tells us something about societal forces that contribute to interpersonal competition for scarce opportunities, another author, James Carey, sheds light on how information systems have provided the means to commodify human beings. Writing in 1989 (but eerily prescient), Carey examined the dramatic social and economic changes wrought by the first electronic mass communication medium: the telegraph. The telegraph was the first technology capable of detaching information from physical objects and constraints, increasing the ability of traders of every stripe to abstract physical objects into symbols for exchange. With the telegraph, information about the world could travel much faster than any messenger or machine, breaking down prior barriers of time and space. This change in the temporal and physical reach of communication increased a business person’s pool of potential partners, making direct personal experience with each one impossible. As a result, new methods of evaluating strangers had to emerge. This can be linked to another of Carey’s observations about a separate byproduct of electronic communication: the commodification of goods. Carey argues that the emergence of the commodities futures markets was tied to the linking of buyers and sellers regionally and nationally by the telegraph. It became possible to trade goods, such as bushels of wheat, by lots aggregated from dozens or hundreds of sources rather than dealing directly with the individual producers. This practice required the development of standardized grading systems that could be applied to quantities of goods from diverse sources. These seemingly unrelated byproducts of communications technology–the emergence of impersonal business dealings requiring new methods of personal assessment and the invention of the commodities trade that massed and standardized diverse goods into quality categories–set the stage for the emerging commodification of people. In the modern setting, the ability to post a job ad or a dating profile potentially viewable by millions of people means that the “seller” must be able to rapidly sort through dozens, hundreds, or thousands of applicants. Judging candidates individually becomes impossible. Here we see the origins of the reputation industry and the commodification of people: Why not employ algorithms to sort them into quality categories as if they were bushels of grain?

How this operates in practice is complex, but one thing is certain: the precarity of position and the perception that resources are scarce motivate people to sacrifice their own freedoms to gain an edge. People will give up their privacy and otherwise adjust their lives to please opportunity gatekeepers in order to get ahead. A telling example comes from the insurance market where, in exchange for rate reductions, people install data devices in their cars that monitor and report their driving habits to insurers. Even more invasive, people are sharing the data collected by their health-tracking wearables for similar incentives. Economists call this practice “signaling.” While granting explicit consent to monitor specific activities is a very obvious type of signaling, there are other means of signaling that are a bit more complex, but not too complex for analytics algorithms to notice. Social media activity provides a rich assortment of signals about one’s life, including family composition, health events, employment satisfaction, and financial stability, among others. A few banks are confident enough about what they can learn from social media that they are basing credit decisions on it (see this and this). As the practice of monitoring social media use to assess one’s worthiness for loans and other opportunities becomes commonplace, it’s not hard to imagine how that may influence how people use social media and therefore how they socialize in general.

There are many reasons why this matters. For one, it represents a progressive rebalancing of information flows. Economists have long rued the “information asymmetry” in buyer-seller transactions, in which the seller uses her deeper knowledge of a good for sale to the potential disadvantage of the buyer. However, one man’s market inefficiency is another’s defense in a world of outsized power imbalances. If the seller is an applicant for a job at a large corporation, she is arrayed against the titanic power of the modern corporation. Being able to assume some measure of control over how she presents herself in the hiring process could be the last semi-free act of her career. Meanwhile, the corporation’s goals are to avoid risk, by choosing the candidate least likely to harm the firm, and to increase efficiency, by streamlining the process of choosing from among a pool of candidates. Commodification of the candidate serves the corporation well, but may disadvantage the candidate if she cannot control the sources and biases of the information used to categorize her. As the reputation industry matures and more and more choices about who gets what opportunity are determined by abstracting people into symbols and treating them like graded commodities, the progressive disempowerment of people seeking opportunities may emerge as the crowning achievement of information technology: The commodification of precarious lives.

Our Corporate Overlords, Tech and Society

Blame the Election On Facebook (in part anyway)

Donald Trump won the U.S. presidential election last night. This is terrible news for the country and I am horrified by his victory. In particular, I’m having trouble processing his obviously widespread support given the many negative attributes he has displayed throughout his life and during the election. While we’re looking around for whom to blame for how things turned out (and there will be plenty of finger-pointing), I believe Facebook, Twitter, Google, etc., and the entire culture of information “personalization” should be counted among the blameworthy. True, there are a number of complex sociological factors in play in any election, and the combination of Trump’s celebrity appeal and populist messaging seems to have had a powerful effect on a lot of people. But here’s the thing: Many of us were unprepared for this result. We looked on with wonder as Trump won in the primaries and went on to become a popular candidate in the general election, and we were shocked by the outcome. Did you, like me, experience ongoing shock and disbelief over Trump’s consistently competitive poll numbers even after allegations of sexual assault and the array of other deeply negative revelations about him? If so, it might be because you and I live in a media bubble built out of algorithmic profiling, an echo chamber designed to soothe us with an overwhelming number of messages that we agree with, or are pretty dang close.

When you view your Facebook “newsfeed,” you’re not viewing every post of every person you are connected to on the network. Facebook’s learning algorithms access thousands of data points about your past behavior on Facebook and your interactions with other websites, merchants, and mobile services to identify your tastes and preferences. The resulting newsfeed you and I see contains only posts that Facebook believes are the most “relevant” to each of us. On Facebook, we’re connected mainly with people we have identified as “friends,” and what we hear from them (and they from us) is winnowed down into a preference-focused feed that may not even include the contrary views of people we know. When we perform searches on Google, another collection of personalization algorithms massages our search results to conform to what the system believes each of us wants to see. Twitter users can choose whose posts to follow, enabling users to curate their information sources into narrowly defined subjects and communities. In my Twitter feed, I follow only academics and research institutions working in my field, plus a few journalists and news sources whose reporting appeals to me.
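
To illustrate the basic move, here is a heavily simplified sketch, in Python, of engagement-driven feed filtering. The posts, the affinity numbers, and the scoring rule are all invented; Facebook’s actual ranking system is proprietary and vastly more complex, but the underlying logic of “score by predicted engagement, then show only the top results” is the part that matters here.

```python
# Invented candidate posts from people this user follows.
posts = [
    {"author": "college friend", "topic": "politics_A", "likes": 210},
    {"author": "coworker",       "topic": "politics_B", "likes": 45},
    {"author": "cousin",         "topic": "cats",       "likes": 480},
]

# Affinity scores inferred from this user's past clicks and likes (invented numbers).
user_affinity = {"politics_A": 0.9, "politics_B": 0.1, "cats": 0.7}

def relevance(post):
    # Predicted engagement: how strongly this user responds to the topic,
    # weighted by how much engagement the post is already attracting.
    return user_affinity.get(post["topic"], 0.0) * post["likes"]

# Keep only the top-scoring posts; everything else never reaches the feed.
feed = sorted(posts, key=relevance, reverse=True)[:2]
for post in feed:
    print(post["author"], "-", post["topic"])
# The coworker's politics_B post never surfaces: the contrary view is quietly filtered out.
```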

Getting news this way is completely different from traditional journalism, where the goal, ideally, is to provide readers and viewers with a diverse range of ideas and multiple viewpoints. On commercial information services, the information we receive is narrowly restricted and designed to please each of us individually. (Much of this will not be news to anyone who has read Eli Pariser’s “The Filter Bubble.”) The goal of customizing our various search results, feeds, and follows is to keep us online, staying engaged with whichever service we’re using, clicking links, viewing ads, buying things. The more time we remain engaged this way, the more information about our preferences and inclinations can be translated into advertising dollars. The result of all this customization is that each of us experiences an information flow very different from the one experienced by people who disagree with us, a flow designed to keep each consumer engaged and to limit any feelings of discomfort. If you think Hillary Clinton is dishonest, it’s likely reflected in your online media choices and personalizations, and you’re unlikely to see posts or articles that champion her as a person of integrity.

Democratic deliberation requires the airing of a plurality of ideas and room for meaningful debate on the merits. It is still true that people are more likely to find common ground and back down from extreme positions if given the chance to truly understand each other. It is also true that customized information sources are as likely as not to include easily disputed rumors and distortions that would become apparent if more viewpoints were available for consideration. This is not what is happening. Unfortunately for democratic deliberation, the discomforting effects of stories and worldviews that don’t conform to our biases are bad for the online business model. If the goal is to keep people where they are, engaged and consuming what you’re offering, it doesn’t make business sense to question or challenge them and their version of reality.

Our media elites used to do a decent job of providing us with a plurality of views. Traditional journalism is far from perfect; media biases and filters are not new. But there were (and still are) journalistic institutions dedicated to reporting more fact than rumor, and to presenting multiple viewpoints on contentious questions. When that system was more functional than it is now, while I might not agree with the opposing viewpoint, I could at least come to understand and engage with it. Similarly, people on the other side of an issue might come to understand a piece of my truth. But traditional journalism is in decline. Fewer of us are relying on well-established media sources that can legitimately claim to be objective or balanced. One outgrowth of this is that some of the remaining media institutions have become clownish and shallow, more interested in salacious gossip and in pleasing political leaders in return for “access” than in soberly analyzing their views and statements.

As my old friend David Newhoff points out in his blog, viewing the world through the filter of commercial information platforms, including social media, makes it “very hard to distinguish between being vigilantly informed and hysterically manipulated.” As more of us come to get most of our political news from these platforms, whose shared mission is to harvest and monetize information from us and not to inform us, we will continue to fail at gaining a thorough understanding of what comes blasting out of the fire hose. Still more problematic, we’ll also continue to fail to truly understand what the other side actually thinks, resorting instead to caricatures and hyperbole. We are going to see this filtering effect repeated again and again, and it will leave us weaker advocates for our causes and candidates. This is not making us smarter. It is making us naive and vulnerable.

While we’re busy pontificating (myself included) on social media about our views and sharing our carefully curated information tidbits with our online followers and friends, remember that this narrowly focused information sharing is a central problem for political discourse. Despite the potential for sharing our views with more people than most of us could have hoped to reach before these platforms existed, the intentional limiting of our feeds and searches by platform operators means that what we say, do, and seek in the information space is not likely to escape the comfort of our individual echo chambers. We’re just yelling at ourselves while generating revenue for others and carving out ever-tinier slices of an increasingly subjective reality.

Our Corporate Overlords, Philosophy

Is Western Philosophy Irrelevant?

In a thought-provoking opinion piece by Robert Frodeman and Adam Briggle, the authors argue that modern philosophy has engineered its own irrelevance by retreating into elite research universities where it has struggled, and largely failed, to compete with the positivist natural sciences around which the modern university was built. Rather than remaining a democratically situated facet of daily life and a familiar component of all manner of intellectual inquiry, Philosophy, as a discipline, surrendered to the institutional pressures to specialize and to endlessly “produce” new knowledge.

I agree that one reason Philosophy has been marginalized is the same set of disciplinary, epistemological tensions that populate ongoing debates about scientific rigor and results between scholars of the “hard sciences,” like Chemistry and Computer Science, and the “soft sciences,” such as Sociology and, of course, Philosophy. However, I would offer that the rising eminence of the applied philosopher, aka “the ethicist,” as a feature of research teams, board rooms, and news columns, counters some of the trends cited by the authors. Applied Ethics is, in my opinion, a move towards the re-democratization of philosophy, and it has gained prominence due to a growing discomfort with the abuses of increasingly powerful and influential institutions (e.g. Enron, Wall Street, etc.). The main limitation there, though, is the same one that has neutered moral accountability within the natural sciences: capitalism. Boeing, Amazon and Google may employ ethicists (unlikely at Amazon, actually), but can their work make those institutions “good”? Or do they simply provide cover and PR talking points for business as usual?

So, my question back to these authors, which is perhaps addressed in their book: Where is there actually space in our society for philosophers outside of the research university? The academy appears to be the last refuge for anyone who doubts that a society based entirely on free markets and social Darwinism is necessarily a good society (and that refuge is shrinking). Who would have the philosophers now except for self-help book readers, cable news-segment producers, and public relations “damage control” teams? Maybe it’s not just that Philosophy felt the need to ape and compete with the natural sciences and so retreated to its own ivory tower; maybe it was driven out of industrialized consumer society and dismissed as vaguely interesting and quaint, but mainly inconvenient. (Modern religions could lodge a similar complaint, but their troubles are complicated by widespread hypocrisy and intolerance.) Academic institutions may not be ideal or particularly democratic places, but they do preserve the realm of free thinking about ideal societal models in an era when self-interest, greed and accumulation appear to be the ascendant societal values, while harmony, charity and compassion are viewed as weak.

It’s become fashionable to trash the academy as an ossified, siloed, nearly irrelevant institution that produces far less “value” than the resources it takes in. Many of those critiques are of course quite valid. Universities and their cultures are not at all above reproach and should be the subject of ongoing scrutiny by their communities. But these critiques occur at the same moment that bootstrap-pulling libertarians demand that our educational institutions make a “business case” for their existence and prove their “relevance.” The creeping corporatization of universities, particularly grant-driven-but-otherwise-cash-strapped public universities, is, in part, a quest to delegitimize the types of inquiry that do not lead to the creation of new consumer products, but instead challenge existing power structures by questioning the current arrangement of society. Like those of Socrates and Martin Luther, independent inquiry and public declarations of dissent make the powerful very uncomfortable. That is the value of so-called “academic freedom” which, when it actually functions and enables truly free expression, allows thoughtful people to boldly challenge the status quo. Philosophy deserves a place in society. Blaming the discipline for its balkanization may, in part, be a case of blaming the victim.

Our Corporate Overlords, Tech and Society

Groundwork to a Rhetoric of Technology

I recently had the opportunity to learn about a field known as the “rhetoric of science,” which is the study of the discourse around scientific topics. While the word “rhetoric” is often thought of as a pejorative, here it is a neutral term that broadly describes how we go about trying to persuade each other to a point of view using words and, quite often, do so by targeting emotions and assumptions. Each of us uses rhetoric pretty much daily, from technical arguments about politics to mundane negotiations about household responsibilities. We spend a great deal of time in conversation “making a point,” which is another way of saying that we try to persuade others that we’re right about something. In studying the rhetoric of science, scholars seek to understand how science is described, debated, and understood (and frequently misunderstood). Rhetorics of science frequently affect how specific research or an entire science is perceived, and they can affect how future work is conducted. An example is the examination of how topics like climate change and human evolution, both of which are firmly settled questions within the scientific community, have been successfully portrayed by activists as ongoing debates. Another example is the study of how simple metaphors are used to describe deeply technical topics like genomics. (Is DNA really a “map” or a “blueprint” of a gene?) Consider the recent controversy concerning Planned Parenthood and the alleged sale of fetal tissue to researchers for profit. A fiercely politicized discourse has been employed to depict a fairly routine activity—the use of human tissue for research—as something deeply nefarious. The rhetoric of science is a fascinating research area, one that we are all engaged in whenever we consume media on scientific topics, which happens with increasing frequency thanks to the ease with which information and misinformation rapidly spread via social media and cable news programs.

My brief introduction to the rhetoric of science caused me to redirect my thinking about how we talk about information systems and technology. If you know me or have been reading this blog, it probably won’t surprise you to learn that I have been labeled a technology “skeptic.” There’s an example of a rhetorical move right there. I think “skeptic”—which has a mix of connotations, some of them pretty negative—isn’t quite the right word. What I feel is that common portrayals of modern technology in our public discourse lack a satisfying amount of questioning and thoughtfulness. This shouldn’t be too surprising since, after all, most of what we hear about popular technologies comes, either directly or through proxies, from the giant corporations that make them. We learn most of what we know about iPhones from Apple (and its many allies), social media from Facebook (and its many admirers), and so on. Those who craft the messages we most often hear on these topics exploit the fact that most of us are readily impressed by sleek designs and technological novelty. While the twittersphere may contain an abundance of contrarian voices on technology topics, you kind of have to want to hear them to find them, and even then, credibility is hard to establish. It’s easy to dismiss critics as uninformed, puritanical, or simply “no fun” (consider this blog, for example). I think it’s safe to say that the most consistent and well-crafted information we hear about most technologies comes from marketers. Whether it arrives in the form of a slick advertisement, or through something more viral, like blog posts and cable news appearances by various spokespersons and consultants, the major channels of communication still favor those with the deepest pockets and largest marketing infrastructures. There are people who get paid really well to spin great stories about a direct link between new technologies and human flourishing, and they do it very well. Even seemingly neutral information sources like news programs often lack introspection, offering instead breathless “reviews” of new technologies that avoid any criticism that might alienate an audience generally awestruck by the latest gadgets and apps.

What I’m planning to do with all this is to start looking at some of the common tropes and stylistic moves that tech-evangelists use to convince their audiences of the seeming promise and inevitability of tech-mediated living. Examining the metaphors is one way to do this. Words like “interactive” and “disruptive” deserve closer inspection. So does the term “social media,” for that matter. In each case, we should be asking: what do these words mean to an audience? And do they accurately describe the states and changes they are employed to describe? Terms that invoke freedom and choice have a long history of association with market-based thinking, and they have become even more pervasive in the rhetoric of Silicon Valley. What is a “free” app exactly? What range of “choices” do we actually have in selecting and using information technology?

My main concern is that as we move from living and interacting in physical space and in real time—on the street, in the park, in the auditorium—to online existences where we interact using social media, augmented reality, gaming, and so on, we are moving into spaces not only mediated by technology, but easily manipulated by the corporations that make the technology. Interactive spaces made by corporations are not agenda-less spaces. They contain (and are) rhetorics designed to persuade. One look at the default screen of an iPhone offers numerous clues as to the priorities of Apple, which likely do not conform entirely to yours. Wherever possible, companies that can hold your attention will seek to convince you to use more of their products and services and, as often as possible, will reinforce their tech-focused, consumerist worldviews. As more and more of the information we receive is “curated” for us by the algorithms that select, say, your personalized Google search results, there is a real risk that powerful voices will dominate and hijack your access to information. Consider for a moment what could happen if Eric Schmidt, Google’s executive chairman, decided that he really wanted Donald Trump to be the next president. How much tinkering would it take to subtly change your search results to present the most sympathetic accounts of Trump and his views? Technology companies have access to enormously powerful rhetorical tools. Our actual freedoms and choices may well depend on how attentive and aware we are of that.

This is a topic I plan to return to. For now, I invite you to do what I’m doing, which is to listen closely to the words that get used in any conversation about information systems and technology (including mine) and seek out the meta in the conversation. What shorthand is used to describe complex, socially impactful developments? How are contrary voices characterized? You may start to make some very interesting observations. You may find yourself becoming something of a critic. Who knows? You might even become a “Luddite.”

Uncategorized

Do Privacy and Privilege Converge?

The short answer is “yes.” The concept of “privilege” is probably familiar to you. It describes the advantages, both subtle and obvious, that flow towards certain groups in society and away from others. It is a collection of unearned rewards, benefits, and/or advantages triggered by affiliation with the dominant side of a power system. We experience this in the United States primarily as white privilege, male privilege and heterosexual privilege. Being non-white, female, or queer, among other classifications, has been amply demonstrated to reduce one’s opportunities in the world. One need only examine some prison statistics to see how this plays out for people of color, or the evidence of pay disparities by gender to see how this plays out for women. So, how does this play out in the digital world?

We have taken to calling the experience of our online lives the “information society.” It is an increasingly apt description because it accurately describes how our social, political and legal lives are migrating out of physical space and into the digital. It should not be surprising that as both the positive and negative human inclinations found in the material world find expression in the digital world, a move to separate and segment society is finding its way there as well. The inequities that comfort some groups in modern society and oppress others are not absent from the digital world. In fact, they are expressed in more subtle and pernicious ways and may prove even harder to combat. By now, you probably know a thing or two about state surveillance due to the disclosures of former NSA contractor Edward Snowden. Maybe you were already somewhat concerned about corporate surveillance too—the practices of all sorts of entities collecting information about your browsing habits, cell phone use, and so on. While most of us probably think mainly about how all this data collection affects our own lives, something that is obscured behind the basic problems of unregulated surveillance is how the dramatic increase in surveillance capabilities and data processing affects some people much more than others.

The journalist Natasha Singer has written a number of articles about the profiling and scoring of consumers based on what can be discovered about them via their online activities and habits. As Singer illustrates, new scoring algorithms—which work similarly to traditional credit scores, but without any regulatory restraint—have the power to influence everything from your eligibility for a loan or a job to who you date. These are proprietary systems that operate with a great deal more information about you than you might imagine. The numbers you dial on your phone, the websites you access, the purchases you make (or decline), where you drive your car or swipe a transit pass, plus thousands of other data points gleaned from the increasingly transparent nature of our transactional and information-seeking lives, are aggregated, crunched and calculated for efficient ad-targeting…and social sorting. You could be tagged a hot prospect for a great deal on airline tickets, or you could be identified as a credit or security risk and denied an apartment rental, all through an entirely opaque and unaccountable web of algorithms and hidden interactions.

While you may or may not be concerned about this for your own life opportunities, it can have pretty disastrous effects on certain segments of society. Consider the intersection of race with the technology of modern policing. As one example, body-worn cameras used by the police (which hold out much promise in addressing police/civilian violence) have raised concern due to everything else that is collected by those cameras during police interactions, and the web of technology that can act on that data. Police body-cams don’t only capture video of suspects, but also of the people around them. Being the neighbor of a troublemaker dramatically increases your chances of playing a role in a police video. Facial recognition software has advanced to astonishing accuracy and will only get better. The increasing availability of police data to the public, joined with facial recognition software and digital scoring algorithms, suggests a scenario in which one’s “scores” could be downgraded simply for living next door to a police target, resulting in curtailed opportunities and choices. Even if this example of “bycatch” seems far-fetched to you, consider at least that the actual targets of police activity are also being unfairly impacted in new ways by technology. Racial bias in arrests and convictions is well established and pretty hard to dispute. This increased likelihood of being targeted by the police based on race now also means an increase in data about people of color entering the data stream and staying there pretty much forever, where it can permanently damage one’s future prospects. Add in the web of parsing and scoring algorithms and a picture begins to emerge of race-based algorithmic discrimination.

Race is not the only privilege factor that surfaces inequities in the information society. Women are having a very different experience of information technology than men. Revenge porn, which is the unauthorized publication of nude or pornographic imagery, typically made public by disgruntled ex-boyfriends, overwhelmingly affects women and can have profound social and economic consequences as well as put women in physical danger. Revenge porn victims have been fired, denied employment opportunities, socially ostracized, and stalked. While some states are getting around to criminalizing the practice, many of the websites that publish revenge porn media are out of the reach of law enforcement. Just as with criminal incident data, it can be nearly impossible to remove every instance of the data once it shows up somewhere online. Women experience other forms of harassment as well, including threatening behavior by “trolls,” as happened in the Gamergate controversy earlier this year.

When a person or organization captures information about you and then uses it against you, that is a privacy-based informational harm. It is also an exercise of power. What opaque and proprietary monitoring systems know about you empowers them to manipulate and control you, while simultaneously diminishing your autonomy. We are all subject to this form of disempowerment at the hands of those who operate the technology upon which we are increasingly reliant. Lacking relative power at the outset, as is the case for the unprivileged, places some people in a far more vulnerable position, one they are unlikely to be able to extricate themselves from.

For the affluent and privileged, the market is responding to the increased awareness of, and sensitivity to, ubiquitous surveillance. For example, there are costly smartphones available that ship “hardened” with enhanced encryption and “data leakage” protection. A new industry called “reputation management,” aimed at businesses and the affluent, is growing rapidly, enabling its clients to manage their online profiles in order to minimize any potential damage from any sort of bad behavior. Unprivileged people are more likely to have black marks against their profiles and are also likely to lack the economic means to buy expensive phones and reputation cleanup services, while the privileged can avoid discriminatory surveillance practices and can pay to maintain squeaky-clean online profiles. This means that existing social divides will just get deeper and more entrenched. In the information society, the unprivileged are increasingly captured and shamed with demeaning and punitive data and can’t do much but become even more marginalized.

Philosophy, Privacy

Communitarianism: Is Trading Privacy for the Common Good, Good?

This week, I had the remarkable opportunity to meet Amitai Etzioni, the internationally acclaimed scholar and the founder of the “communitarian” movement. Communitarianism is, in short, a moral philosophy that eschews the ascendant radical individualist worldview in favor of one that balances personal liberties with the common good. Communitarianism stands in contrast to libertarian teachings, which argue that guaranteeing personal autonomy and stressing the rights of the individual are the best ways to achieve a just society; in this view, we owe nothing to others so long as our actions do not interfere with their life choices. Communitarianism suggests, instead, that we co-create the best possible society by working with others to negotiate social norms. What I find appealing about this approach is the assumption that we do owe something to one another and that we are responsible for each other. While personal autonomy and liberty are appealing facets of a life of self-reliance and a potential bulwark against tyranny, the central ideals of the libertarian/individualistic outlook lack a strong expression of compassion as an element of the human condition.

Etzioni started the day’s talk by asking, “What makes a good society?” The process of answering this question is not straightforward. Communitarianism assumes that there are competing claims for what is “right” and that each claim is potentially valid. We have to struggle together in our communities until our disagreements—even those that may have devolved into violence or marginalization—fade into a mutually satisfactory moral agreement. No single actor wins outright. Rather, we all arrive at an appealing conclusion together. Achieving this in reality sounds challenging, and it is, but Etzioni cites historical examples in the United States that reflect positive transformations in culture and law, such as the evolution in thinking about the social and legal claims of African Americans and other non-dominant peoples. He also points to the contemporary example of gay marriage as a case where a strong moral disagreement has rapidly faded and is now yielding an astonishing (though far from universal) level of agreement. The promise of solving societal problems through painstaking consensus building may be utopian, possibly unrealistic. On the other hand, in an information/opinion-rich world that seems increasingly divisive, even among people who share culture and history, a vision that suggests that all this arguing and conflict ultimately leads to satisfying agreements is very comforting.

However, I hesitate to identify as a communitarian. Attempting to balance personal liberties with social good, as instructed by communitarian thinking, is not without its problematic aspects. While I have some faith that finite communities of equals are capable of negotiating their way to moral agreement, I am less convinced that we can count on consistent or lasting results with larger institutions, including the various organs of government power and, most acutely, corporate power. Etzioni seems to have more faith in the likelihood that, over time, monolithic institutions can be motivated to act fairly and openly in negotiations involving trading cherished rights for a common good, like “security” or “commerce.” Time and again, both governmental and corporate concerns (increasingly linked) have demonstrated that they are not fair dealers when entrusted with our liberties, and the outsize power they bring to the table tilts the balance towards whatever ends suit their goals. Concerning privacy in particular, Etzioni’s view is that, while valuable, privacy must yield to public interests, such as enabling reasonable physical and property searches and weaker data encryption in order to support law enforcement activities and preserve public order. Unfortunately, we know all too well that entities like the National Security Agency are firmly skewed towards achieving security and other arguably public-interest goals by any means necessary, including the wholesale sacrifice of privacy. Similarly, corporations that traffic in personal information aren’t driven by any common good, but by the much more limited “good” of profit. Balancing the privacy interests of their target users against financial opportunities is not an option. Money talks, liberty walks. In such a climate, a more defensive posture for rights-holders seems warranted.

Similarly, Etzioni believes that overly permissive press freedoms are not valuable when they put people’s lives at risk, as with recent cases of news media outing undercover operatives and divulging state secrets. Etzioni feels that we grant news editors too much power over the fates of others. It is compelling to question the presumed nobility of the press, but I have less faith in governmental and corporate actors who routinely obscure and classify information about their actions, often irrationally or with the goal of hiding nefarious behavior. I tend to side with the argument that sunshine is an important pillar of democracy.

Returning to the topic of privacy, Etzioni invoked the metaphor of the village. In the village, everyone knows everyone’s business. As a result, residents behave better out of fear of ostracism and censure by their neighbors. In this context, sacrificing privacy has an agreeable reward, which is social concord. However, if the village is, instead, a nation, and instead of one’s neighbors, the negotiation is with one or more faceless institutions that wield incredible power and are not trustworthy, the desire to sacrifice privacy for the assurances they provide is much less appealing. Maybe communitarianism just doesn’t quite scale. Or maybe, despite my faith in humanity and belief in the goodness of co-creating a just social order, I’m just not patient enough to await a long arc of history that will achieve a communitarian balance.

Our Corporate Overlords, Privacy

Lessons in Workplace Privacy: Sony’s Emails Could Easily Have Embarrassed Them Without Hacking

I’m fascinated by the hand-wringing and disbelief that accompanied the recent hack of Sony’s network, and particularly about the disclosure of embarrassing internal emails. Most of the commentary regarding the Sony hack has concerned either the salacious internal gossip that was revealed or the potential suppression of a mediocre movie whose plot may have been a catalyst for the hack, along with ongoing analysis of the security challenges for corporate networks in general. Yet, there has been little discussion of the “vulnerable-by-design” nature of email and the purposeful weakening of any expectation of privacy in workplace communications. Even if Sony’s leadership had responded more adroitly, and its technical staff had been able to rebuff the massive attack on their network, Sony’s email was already a potential weak point in the defense of sensitive information before any extortionist hackers got involved.

Consider, first, that email was designed in a more innocent age and traces its roots back to a time before the World Wide Web and the Internet as we know it today. An example of the limitations of the original design is the ease with which a spammer or phisher can “spoof” a legitimate email address, which basically involves swapping one address for another with about as much fuss as copy/pasting a sentence in Word. As revelations in recent years about the security weaknesses of the domain name system (DNS) remind us, there are venerable, fundamental systems operating on the internet that are nearly unpatchable and supremely vulnerable to the corruption and malfeasance of the modern age.
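
To see how little the underlying message format resists spoofing, here is a minimal sketch in Python (standard library only). Nothing in the message itself verifies that the “From” address belongs to the sender; it is just a text header the composer fills in, which is why add-on protections like SPF, DKIM, and DMARC had to be bolted on decades later. The addresses below are invented placeholders.

```python
from email.message import EmailMessage

# Build a perfectly well-formed email message. The From header is simply
# whatever text the composer supplies; the message format does not check it.
msg = EmailMessage()
msg["From"] = "studio.chief@example.com"      # arbitrary, unverified claim
msg["To"] = "colleague@example.com"
msg["Subject"] = "Re: that project"
msg.set_content("The original design trusts whatever the sender writes above.")

print(msg)  # the serialized headers faithfully repeat the claimed From address
```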

But there is more to the story of email’s vulnerability to disclosure than its technical limitations. We have actually chosen to make email particularly insecure, especially in the workplace. Numerous times, employers have gone to court and consistently won cases upholding their right to read and monitor employee email without any specific cause or provocation. In addition to lower court rulings, a Supreme Court decision makes the employer’s right to monitor employee communications pretty clear. There have even been cases of employers seeking to legitimize the monitoring of non-work emails of their employees, and sometimes winning those too. Email privacy stands starkly apart from the sacred trust conferred on a sealed letter headed to the post office. The grim acceptance of email content (and other electronic text) occupying some uniquely not-private status has been the norm for a very long time. Sony, its executives in particular, relied on an extremely untrustworthy medium to make snarky, even offensive comments about actors, projects, and President Obama, but they really should have known better. Unless we’re willing to wage a righteous fight to enshrine email, along with other workplace communications, with the same legitimacy enjoyed by the written (and mailed) word, we all need to grow up right now and stop pretending we can freely dish about our coworkers, clients, bosses, and other important people over email at work without repercussions. Let the Sony email hack serve as an eye-opening reminder to us all.

While we’re on the topic of workplace privacy intrusions, it bears briefly examining other intrusions to suggest that there is a progressive erosion of workplace privacy and an ever-expanding culture of worker surveillance. If you work in a modern office, perhaps you’ve heard of “presence,” which involves using cues like a colored square in an email program to indicate your engagement with work. Maybe it’s green whenever you’re logged in and using your computer actively, red when you’re away or “busy,” and some other color for when all the system knows is that you’ve stopped typing, presumably to indicate that you might have stepped away, or you might be talking directly to a colleague, or you might be daydreaming. You’re forced to expose the moments that you dare to stop typing or clicking at your terminal like a good robot, even if you have otherwise satisfied the definitions of “present.” Newer phone systems, through integration with calendaring and messaging systems, helpfully supplement all this surveillance in the name of workplace efficiency and visibility. All in all, each generation of office technology seeks to inform others more and more about our every utterance and inclination.
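
For a sense of how crude these signals are, here is a toy sketch, in Python, of the sort of logic a presence indicator might apply. The thresholds and states are invented for illustration, not taken from any actual product; the point is that a rich human situation (talking to a colleague, thinking, daydreaming) gets collapsed into a colored square derived from a couple of blunt inputs.

```python
from datetime import datetime, timedelta
from typing import Optional

def presence_color(last_input: datetime, in_meeting: bool,
                   now: Optional[datetime] = None) -> str:
    """Toy presence logic: reduce activity signals to a status color.

    Purely illustrative; no vendor's real algorithm is being described."""
    now = now or datetime.now()
    idle = now - last_input

    if in_meeting:
        return "red"      # calendar says "busy"
    if idle < timedelta(minutes=5):
        return "green"    # recent keyboard or mouse input
    if idle < timedelta(minutes=30):
        return "yellow"   # stopped typing: away, in conversation, or daydreaming
    return "gray"         # no signal at all; the system gives up guessing
```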

Given such a workplace climate, should the staff at Sony really have had any expectation that their inner thoughts and most tasteless humor would not be made public someday? It’s not enough to cluck our tongues and recite abstractions about how electronic privacy, particularly in the business world, is so shockingly vulnerable to the efforts of hackers and other bad actors. In the case of workplace email, a culture of accessibility, disclosure, and exposure is built right in.

An in-depth NYT overview of the Sony hack can be found here.

Our Corporate Overlords, Privacy

When Copyright Met Privacy

As I’ve mentioned in an earlier post, I have broken copyright law and probably will do so again. I’ve made copies of books and articles–sometimes using the crude tools of the late 20th century (aka: the copy machine), and other times using far-easier tools like the “right click.” I’ve also copied “records” onto cassette tapes, burned CDs from store-bought ones, and received dozens of hours of music on flash drives from friends. Add to that the number of times I’ve grabbed an image from a website to use in a presentation of some sort. I know that these things are technically no-nos, but I don’t lose much sleep over them because my “piracy” is incredibly small, benefits only me, and doesn’t prevent me from spending a significant part of my income consuming art of every kind. If I were to stop “infringing” on copyrights tomorrow, I would likely spend the exact same amount each year on music, books, tickets to performances, and other arts products.

For most of my life, it has been nearly impossible for record labels and book publishers to detect my acts of copying, which is perhaps why they haven’t lost much sleep over it either (except while targeting technologies like the VCR, the DAT machine and digital mini discs, RIP). They also haven’t seemed too concerned about used bookstores and the second-hand bin in record stores (for all that remain of those businesses), even though every sale of used music/book media could arguably be said to be one less sale of new media, with the profits of those sales not returning to the originators of the work or their publishers.

(I have to pause here to mention that I do believe in copyright. I believe in the validity and purpose of intellectual property. Artists and creators deserve to get paid and should be able to pursue violations of their copyrights when it is appropriate, and when that pursuit is not unduly elevated over other valued rights, and especially when the rights-violation is egregious. That said, there have to be limits on what any rights-holder should be allowed to do in pursuit of the protection of their stake. This blog should never be construed as some libertarian screed against intellectual property rights, or an apologia for ruthless entrepreneurs who happily dismiss well-designed business models (and the people they feed) in the self-serving name of “innovation” and something chimerical, though interesting, called the “sharing economy.”)

The major rights-holders are now very motivated to prevent me and you from making copies of music and books and movies and so on. This has very little to do with the interests of working artists, but more to do with the technical ease with which we can now copy things, and also with the myriad ways that such copying can be observed and tracked like never before. The industry is also inventing new ways to control resale and personal sharing, and this is very novel. For example, books that you buy for a Kindle can’t be handed off to someone else. You can just barely “loan” a Kindle book to someone, but the terms suck. The enormous companies that control the majority of copyrights and eBook publishing are finding new ways to follow you into the formerly private spaces of your library and music collection–your home–to monitor and modify how you experience the creative works you may or may not have directly paid for.

I don’t blame them for wanting to do this: they are corporations, which are predatory organisms whose evolutionary mandate is to consume every ounce of profit that can possibly be consumed. But I can object to how they go about it. A feature of sharing, reselling and copying of books and music that we previously took for granted was that it was largely undetectable: it took place in the sanctum of our homes, our living rooms. Places, it seems, that are no longer truly private. Many “infringing” acts are still undetectable, as they should be, just like historically unregulated acts such as reselling a painting without paying a fee to the painter (or his publisher) or donating a book to your library’s book sale. With the advent of new ways to consume media, like the Kindle and iTunes, and due to the logging and tracking technologies embedded in internet and mobile device use, it has become very easy to detect and control all sorts of previously unrestrained activities over creative works, and rights-holders are very interested in doing just that.

Consider this: During Viacom’s major lawsuit against YouTube’s parent company, Google, in which Viacom sought damages on the claim that YouTube was illegally profiting from hosting copyrighted content, Viacom won the right to view YouTube’s website logs, revealing information about every user, every viewer, and every video on the site. Although the parties eventually agreed to anonymize the user data, we know that anonymization doesn’t really work, and that evidence of my guilty-pleasure binge of watching old Journey videos (at work no less, identifiable by my IP address for sure) some years ago was handed over to someone without my assent. This may seem like a small thing, except that this really is the thin end of the wedge. Privacy protections for our surfing habits are already tremendously weak. Rights-holding corporations are going to exploit that, and their lobbying power is very strong. There is something very fucked up about my casual video viewing habits being scooped up and entered into evidence as part of a lawsuit that had nothing to do with me. And it really could have gone much farther: Viacom specifically sought to view all of the videos on YouTube marked “private” by their owners. While that request wasn’t granted, I think it could have easily gone the other way. Do we own those “private” videos on YouTube? Should we have an expectation of privacy over our emotional outpourings, love-letters, and who-knows-what that falls into the category of a private YouTube video? Maybe, maybe not. Either way, the Viacoms of the world do not care. If they can demonstrate that our privacy rights are not as important as their copyrights, our already tattered privacy protections in the electronic world will be further eroded. Free speech, and even historical expectations of “ownership,” are unimportant. What matters is that additional profits can be made or protected, and all else is frivolous.

By the same logic that leads Amazon to control how we use eBooks, Apple to control how we use mp3s, and media files to be increasingly encoded with special signatures governing whether and where they can be used, we should expect that, wherever possible, the industry is also watching our use, or could be forced to reveal that use to other interested parties from time to time, and that, increasingly, what we do in our homes with our art collections will no longer be our own damn business.

Our Corporate Overlords, Technology and The Law

Copyright Shaming Won’t Work

I’ve been reading Cory Doctorow’s latest book, Information Doesn’t Want to Be Free, and it’s a good read. There is a lot of good stuff in here, even if I don’t exactly agree with his insistence that the world is as good as ever for artists who want to create work and get paid for it, despite the abundance of new ways to share that work online. Doctorow does offer a fresh perspective on the “copyfight,” and his foundational arguments are compelling.

Including this one: shaming and prosecuting people for copying digital artworks without permission is futile, and it’s mean. I’m not talking about people who are making money from ripping and reselling exabytes of digital art. (I refuse to resort to the stark and depressing term “content” to describe what humans make to express themselves.) I’m talking about the threats and actions being taken against individuals who acquire media from unauthorized places or in unauthorized ways for their own use, and who share it with friends, which is what most “piracy” is. Don’t get me wrong, I am very concerned about artists getting paid. I tried to make a living as an artist for many years and I know many people who do it now. I want them to get paid for their work, and I struggle with how complicated it has gotten for that to happen, but this is nothing new. Being a professional artist has always been very, very hard. I also think that the companies that publish and distribute creative work deserve to get paid. But the problem we’re facing is not simply that a bunch of people are sitting at home copying files and thereby cutting into the incomes of artists and their enablers; it’s that the entire ecosystem of arts and entertainment has changed radically, and the villains in the highway robbery known as the “entertainment business” are still the same villains as 20 years ago: the major record labels, the movie studios, and, as Doctorow points out, the newly powerful “intermediaries” like Apple and Amazon. Working adversarially at times, and in concert at others, “the industry” has reimagined important parts of the arts business model in ingenious and artist-cheating ways that offer few, if any, additional benefits to “honest” consumers. While reinventing the business, the industry has gone to astonishing lengths to create new crimes and to increase the seriousness of old ones, and its motivations have virtually nothing to do with artists. It’s unfortunate that most working artists have to labor under the overbearing advocacy of the arts and entertainment industry, because artists still do have rights to assert. It’s just not clear that they would choose to assert them the way Sony and Universal and Amazon do.

I have copied music and movies. Lots of them. So have you, I imagine. In the prime of my music-making and consuming life in the 1980s, 90s and early 2000s, I inhaled new music. I bought a great deal of music, but I also made cassette tapes and burned CDs of other people’s records, tapes and CDs. I watched entire seasons of The Sopranos on VHS tapes lovingly mailed by a girlfriend’s mom. These things were and still are illegal, and I had some vague understanding that this might be “wrong,” but I didn’t care. Why? Because I was still paying good money for the stuff all the time. I was (and still am) supporting artists in myriad ways, and a lot of people still do, albeit in new ways that may not be tabulated as “units” like the old days. But a key change has taken place in the relationship between consumers and creative products. In the days when most media arrived in some sort of package, I felt I had complete ownership rights over what I bought: total control of it once it was in my hands. The emerging business model now is “licensing.” You pay to use some intangible media, but you can’t do anything else (legally) with it, like share it with your spouse or friends. This is a significant paradigm shift for consumers who have been passing around books for hundreds of years, and recorded media for over a century. This radical shift in how creative works are bought and paid for, especially the new limits inherent in the deal, is totally frustrating for people who just want to buy a record and then lend it to friends. The business model of “use it for a moment and it’s gone” wasn’t built for us, and it pisses us off and makes us fairly sanguine about breaking the rules so that it does work for us. This is exacerbated by the fact that the same technology that makes media increasingly intangible also makes it much easier to share. This is not the fault of the sharers, and the industry has also benefited and found lots of new ways to make money from the same intangibility. Copying files remains very easy to do, and since it’s easy, we’re all going to use the objects we have at our disposal, because that’s what free, imaginative people do. They don’t wring their hands and recite honor codes handed down by corporations when they want to hear a song. That’s why I owned a tape-to-tape deck in the 1980s and used it to copy music. Trying to convince people not to use the tools in front of them is simply untenable. Making us all outlaws over it is a Kafkaesque absurdity.

Here is a scenario: My wife Sarah and I take a trip together and bring our Kindles, each loaded up with four books. Sarah finishes a book and says “you’ve got to read this.” I say “great, I just finished my book. Give me yours.” The problem is that it’s stuck on her Kindle and she wants to read one of her other books. She’s not much interested in the rest of mine. Here is where the trouble starts: I just want to borrow a fucking book. I don’t want to manage user accounts or visit an Amazon website to arrange a 14-day loan, or whatever laughably paltry solution they offer. If, at that moment, someone handed me a little box that could make the two Kindles share books, but it ran an illegal program and using it violated some non-negotiable terms of service I was forced to agree to in order to have any technology in my hands at all, I would very likely say “fuck it” and “yes please.” In one form or another, this is what is happening everywhere, except that it’s a lot simpler to copy using the Internet, and it’s not going to get any harder. Making it a crime is not the solution.

This is a rant that does not offer solutions to the bottom-line challenges for artists, and I know that. There are serious problems with the business of being a working artist in the 21st century, and file-sharing does play a role. But it’s technically infeasible to stop people from sharing media files, and the sharing is as old as the means to record and preserve the work. It’s incredibly easy to right-click and choose “save,” which is why it feels like, at worst, a “thought crime” and not a real crime at all. No amount of shaming is going to stop that, and prosecuting people only demonstrates a very scary sort of corporate power that we are seeing more and more throughout society. There are other ways to get people to pay for art, and many of them reward the artist directly rather than through huge entertainment corporations: live performance, pay-what-you-will schemes, merchandising, etc. These aren’t optimal money-makers for all creative artists, but they work for some, and arguably have given rise to new types of creativity. Whether or not the new deal rescues art-as-we-knew-it is an open question, but I don’t think we can continue to rely on the arts-business models of the 1950s and 60s. It was a good time, but things have changed.

Power and Privilege, Privacy

Privacy and Privilege (first of many)

I have been thinking a lot about how privacy intersects with privilege, meaning what one’s wealth and position mean in a surveilled world. A couple of recent talks in Seattle by Cory Doctorow, in which he alluded to the connection between privacy and privilege, reminded me that I can’t think of anyone else who has publicly addressed the issue, even within information ethics circles. I think it’s time we started addressing it. Setting aside, for now, epistemological and normative arguments about whether, or how much, we should be concerned about the ongoing diminishment of personal privacy by both the state and the private sector, I think it can safely be argued that privacy does have value to most people. Since Edward Snowden’s revelations of massive NSA data collection, changes in attitudes towards data privacy have begun to emerge. People using digital gadgets and systems do cherish their privacy, even if they don’t exactly rend their clothes or take to the streets in response to losing it. Even those who claim to eschew the value of privacy still demonstrate an attachment to it, as with Mark “privacy is an evolving concept” Zuckerberg and news of his purchase of the houses surrounding his own in order to preserve his privileged isolation from the masses. As in the physical world, it’s hard to imagine that those with the power to attain more privacy will not seek to attain it in the digital realm.

We can easily point to ways in which privacy loss is already experienced more acutely by marginalized and poor people. Food stamp recipients (no longer using coupons, but trackable payment cards) reveal their grocery lists whenever they shop so that they can be prevented from purchasing banned items like alcohol, while those receiving corporate welfare are not prevented from buying alcohol, vacations, governments, etc. Low-income victims of domestic violence must reveal intimate facts of their lives to the state in order to gain a measure of protection, while a wealthy victim might avoid this using paid legal counsel, bodyguards, self-imposed protective custody at a resort, etc. Finance companies have recently begun using remote ignition locks to disable cars whose owners get behind on payments; such devices are unlikely to be required of Lexus buyers, even those carrying debt.

Closer to home, I had an experience that revealed the limits of my own privilege where privacy was concerned. I recently renegotiated my auto insurance and, due to an unfortunate mishap involving a rental car, a primitive road, and a tree, my initial quote shot up hundreds of dollars. However, I was offered the option to cut that increase in half in exchange for attaching a device to the data port of my car that logged and transmitted information about my driving habits. The system, called “Rewind,” tracked and scored me by rates of acceleration and braking, and also by the time of day of my trips (late-night driving was a demerit). While my driving destinations weren’t part of my scorecard, that information was also collected and I could view it on the company’s website. After six months of surveillance and a sufficient number of trips that met this company’s standards, my rate increase was reduced as promised. Had I been wealthier, there are many ways I could have avoided that surveillance. I might have chosen full coverage on the rental car, for a start, which would have made the rental-car accident a non-issue for my insurance carrier. I could have paid someone to drive a car safely on my behalf. I also could have said “fuck it” and chosen to pay a higher insurance rate rather than agree to surveillance. I was surveilled precisely because I was unable or unwilling to pay to opt out. We see an increasing number of these “choices” offered to us, from the grocery store discount card to that “free” webmail account to “free” mobile apps that collect info about our travels and habits. An increasing number of desirable services exchange surveillance for the opportunity to save money or use unpaid services. As Michel Foucault pointed out in “Discipline and Punish,” surveillance is a form of discipline and an exercise of power. My insurance carrier’s “option” for me to accept voluntary surveillance in exchange for a rate reduction is a nakedly disciplinary act, and one that I was subjected to because I couldn’t bear to pay the extra freight.
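
The insurer never showed me its formula, so the sketch below is purely illustrative: the thresholds, weights, and the score_trip function are all invented. What it conveys is the shape of the thing, counting hard braking and acceleration events in a stream of speed samples and docking points for late-night trips, and, more to the point, how much raw data about every single trip has to be collected and transmitted before any discount can be calculated.

```python
# Purely hypothetical reconstruction of a telematics "driving score".
# The insurer's actual formula is not public; thresholds and weights here are invented.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Trip:
    start: datetime            # when the trip began
    speeds_mph: list[float]    # speed sampled once per second for the whole trip

HARD_EVENT_MPH_PER_SEC = 8.0   # invented threshold for a "hard" brake or acceleration
LATE_NIGHT_HOURS = range(0, 5) # invented definition of penalized late-night driving

def score_trip(trip: Trip) -> float:
    """Return a 0-100 score for one trip; higher is 'better' by the insurer's lights."""
    hard_events = sum(
        1
        for prev, curr in zip(trip.speeds_mph, trip.speeds_mph[1:])
        if abs(curr - prev) >= HARD_EVENT_MPH_PER_SEC
    )
    score = 100.0
    score -= 5.0 * hard_events            # invented demerit per hard brake/acceleration
    if trip.start.hour in LATE_NIGHT_HOURS:
        score -= 10.0                     # invented demerit for a late-night trip
    return max(score, 0.0)

def score_period(trips: list[Trip]) -> float:
    """Average the per-trip scores over the whole monitored period."""
    return sum(map(score_trip, trips)) / len(trips) if trips else 0.0
```

The arithmetic is trivial; the surveillance is not. Every input to it, each trip’s start time and its second-by-second speed trace, exists only because the device in the data port recorded and transmitted it.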

However, it’s not as if there are currently all that many ways for the more privileged to become invisible to the web of surveillance that follows us through the public square, across the thresholds of our homes, and into places that we once believed were entirely walled off from the rest of humanity. In addition to making a lot of the same choices as everyone else to save a little money or get seemingly free stuff, privileged people use credit cards, shop online, view pornography, play fantasy football, send email, and buy the latest gadgets like iPhones with as much vigor as, if not more than, the rest of us. So many of our consumer and information-seeking habits are now mediated by technologies that expose information about us that it’s nearly impossible for anyone to completely dodge the giant maw of data harvesting. However, I think this is only a temporary condition, at least for the well-off.

With market societies being what they are, and with privacy’s value increasingly debated and recognized, it’s likely that in short order we’re going to see a burgeoning market in privacy products and a myriad of technology choices for those who can afford them and are sophisticated enough to use them. Whatever your views about why surveillance is becoming so prevalent and so invasive, one thing is clear: those with power, either financial or political, are advancing, and benefiting from, the erosion of our cherished privacy rights. History shows that those closest to power are in the best position to avoid losing what they cherish.

Some attribution credit is due to Cory Doctorow, who mentioned some of the underlying facts alluded to in this post.
