Power and Privilege, Tech and Society

Let Someone(thing) Else Decide

When times are hard and all around us is evidence of human frailty, many of us dream of reorganizing the world into an ideal state using logic and purely rational thinking. The past year has been a hard one for me. I suffered a devastating personal loss, I limped to the conclusion of my PhD, and I moved thousands of miles away to chase a job while leaving behind the communities that have long sustained me. In the background have been all the shared and personal hardships of a global pandemic along with a climate disaster and deteriorating geopolitical conditions. Add the acceleration of real and imagined conspiracies and deceits and some truly scary and disappointing world leaders and stir. One could be forgiven for losing hope that we–by which I mean any substantial collective body–are capable of making good decisions for ourselves and the world.

Recently, I was talking about artificial intelligence with some truly smart, thoughtful people, including my friend and colleague David Newhoff, who posed this provocation: “I have always been skeptical, if not cynical, about ceding human agency to technology, and this despite being rather cynical about human beings. Lately, as we are forced to accept just how fragile liberal democracy is—because it is clearly too difficult for many humans to preserve—I admit that I have lately wondered whether risking governance by AI would not be preferable.”

It’s important to note that David is no tech-utopian–quite the opposite actually–and this provocation is meant to express skepticism about human nature rather than offer a sincere proposal. But I think we can all relate to sometimes wishing there was something better than us to take over the job of taking care of things. Recent evidence suggests that despite centuries of post-enlightenment faith in human reason, we have not achieved the flourishing of humanity that was promised based purely on our best intentions and essential “good will.” Lately, we haven’t even lived up to the very imperfect ideals of western democracy which, having failed to achieve race or gender justice, not to mention lasting economic justice in the centuries since it took hold, seems like it could be heading us towards oligarchic despotism through the ballot box. In light of these anxieties, isn’t it tempting to believe that a utility-maximizing AI would make better choices about desperate migrants, the climate disaster, and the distribution of food, medicine, and education than short-sighted and selfish humans? Couldn’t such systems nudge or direct us away from a colonial, white-supremacist, billionaire-centric future to one of liberation and shared prosperity?

You already know the answer I am heading for here, which is ‘no.’ And the reason is deceptively simple: our technologies–all of them–are not better than us. They are us. To quote Tarleton Gillespie, “technology is culture rendered durable.” In other words, everything that we experience as culture, including all that is both right and wrong with society, is present and reproduced in our technology, and this is true no matter how “smart” or “intelligent” that technology appears to be. In particular, we often get consumed by visions of artificial intelligence technologies as animate, quasi-humanoids that execute decisions using pure logic, with complete information, and to the highest standard of care. This description is even true of the dystopic visions, which typically retain the idea that AI beings are so much smarter and more capable than humans even as they plot our destruction. While it may someday be the case that such beings walk among us as our guides or rulers (either beneficent or malign), we are so very far from that. So-called “super-intelligent” technology does not currently exist and may never exist as it has been so richly imagined in popular narratives. Even if such beings do eventually appear, we have far more mundane technologies influencing our futures right now. Among these are automated decision systems like recidivism-risk prediction systems and fraud detectors used in government, along with recommender and social media algorithms that shape and channel human understanding and engagement with the world. These are built by humans, dependent on data and designs constructed by humans, and exist to carry out human-authored objectives.

Sure, there is tempting promise in intelligent technologies. AI-assisted medical diagnostics, disaster management, and learning technologies may prove themselves and contribute to societal uplift and reduced suffering. However, where a well-designed and thoughtfully implemented system might overcome some specific human bias or failure, there is tremendous risk (and numerous examples) of creating or exacerbating other harms. The main source of failure in technology systems is not simply that they are ‘broken.’ Rather, it is that each one is built upon something–either human or machine–that came before it, including prior technologies and the choices that went into making them. As Susan Leigh Star offers, new technologies are “built on an installed base.” Nothing springs from a vacuum from which some ideal of perfection might be reached. This is as true of our technologies as it is of our political and social systems. Every technology is an embodiment of human history and culture.

As with human politics, the work of achieving anything good (or at least ‘better’) with technology is a mix of reflection and accountability. By reflecting, we come to understand what systems actually do in the world and then apply pressure to push them in positive directions. A key difference between human decisions and automated ones is not their internal mechanisms but their accessibility. Humans can make terrible choices, but we have far more experience addressing that by holding humans accountable and demanding explanations. We have far less experience doing this with machine learning algorithms. Technological development and assessment can only be done by a tiny class of technological elites (coders), and even they struggle to decipher the reasoning behind the decisions made by the most sophisticated systems. Consequently, an AI system that appears to be “better” than a human is probably also one that most of us cannot understand. Our socio-economic-political systems are very imperfect and can be opaque, but they are also always evolving. These human systems are co-produced works that many more of us can take part in. And they are the key ingredients to our technologies, from the choices about what to build to who is granted the privilege of using them and for what purpose. Ideally (and often) social evolutions are driven by what some plurality of people actually want, or something like it. While this is somewhat true of technology, the close relationship between technological innovation and capital means that most advanced technologies are built and operated by a very tiny subset of society, and done so within a market-driven, dominating worldview. Technology has politics, as the saying goes. When we stop investing in improving the human systems that are the sources of misery and focus instead on improving technology in the hopes that it will do better than us, we miss the point about what technology is.
If it is us, then the work of social justice continues to be the work of improving ourselves. Only through that work can we hope to create technologies that further it.

One Nation Under Surveillance, Power and Privilege, Tech and Society

The Dystopian Path to Bicycle Safety

As an information ethicist who is generally skeptical about digital products and services whose business model is surveillance, I was struck with some serious internal conflict by a recent story about ‘Safe Lanes,’ an app for reporting cars and trucks parked in bike lanes. You see, in addition to being an academic, I am a regular bike commuter. Like other urban bicyclists in the United States, I experience a mix of exhilaration and fear on my commute, where inattentive or obnoxious drivers and inadequate bike lanes can make biking feel very unsafe. I used to think risk-taking was the price of urban biking and took some pleasure in dodging cars and powering through my commute. But having racked up decades of scrapes and scares, my sense of adventure is waning. While Seattle drivers are relatively decent about giving way to bikes (it’s a pretty ‘sporty’ city after all), collisions–sometimes fatal–between cars and bikes are alarmingly routine. The city’s department of transportation has added a lot of bike lanes since I’ve lived here, but enforcement of the right of way of bikers is nearly non-existent in my experience. By way of example, an intersection by a police precinct in Seattle’s Capitol Hill neighborhood has a marked space reserved for bikes waiting for the light. That space is frequently occupied by drivers–in police cars. As I know from having had the privilege to bike in places like The Netherlands and Denmark, rigorously enforced bike lanes are a game changer for getting more people (all ages and genders) out of cars and onto bikes. The dramatic increase in bike lanes in US cities in the last few decades has been an incredible boon, but lax enforcement leaves many folks wary of using them.

Enter Safe Lanes, an app that uses smart phone hardware to capture the image and location of vehicles blocking bike lanes. The app uses technology similar to police license plate readers to identify the plate number of the car and sends this information to traffic enforcement. While options exist for bicyclists to report blocking cars through other means, like calling the police on a non-emergency number, most of us cannot be bothered to do this. It’s time-consuming, and even if one calls, there is a sense that nothing is likely to happen or not soon enough. While I am the kind of neighbor who reports litter and graffiti, calling in a parking nuisance is pretty unsatisfying. The idea of Safe Lanes is to make reporting bike lane blockers fast and easy, thereby increasing the likelihood that a report will actually lead to a ticket and maybe even change driver behavior. Even for a surveillance-averse person like me, the allure of punishing drivers who make me feel unsafe is very powerful. I was tempted.

But Safe Lanes doesn’t just stop at taking a user’s report and forwarding it to the authorities. Once a car or truck has been reported through the app, the image remains visible in the app for all Safe Lanes users to see, along with statistics about how many times a vehicle has been reported. In other words, it’s not just a reporting app but a shaming app. The persistence and display of the user-generated reports superimposed on a city map carries with it the implication that, in addition to being diligent citizens who report wrongdoing, we are also expected to join a community of fellow reporters and participate in communal rage by staring with righteous indignation at the offending Priuses and UPS trucks reported by others. And at the end of the day, Safe Lanes joins an alarming number of apps and systems that make city streets into forums of monitoring and control. As discussed in a story about the app that appeared in CityLab, “The illusion of privacy in the public sphere may have always been an illusion, but with many more eyes and lenses trained on the streets, the age-old practice of ‘being seen’ can evolve quickly into being shared, and being stored. And perhaps being unfairly tried and convicted in the court of public opinion.”

Add to this that these apps are most likely to promote the values and worldview of a particular class of urban dweller: the much-maligned “tech bros” amassing in places like Seattle, San Francisco, and other popular US cities. While the goal here, bicycle safety, is not particularly controversial or necessarily classist, it is troubling that we so easily trust some techie with programming skills with the authority to shape public behavior by releasing an app. There are myriad assumptions built into this app and its capabilities. One is that bicyclists have rights – I subscribe to that one. But let’s consider who the most likely targets of the app are: low-wage ride-share and delivery drivers. They are, after all, the folks whose livelihoods depend on hurrying passengers and packages around on crowded city streets. I don’t want to excuse anyone making it unsafe to bike in the city; I do have actual skin in the game here. But in an age of rapidly gentrifying cities, there is something repugnant about affluent city dwellers naming and shaming people with much less social and economic power using information technology. All the ingredients are here for something that appears, at first, to be liberating for an arguably vulnerable group – bicyclists trying not to die. But it also joins an increasingly oppressive assemblage of information systems marketed to “concerned citizens” for the purpose of monitoring, shaming, and controlling others. Ugh! Safe Lanes, you had me at “improve bicycle safety,” but lost me at “participate in a surveillance dystopia in which no minor infraction goes unnoticed or unpunished.”

Perhaps this is just what happens when we impoverish and abandon public institutions in favor of entrepreneurial techno-solutionism. What if, rather than hiding in our phones and relying on commercial products to mediate our participation in public life, we actually spoke to each other and our elected officials – in actual public forums – where we could advocate for better bike lane enforcement or demand money for driver education programs? What if, rather than relying on apps to shame people into compliance with the behavioral paradigms imagined by technologists who happen to like biking, we worked on being less suspicious and more patient with each other while also developing safe bicycle networks? I know, I know. This is asking a lot of humans – particularly American humans. But there has to be a better way of improving urban life than weaponizing information technology…doesn’t there?

Our Corporate Overlords, Power and Privilege, Tech and Society

Silicon Valley Joins the Culture War?

This past Sunday the web host and domain name registrar GoDaddy bowed to months of pressure from activists and told their longtime customer, The Daily Stormer, to go find another host for their website. On Monday, Google similarly denied a home to the white supremacist, Nazi-aligned website. A denigrating post about a young activist killed by an apparent neo-Nazi at a white nationalist rally in Charlottesville was the final straw that forced GoDaddy’s hand and, presumably, that of Google. In related news, The Los Angeles Times reported over the weekend that other Silicon Valley service providers, such as the short-term rental company Airbnb and the crowdfunding site Patreon, were blocking the use of their services by various “far-right” groups, forcing them to find other providers and, in some cases, to create their own. We should be proud to see Silicon Valley coming to the rescue and fighting on the right side of the culture war, right?

If only it were so simple.

There are several hard questions that should be asked about banning or allowing white nationalists and a whole range of other haters to use the internet to spread their messages, including those messages that strike fear into the hearts of many other users. There are also many important questions—or demands—that should be posed to Silicon Valley firms and our elected leaders to define what should and should not be construed as free expression and to place the burden on policing that in the right sets of hands.

The legal basics involved here are the U.S. Constitution and the “safe harbor” provision of the Communications Decency Act. The free expression guarantees of the First Amendment are frequently cited by white nationalist types as the legitimizing bases for their demonstrations and published hate speech, but they do not apply to services operated by non-governmental entities. In other words, the popular services of the internet, such as Airbnb, Twitter, YouTube, etc., have virtually no obligations under the U.S. Constitution. This means that internet platforms can block or allow nearly any type of expression, unless it is specifically addressed by law. Section 230 of the Communications Decency Act, otherwise known as the “safe harbor” provision, absolves providers of internet services, including ISPs, web hosts, media streaming services, and others, of liability for how their customers use those services. If a customer contributes content, it’s on the customer; the internet service is not to be construed as the “publisher” of the content. The CDA’s Section 230 has exceptions of course, but hate speech isn’t one of them.

In the wake of the violence in Charlottesville, many social media commentators were rightly upset at the various enablers of sites like The Daily Stormer and the many other services that provide any sort of comfort to white nationalists. However, as a legal matter, the sites have no constitutional or statutory obligation to do anything, and for the most part, they haven’t. A search on Facebook for “white pride” or similar terms will reveal lists of pages dedicated to white nationalists and the people who love them (really – there’s a white pride dating group). Google has come under fire for failing to prevent its search auto-complete algorithms from completing sentences like “Muslims are…” with “terrorists” and “the holocaust is” with “a hoax,” and similarly unhelpful constructions (which they’ve improved). Twitter and Facebook have both come under fire from users and commentators who loudly complain that the platforms do far too little to prevent some of their users from engaging in relentless harassment, even when it includes threats. GoDaddy had been under pressure for months to distance itself from The Daily Stormer but chose to do nothing until some magic line was crossed this past weekend.

Silicon Valley’s failure to embody the role of stewardship for civil society should not be surprising, however. For one thing, it is not exactly clear whether it actually makes sense to empower corporations to carry the water of a society’s moral duties. Of course we want corporations to act morally, but as the power of corporations increases—particularly the corporations that are most visible in the internet/mobile sector—the power of non-commercial society seems to be decreasing. (The American companies Alphabet and Apple Computer are worth, together, over $1.4 trillion). The key questions to consider here include: Are we comfortable leaving the decisions about who gets to speak and who does not to enormous institutions that are generally unaccountable to society? How can we be sure that such choices will be made in the best interests of the public rather than to meet narrow, short-term business objectives? Given the increasing importance of the major internet platforms, such as Facebook and YouTube, as accessible and powerful venues for expression of all kinds, it seems obvious that the platform operators must bear some responsibility for what that expression does regardless of how the law and regulatory environment is currently arranged. Yet it is sadly unclear how to make the execution of that responsibility align with the cherished values of electoral democracy and civil society. What is clear to me is that “the market” is not a sufficient incentive structure to ensure that socially beneficial speech and other forms of expression are adequately nurtured and protected.

In a democracy, elected officials such as mayors and governors have the power to determine what constitutes protected expression on the streets of our towns and cities. They will do this imperfectly of course, as seems to be the case in Charlottesville where the city was warned repeatedly about the risk of violence. What is key here is that elected officials serve at the pleasure of their constituents and, in a functioning democracy, such constituents choose those officials and play at least a supporting role in the decisions they make. This process is by no means unproblematic, but the process is well-established and can be influenced by the moral intuitions of voters and activists. Meanwhile, you and me and everyone we know have zero say about what GoDaddy does. Whoever guides the decisions there is far less likely to do so out of moral compunction or the fear of losing political power. Sure, we can get a Twitter gang together to shame them into taking an action against some hate publication, but that is not democracy. For one thing, GoDaddy is only likely to respond when they see money on the line. They didn’t see that until The Daily Stormer did something so vile in the wake of an attention-grabbing murder that they figured they couldn’t pretend they were “neutral” anymore without paying some price. And even though the end result was positive in my view, it wasn’t a democratic action and it will have no impact on what anyone else does. GoDaddy, Google, and Amazon Web Services probably host many hundreds of other websites that promote hatred and will continue to do so.

This should not be surprising because this is the reality of what Silicon Valley does, and particularly what it does when it is enabled by a mix of free-market utopians and free-speech maximalists: It enables bullies. The information industry is built on a winner-take-all model that relentlessly removes revenue from communities and traditional capitalist activities, such as media distribution and street-level retail, and redistributes it into increasingly fewer (and richer) hands. In the service of this business model, Silicon Valley oracles celebrate ego-driven entrepreneurialism and denigrate steady jobs, equality, wealth-sharing, and any sort of collective action. The type of freedom that Silicon Valley celebrates is freedom for the strong, freedom for the already-got-some, and includes a full-throated claim that only a pure meritocracy that denies the inequities of history is fair or legitimate. Oddly enough, as revealed by the infamous memo by James Damore, Silicon Valley has no problem promoting discredited stereotypes about women and other “less-thans” who supposedly aren’t genetically wired to live up to the narrow standards of the engineering elite. All of this taken together is important to note because when Silicon Valley boosters throw around bromides about the value of a free and open internet, what they mean might not be what you think it means.

Don’t get me wrong: I support a free and open internet, but I have my own definition of what that is. For one thing, when I think about free speech, I don’t think about it as completely unfettered, louder-is-freer free speech. Constitutional free speech isn’t wholly free and neither should it be online. For me, free speech is only effective when it doesn’t silence another legitimate speaker. When a cruel or threatening Twitter troll chases an LGBT activist or a game developer off of the platform (or out of their home), that is not an exercise in free speech, that is simply intolerable bullshit. It is vile and immoral behavior that deserves condemnation and little to no protection from the authorities. Now before you go and accuse me of hypocrisy by citing the acts of anti-fascists whose stated aims include silencing neo-Nazi types, note that I wrote that sentence about my view of free expression with care. I have some very specific ideas about what makes a speaker “legitimate.” Just as I claim that free speech is not speech that silences others, I also hold that a legitimate speaker is one who does not aim to deny basic human rights to others. White nationalists fail that test, as do speakers who denigrate women or describe others as inhuman and unworthy to live. Despite whatever non-violent benevolence some white supremacists may claim to espouse, history shows that the end goal of white supremacy is the exclusion, enslavement, or annihilation of non-Whites, including many categories of people with light skin who are otherwise deemed undesirable (Muslims, Jews, LGBT folks, etc.). This is not debatable. Nazi Germany happened. Hundreds of years of African slave trade happened. While we might group hardcore leftists in with other historical rights-deniers like Stalin’s Soviet Union, there is zero evidence that anti-fascists have gulags in mind as the end goal of their activism. Driving racial hatred back into the margins would probably suffice.

This leaves us with a conundrum. We have left the barn door open and allowed Silicon Valley to move the popular venues of expression from the community stage and the city street to their proprietary platforms, where they are guided not by constitutional or democratic principles but by terms of service strategically designed to maximize profits and offset risk. This is not the formulation for achieving a civil society. Extremist speech that is permitted to eddy and coalesce in seemingly ungovernable online spaces builds momentum and eventually spills out onto the streets and potentially into violence, as we have witnessed in Charlottesville, in Kansas, and in Portland. Free expression hasn’t lost its value or importance, but we can no longer allow governments to leave the management of free expression in the hands of those least qualified to handle it—internet platform providers. Broad and generous interpretations of the CDA safe harbor provisions and a misplaced application of constitutional principles to venues that bear no constitutional obligations—or really any obligations to anyone—have pushed society to the brink of chaos. It’s time we focused not only on how to keep the internet “free and open” but also on how to make it fair and inclusive, and ultimately, just.

Who is actually made safer or freer by safe harbors? Do you feel safer? I, for one, do not.

Our Corporate Overlords, Power and Privilege, Tech and Society

The Commodification of People

Among the many ways so-called Big Data is influencing our lives, quantification and predictive analytics are beginning to play a significant role in how people are selected for opportunities, such as jobs, homes, romance, sex, insurance, and so on, substituting the vagaries of human judgment with seemingly objective and reliable analytic scorecards and labels. The same profusion of data that flows from your interactions with the networked and surveilled world, and which results in all those “personalized” ads you routinely encounter, can also be used to evaluate and grade you as a person. Your daily experiences and interactions with websites, mobile apps, credit card processors, eBook readers, cell-phone carriers, security cameras, etc. leave data trails that are routinely and tirelessly hoovered up to supply the information economy with the raw material of user profiling (but you already knew that, right?). But beyond the now familiar goal of these activities to simply sell you stuff lies a larger information dream: Using data about you to thoroughly understand what makes you tick and using that understanding to predict your future. Opportunity gatekeepers, such as landlords and employers, find this dream very attractive. Business objectives drive gatekeepers to seek out any and all means to maximize efficiency in their operations and reduce their levels of uncertainty and risk. Quantifying people into gradable categories like bushels of rice with consistent and predictable quality is an intoxicating product offering for decision makers, and the data industry is prepared to meet (and create) that demand.
By aggregating your prior preferences and behaviors and comparing those to the preferences, behaviors, and choices of thousands of similar people, a motivated data processor and her algorithms attempt to make a range of predictions about your life, getting out ahead of the uncertainties of evaluating people based on what they self-report or what their chosen references provide.

But there’s a problem. Quantifying people is not nearly as easy as quantifying grain. Quantification requires standardization, but people aren’t standardized, and the data collection methods we have for analyzing people aren’t perfect. So, shortcuts have to be made and proxies must be used to reduce the rich complexity of human experience into discrete buckets. The first reduction comes in the form of the data that is used. Despite the fact that our lives are increasingly observed and analyzed, the domains and methods of observation come pre-loaded with certain biases. Tracking what books you read with a Kindle (or other eBook reader) requires, first, that you own a Kindle or use the Kindle app. This already eliminates that data point from consideration for all the people who stubbornly continue to read printed books or who choose to spend their limited incomes in more practical ways. Here we see how the way one chooses to engage with the data ecology might impact her profile. The varied choices people make about participating in social media are similarly influential in profile development, as evidenced by the increasing number of data products that use social media data as inputs (see this for a chilling example).

The data industry also makes use of the open records policies of government agencies to build their profiles. Some types of public records, such as arrest records, tend to reflect negatively on people of color and the poor. For example, there is an abundance of evidence that drugs and weapons laws are routinely violated by people across demographic lines, but African American men are more likely to be arrested and convicted for violations (see this, and this for examples). As a result, evaluating people based on their criminal histories doesn’t necessarily tell the kind of nuanced story that leads to complete knowledge. These two examples (and there are many others) suggest that the construction of the data regime may not be quite as objective and reliable for judging people as we think. In fact, it appears to favor people of privilege – those who can afford to participate richly in the data economy (and choose to) and those for whom readily-available derogatory data is less likely to be discovered.

In addition to understanding how the formation of user profiles might be flawed and unfair, I am also interested in why economic and social gatekeepers are so keen on using analytics to make decisions about people in the first place. And this brings me to the work of Albert Borgmann, who writes about the “hyperactivity” of modern society. Borgmann describes a hyperactive society as one that is constantly “mobilized” against the perceived threat of economic ruin. This mobilization has three key features: the suspension of civility, rule of a vanguard, and the subordination of civilians. It is in that third feature that I detect what I would label the “precarity” of the modern worker. Despite our cultural mythologies in the U.S. and elsewhere about how hard work and dedication inevitably lead to riches and success, and in spite of the tremendous wealth our society has created, we have seen in recent decades increasing social and economic inequality and the loss of stable work opportunities for ordinary people due to changes in a variety of structural economic conditions. There are many reasons for these changes, but one of the results is that those with the power to make important decisions about our lives seem to have considerably more power and incentive now to exploit what Borgmann refers to as the “disposability of the noncombatant work force.” In short, the incentives are high to reduce the work force as much as possible, and the moral precepts of capitalism do not offer much resistance to doing so. The resulting precarity of work in our society leads to increased competition among workers. In order to survive in this mobilized society, we are basically forced to compete for increasingly scarce resources rather than to join together to challenge the sources (real and imagined) of the scarcity.

While Borgmann tells us something about societal forces that contribute to interpersonal competition for scarce opportunities, another author, James Carey, sheds light on how information systems have provided the means to commodify human beings. Writing in 1989 (but eerily prescient), Carey examined the dramatic social and economic changes wrought by the first electronic mass communication medium: the telegraph. The telegraph was the first technology capable of detaching information from physical objects and constraints, increasing the ability of traders of every stripe to abstract physical objects into symbols for exchange. With the telegraph, information about the world could travel much faster than any messenger or machine, breaking down prior barriers of time and space. This change in the temporal and physical reach of communication increased a business person's pool of potential partners, making direct personal experience with each one impossible. As a result, new methods of evaluating strangers had to emerge. This can be linked to another of Carey's observations about a separate byproduct of electronic communication: the commodification of goods. Carey argues that the emergence of the commodities futures markets was tied to the linking of buyers and sellers regionally and nationally by the telegraph. It became possible to trade goods, such as bushels of wheat, by lots aggregated from dozens or hundreds of sources rather than dealing directly with the individual producers. This practice required the development of standardized grading systems that could be applied to quantities of goods from diverse sources. These seemingly unrelated byproducts of communications technology–the emergence of impersonal business dealings requiring new methods of personal assessment and the invention of the commodities trade that massed and standardized diverse goods into quality categories–set the stage for the emerging commodification of people.
In the modern setting, the ability to post a job ad or a dating profile potentially viewable by millions of people means that the poster must be able to rapidly sort through dozens, hundreds, or thousands of applicants. Judging each candidate individually becomes impossible. Here we see the origins of the reputation industry and the commodification of people: Why not employ algorithms to sort them into quality categories as if they were bushels of grain?

How this operates in practice is complex, but one thing is certain: the precarity of position and the perception that resources are scarce motivate people to sacrifice their own freedoms to gain an edge. People will give up their privacy and otherwise adjust their lives to please opportunity gatekeepers in order to get ahead. A telling example comes from the insurance market where, in exchange for rate reductions, people install data devices in their cars that monitor and report their driving habits to insurers. Even more invasive, people are sharing the data collected by their health tracking wearables for similar incentives. Economists call this practice "signaling." While granting explicit consent to monitor specific activities is a very obvious type of signaling, there are other means of signaling that are a bit more complex, but not too complex for analytics algorithms to notice. Social media activity provides a rich assortment of signals about one's life, including family composition, health events, employment satisfaction, and financial stability, among others. A few banks are confident enough about what they can learn from social media that they are basing credit decisions on it (see this and this). As the practice of monitoring social media use to assess one's worthiness for loans and other opportunities becomes commonplace, it's not hard to imagine how it may influence how people use social media and therefore how they socialize in general.

There are many reasons why this matters. For one, it represents a progressive rebalancing of information flows. Economists have long rued the "information asymmetry" in buyer-seller transactions, in which the seller uses her deeper knowledge of a good for sale to the potential disadvantage of the buyer. However, one person's market inefficiency is another's defense in a world of outsized power imbalances. If the seller is an applicant for a job at a large corporation, she is arrayed against the titanic power of the modern corporation; being able to assume some measure of control over the hiring process could be the last semi-free act of her career. Meanwhile, the corporation's goals are to avoid risk, by choosing the candidate least likely to harm the firm, and to increase efficiency, by streamlining the process of choosing from among a pool of candidates. Commodification of the candidate serves the corporation well, but may disadvantage the candidate if she cannot control the sources and biases of the information used to categorize her. As the reputation industry matures and more and more choices about who gets what opportunity are determined by abstracting people into symbols and treating them like graded commodities, the risk that people seeking opportunities become increasingly disempowered will emerge as the crowning achievement of information technology: the commodification of precarious lives.

Power and Privilege, Privacy

Privacy and Privilege (first of many)

I have been thinking a lot about how privacy intersects with privilege, meaning, what one's wealth and position mean in a surveilled world. A couple of recent talks in Seattle by Cory Doctorow, in which he alluded to the connection between privacy and privilege, reminded me that I can't think of anyone else who has publicly addressed the issue–even within information ethics circles. I think it's time we started addressing it. Setting aside, for now, epistemological and normative arguments about whether or not–or how much–we should be concerned about the ongoing diminishment of personal privacy by both the state and the private sector, I think it can safely be argued that privacy does have value to most people. Since Edward Snowden's revelations of massive NSA data collection, changes in attitudes towards data privacy have begun to emerge. People using digital gadgets and systems do cherish their privacy, even if they don't exactly rend their clothes or take to the streets in response to losing it. Even those who claim to eschew the value of privacy still demonstrate an attachment to it, as with Mark "privacy is an evolving concept" Zuckerberg and news of his purchase of the houses surrounding his own in order to preserve his privileged isolation from the masses. As in the physical world, it's hard to imagine that those with the power to attain more privacy in the digital realm will not seek it.

We can easily point to ways in which privacy loss is already experienced more acutely by marginalized and poor people. Food stamp recipients (no longer using coupons, but instead trackable payment cards) reveal their grocery lists whenever they shop so that they can be prevented from purchasing banned items like alcohol, while those receiving corporate welfare are not prevented from buying alcohol, vacations, governments, etc. Low-income victims of domestic violence must reveal intimate facts of their lives to the state in order to gain a measure of protection, while a wealthy victim might avoid this using paid legal counsel, bodyguards, self-imposed protective custody at a resort, and the like. Finance companies have recently begun using remote ignition locks to disable cars whose owners fall behind on payments. Such devices are an unlikely requirement for Lexus buyers, even those with debt obligations.

Closer to home, I had an experience that revealed the limits of my own privilege where privacy was concerned. I recently renegotiated my auto insurance and, due to an unfortunate mishap involving a rental car, a primitive road, and a tree, my initial quote shot up hundreds of dollars. However, I was offered the option to cut that increase in half in exchange for attaching a device to the data port of my car that logged and transmitted information about my driving habits. The system, called "Rewind," tracked and scored me by rates of acceleration and braking, and also by the time of day of my trips (late-night driving was a demerit). While my driving destinations weren't part of my scorecard, that information was also collected, and I could view it on the insurer's website. After six months of surveillance and a sufficient number of trips that met this company's standards, my rate increase was reduced as promised. Had I been wealthier, there are many ways I could have avoided that surveillance. I might have chosen full coverage on the rental car, for a start, which would have made the accident a non-issue for my insurance carrier. I could have paid someone to drive a car safely on my behalf. I also could have said "fuck it" and paid the higher insurance rate rather than agree to surveillance. I was surveilled precisely because I was unable or unwilling to pay to opt out. We see an increasing number of these "choices" offered to us, from the grocery store discount card to that "free" webmail account to "free" mobile apps that collect information about our travels and habits. An increasing number of desirable services exchange surveillance for the opportunity to save money or use unpaid services. As Michel Foucault pointed out in "Discipline and Punish," surveillance is a form of discipline and an exercise of power.
My insurance carrier’s “option” for me to accept voluntary surveillance in exchange for a rate reduction is a nakedly disciplinary act, and one that I was subjected to because I couldn’t bear to pay the extra freight.

However, it's not as if there are currently all that many ways for the more privileged to become invisible to the web of surveillance that follows us through the public square, across the thresholds of our homes, and into places that we once believed were entirely walled off from the rest of humanity. In addition to making many of the same choices as everyone else to save a little money or get seemingly free stuff, privileged people use credit cards, shop online, view pornography, play fantasy football, send email, and buy the latest gadgets like iPhones with as much vigor as, if not more than, the rest of us. So many of our consumer and information-seeking habits are now mediated by technologies that expose information about us that it's nearly impossible for anyone to completely dodge the giant maw of data harvesting. However, I think this is only a temporary condition, at least for the well-off.

With market societies being what they are, and with privacy's value increasingly debated and recognized, it's likely that in short order we're going to see a burgeoning market in privacy products and a myriad of technology choices for those who can afford them and are sophisticated enough to use them. Whatever your views about why surveillance is becoming so prevalent and invasive, one thing is clear: those with power, whether financial or political, are advancing, and benefiting from, the erosion of our cherished privacy rights. History shows that those closest to power are in the best position to avoid losing what they cherish.

Some attribution credit is due to Cory Doctorow, who mentioned some of the underlying facts alluded to in this post.