File Formats Rule the World

File formats rule the world. They are the procedural means by which unintelligible blocks of code are rendered intelligible, actionable, and transmittable. Embedded in a file format are assumptions about how data can or should be used. JPEGs discard information assumed to be unimportant to human eyes and replace it with best-fit estimates, so long as doing so saves room on your hard drive. MP3s assume, through psychoacoustic models, what counts as an acceptable rendition of a song. Other file formats, such as Microsoft Excel's .xls, are proprietary: through their technical specifications, they require programs to be bought in order for files to be used. But the file formats experienced by the average user comprise only a small percentage of the file formats that the economy and culture of modern society depend upon.
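A file format is, at bottom, an agreement about what bytes mean. That dependence can be sketched with a hypothetical minimal image format (the two-field header below is invented for illustration): the same bytes are noise without the spec and pixels with it.

```python
import struct

# A hypothetical minimal image format: 2-byte width, 2-byte height
# (little-endian), then width * height grayscale bytes.
# Without the spec, `raw` is just an opaque block of bytes.
raw = struct.pack("<HH", 2, 2) + bytes([0, 128, 192, 255])

# With the spec, the same bytes become intelligible pixels.
width, height = struct.unpack("<HH", raw[:4])
pixels = list(raw[4:4 + width * height])
print(width, height, pixels)  # 2 2 [0, 128, 192, 255]
```

Every real format, from JPEG to OTA Bitmap, is this same move at a vastly larger scale of convention.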

I have been playing a game recently, where I pick a file format at random from the Wikipedia list of file formats:

Then I try to imagine what the world would be like if the file format never existed. Though I have never heard of most of the file formats on the list, many are essential in their own banal way. For instance, the first file format I opened was OTA Bitmap, a file format created by Nokia for displaying those highly pixelated images exemplary of the mobile phone circa Y2K.

This file format enabled Nokia phones to display simple pictures and logos.

The website for the file format that undergirds banking information exchange is remarkably transparent:

“If you’re not participating in Standards Development, then you’re either re-inventing the wheel or playing by somebody else’s rules!”

The vast majority of us are not participating in Standards Development. But we all rely on file formats to turn what look like random sequences of 1s and 0s into intelligible information. Whenever computers manage to pull off the task of representing value, they are likely doing so with one of a handful of file formats. We trust, through blind ignorance, that the companies responsible for producing these file formats represent value faithfully; but as their website forewarns, these companies create the rules by which everyone else has to play.

Documenting Financial Infrastructure

Automated Futures is a sensory ethnographic documentary film that illustrates two eras of American economic history by juxtaposing a specialty fiber-optic cable used for high-frequency trading against the decaying infrastructure of the once industrial Rust Belt, emphasizing an eerily parallel detachment from human lives in both of these planetary-scale built environments. The film documents 827 miles of Spread Networks’ flagship dark fiber line through the now post-industrial towns of La Porte, Elkhart, Toledo, Cleveland, Mesopotamia, and Mahanoy City. Based on my thesis research on the materiality of financial infrastructure, the documentary addresses the operative tension between human agency and technological interdependence within the cultural context of American Independence Day celebrations. Video and audio recordings from the summer of 2013 serve to archive the paradigmatic disjunction between the interests of high finance and the decaying industrial economy, while the structure and soundtrack of the film conspire to question the role of history in the temporal scale of exchange.

Immortal Zugzwang

Billed by the media as a drama of man versus machine, Deep Blue’s victory over chess champion Garry Kasparov in 1997 was interpreted by many as evidence of a future world dominated by artificial intelligence. In hindsight, Deep Blue’s victory was not a sign of AI’s power over man; it emblematized a particular political-economic moment in how firms developed and monetized computing technology. How then should we interpret the political and economic consequences of Google’s victories in games such as Go, Shogi, and Chess? Just as IBM’s Deep Blue stunned the world when it first beat Kasparov, Google’s machine learning software stunned the communities of many competitive game players, first with AlphaGo’s defeat of human Go world champion Lee Sedol, and subsequently with AlphaZero’s defeat of the highest-ranked open-source chess engine Stockfish, a chess algorithm orders of magnitude better than any human chess player. Like Deep Blue’s victory over Kasparov, AlphaZero’s victories stand for more than one event in a narrative between man and machine; they signal important changes in the political-economic organization of information technology.

The most obvious difference between AlphaZero and previous chess engines is apparent in their respective styles of play. Though Deep Blue is now obsolete, the highest-ranked open-source chess engine, Stockfish, incorporates the fundamental design choices built into Deep Blue. Owing to the combinatorial complexity of chess positions, it is impossible to search through every possible path to a winning configuration. A chess-playing algorithm must be designed to weigh certain moves more heavily than others. Deep Blue’s and Stockfish’s evaluative systems privilege material compensation, piece for piece, over positional configurations, whereas AlphaZero’s playing style privileges rapid development and positional advantage over the material value of pieces. Often, AlphaZero will sacrifice material in the opening to create enough space for the rapid movement of its most dynamic pieces. More surprisingly, AlphaZero will use its positional advantage to trap its opponent’s pieces in what chess analysts term a “zugzwang”: a situation in which the obligation to make a move on one’s turn is a serious, often decisive, disadvantage. The most memorable zugzwang game in human chess, known as the Immortal Zugzwang, was played by Friedrich Sämisch and Aron Nimzowitsch in 1923. An immortal zugzwang is so named not because it lasts forever, but because it forecloses on every possibility such that movement becomes impossible without being accompanied by defeat. In one of AlphaZero’s many victories against Stockfish, the algorithm orchestrated a similar immortal zugzwang, forcing the chess engine to resign.
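The material-first evaluation attributed to Deep Blue and Stockfish can be caricatured in a few lines (a deliberately toy sketch; real engines combine material with hundreds of positional heuristics that this example omits entirely):

```python
# Conventional piece values used by classical material evaluation
# (the king is priceless, so it contributes 0 here).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(board):
    """Sum piece values over a board: uppercase = White, lowercase = Black.
    Positive scores favor White, negative favor Black."""
    score = 0
    for square in board:
        if square == ".":       # empty square
            continue
        value = PIECE_VALUES[square.upper()]
        score += value if square.isupper() else -value
    return score

# White queen + pawn versus a Black rook: material favors White by 5.
print(material_score(list("Q.P....r")))  # (9 + 1) - 5 = 5
```

A zugzwang is precisely the situation such an evaluator struggles to see: material is even, yet every legal move loses.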

Machine Learning Economics & Intellectual Property


Recent developments in machine learning are likely to have lasting impacts on the organization, productivity, and wealth distribution of the economy. Tasks associated with human intelligence are already being automated by machine learning algorithms that can make predictions and judgments. Despite the hype surrounding machine learning, there are many unanswered questions about what its effects will be. One domain that is both tethered to the economics of information and full of uncertainty is intellectual property. Concerns over machine learning tie together questions regarding fair use, copyright protection algorithms, community-driven knowledge production, patents, networked supply chains, and market transparency.

Policy makers and information specialists should be aware that in each of these fields there are mounting adverse effects for consumers. The ways in which machine learning transforms the relationships between data are at odds with traditional conceptions of transformative fair use. Machine learning helps copyright holders infringe upon the legitimate fair use of intellectual property because existing copyright law penalizes human oversight while giving automated systems a free pass. Google’s position with respect to machine learning benefits doubly from patent law: it uses open-sourced forms of knowledge production to capture insights from un-patentable fundamental research while patenting specific applications of machine learning. Lastly, the penetration of proprietary machine learning algorithms into the networked supply chain may lead to market failures despite satisfying neoclassical assumptions of rationality and information transparency.

What is Machine Learning?

Machine learning algorithms encompass a growing set of computational statistical approaches to analyzing data for the purposes of automated categorization, pattern finding, and decision making. Because machine learning technology has the ability to mimic some of the faculties normally associated with intelligent behavior, proponents believe that machine learning will have a lasting impact on the economy (NSTC 2016). As a computational statistical method for discovering patterns in information, the most basic machine learning technique is linear regression. Machine learning researchers, however, are concerned with more complex approaches such as support vector machines and neural networks, which have wide-ranging applications from spam filtering to self-driving cars. Already, machine intelligence is penetrating many steps of the value chain in several industries, up and down the technology stack (Zilis & Cham 2016).
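As a concrete instance of the simplest technique named above, here is ordinary least-squares linear regression in plain Python (a pedagogical sketch, not production code):

```python
# Simple least-squares linear regression: fit a line y = slope * x + intercept
# that minimizes squared error over the training examples.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Learn" the relationship y = 2x + 1 from four noiseless examples.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # 2.0 1.0
```

More complex models such as neural networks generalize this same move of fitting parameters to data, only with many more parameters and nonlinear structure.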

Commonly cited reasons for machine learning’s booming popularity are its disciplinary maturity, the abundance of data, and advances in computational power. Machine learning, as a field that unites statistics and computer science, has a long history, but in the last few years there have been several breakthroughs in technical knowledge, especially in the field of neural networks (NSTC 2016, 9-10). With improvements to both hardware and software, computers have become thousands of times faster than they were in the 1980s (Ford 2015, 71). Alongside advances in research, the magnitude of available data has increased (Ford 2015, 89). This story might make it seem as though the rise of machine learning was natural or even inevitable. What it neglects to mention are the economic motivations behind machine learning.

Machine learning promises to impact the economy pervasively and productively. Machine intelligence already appears in several sectors such as transportation, finance, healthcare, education, agriculture, gaming, and energy (Zilis & Cham 2016). One of the first uses of machine learning was to automate check processing (Nielsen 2015). Kroger has fully automated its warehousing except for the unloading and loading of trucks (Ford 2015, 17). Facebook uses a smart agent called Cyborg to monitor tens of thousands of servers, detecting errors and solving them (106). Smart traffic systems can reduce “wait times, energy use, and emissions by as much as 25 percent…” (NSTC 2016). Unlike traditional software, machine learning will leave a unique mark on businesses because it can be used to automate more of the jobs associated with the service and information sectors.

Despite the hype, there is still a lot of uncertainty about how machine learning will affect the economy. Concerns about labor, security, and the distribution of wealth are at the forefront (see NSTC 2016; Ford 2015). Intellectual property offers a crucial lens for investigating the nuanced ways in which machine learning will affect the information economy. Creative uses of machine learning in art and design raise unanswered questions about the relationship between machine learning and attempts to control information as a private, excludable good.

Electric Sheep: Fair Use in the Age of Automated Creativity

As more artists turn to machine learning to create new forms of expression, artists, design firms, lawyers, and intellectual property holders are starting to wonder if machine-generated works can be litigated under existing copyright laws. This uncertainty is typified by a user on the platform Quora who asks: “Would it be considered as a copyright infringement if a machine learning algorithm were trained on streamed frames from YouTube videos in an ‘unsupervised learning’ fashion?” (Quora 2016). The novice answers are anything but conclusive, ranging from “probably not” to a lukewarm “I think it won’t be, however…” Unfortunately, expert responses are not any clearer, partly because this territory is very new and what counts as legal precedent is not yet well defined.

Earlier this year, the AI artist Samim Winiger developed a bot that can narrate image descriptions in the lyrical style of Taylor Swift (Hernandez 2015a). As one of many bots that attempt to produce text in the style of an author using recurrent neural networks, this work prompts the question of whether output generated by machine learning algorithms is protected under existing fair use law. The Stanford library blog on fair use explains that most cases of fair use fall into two categories: “(1) commentary and criticism, or (2) parody” (Stim 2010). Examples such as Winiger’s “Swift Bot” do not fit neatly into these usual categories. Arguably, however, Winiger’s use of machine learning algorithms constitutes what fair use law calls “a transformative use” of Swift’s lyrics because, while his process uses intellectual property as an input, the output is something new or significantly different from the original (U.S. Copyright Office 2016a). This interpretation hinges on understanding machine learning algorithms as simply an extension of a human’s creative process, and therefore clearly already covered by existing fair use doctrine (Hernandez 2015b). What this perspective misses is that if machine learning algorithms can truly capture the “style” of an author, musician, or other creative artist, as Winiger believes, style may become commodifiable or licensable (Winiger 2016).

In another example, the artist and machine learning researcher Terence Broad uploaded a neural-network generated version of Blade Runner—the film adaptation of the novel Do Androids Dream of Electric Sheep? by sci-fi author Philip K. Dick (Broad 2016). Like the previous example, Broad uses copyright protected property as an input, and outputs something different. But is the output transformative enough to constitute fair use?

Broad’s algorithmic adaptation mirrors Blade Runner frame by frame, using a machine learning technique called “auto-encoding” that attempts to embed a process for duplicating images within the neural network itself. The holy grail of auto-encoding is a perfect copy of the input, with the caveat that the input must pass through a mediating layer with the smallest possible number of artificial neurons. Some researchers joke that auto-encoding is the most expensive ‘times one’ because, when it is effective, it superficially seems as if it has not effected any changes. In the machine learning community, though, auto-encoding is not seen as simple copying; it is understood as reducing an image to a minimal representation, a set of variables available for further manipulation (see Kogan 2016). The process has real applications in generating new images and in CSI-style digital zoom (see Neural Enhance 2016).
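The bottleneck idea can be illustrated with a toy example (the weights below are hard-coded for illustration; a real auto-encoder learns them from data): two-dimensional points that happen to lie on a line can squeeze through a one-number bottleneck and come back unchanged, the “expensive times one.”

```python
# A toy linear "autoencoder" with a one-number bottleneck.
# Points on the line y = 2x are perfectly reconstructible; everything
# else loses information passing through the narrow middle layer.
def encode(point):            # 2 numbers -> 1 number (the bottleneck)
    x, y = point
    return (x + 2 * y) / 5    # projection onto the direction (1, 2)

def decode(code):             # 1 number -> 2 numbers
    return (code, 2 * code)

original = (3, 6)                         # lies on y = 2x
reconstructed = decode(encode(original))
print(reconstructed)                      # (3.0, 6.0): copied via compression
```

The single bottleneck number is the “minimal representation”: not a copy of the input, but a variable from which the input can be regenerated.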

From the perspective of someone unaware of auto-encoding (or from a corporation’s content fingerprinting algorithm), an output generated by an auto-encoder may not be distinguishable from the original. Unsurprisingly, Broad’s project was issued a DMCA takedown notice by Warner Bros. (Romano 2016). While the takedown notice was later reversed, the scenario triggered reflection on what constitutes fair use when producing and reproducing images with machine learning algorithms. In Broad’s case, the output resembled a poorly compressed version of the original and could not reasonably be used as a substitute for the actual Blade Runner film. In principle, however, an auto-encoding algorithm could be used to create a near-duplicate rendering of the original. Such use signals an uncharted grey area in existing fair use policy (Broad 2016).

This case raises an underlying ontological question for information researchers and information policy makers: what constitutes the “property” that is protected by intellectual property law? Can digital information be defined entirely by the bytes that compose digital files, or does a digital work also include, to some extent, the process that produced it or the intent of the producer? These questions are deeply tied to the now age-old question of whether digital information has the properties of a private or a public good. Because information can be consumed without depleting it and without barring others’ access to it, it does not have the properties that we normally associate with private goods. Nevertheless, instead of answering these questions, intellectual property law and dominant online protocols manage to transform information into a private good, regardless of what its nature might be.

Copyright Policy and Copyright Protocol

The same genres of algorithms artists use to create artwork that falls within the edge cases of copyright law are now being used to police copyright infringement. Algorithmic procedures undergird the mechanisms by which copyright holders restrict the supply of their intellectual property on the internet, thereby imposing rivalry and excludability on goods that are otherwise inexhaustible in use. In an attempt to combat the piracy of digital property, such as digital audio and video, lawmakers passed the Digital Millennium Copyright Act (DMCA) in 1998.

Considering the magnitude of information that is added to the internet every day, the DMCA is unenforceable without automated means of recognizing copyright-protected material. For example, YouTube’s Content ID program allows copyright holders to submit files of audio and visual work they own to a database containing millions of reference files. YouTube then uses an algorithm to automatically test whether uploaded content contains intellectual property owned by others. The content holder can block, leave up, or make money off of the content that matches their reference file. YouTube purports that this system balances piracy’s positive effect of free advertisement against the negative effect of lost revenue by making copyright-protected material a potential source of advertising revenue for both YouTube and the copyright holder (YouTube 2016).
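The matching step can be sketched, in drastically simplified form, as a lookup against a database of reference fingerprints (a hypothetical sketch: Content ID’s actual fingerprints are perceptual and robust to re-encoding, not the exact hashes used here):

```python
import hashlib

# A bare-bones fingerprint database mapping content hashes to owners.
reference_db = {}

def register(owner, content: bytes):
    """A rights holder submits a reference file; we store its fingerprint."""
    reference_db[hashlib.sha256(content).hexdigest()] = owner

def check_upload(content: bytes):
    """Return the claiming owner if the upload matches a reference, else None."""
    return reference_db.get(hashlib.sha256(content).hexdigest())

register("Example Records", b"...audio of a protected song...")
print(check_upload(b"...audio of a protected song..."))  # Example Records
print(check_upload(b"...an original recording..."))      # None
```

The hard engineering problem, and the source of the fair use disputes discussed below, is everything this sketch omits: matching partial, transformed, or remixed content at the scale of millions of uploads.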

Not only are machine learning techniques required to sift through the hundreds of years’ worth of audio and video content that is uploaded to the internet every day; the status of machine readership as non-subjective has become the cornerstone of judicial applications of the DMCA. Under the Online Copyright Infringement Liability Limitation Act (OCILLA) in the DMCA, copyright holders must knowingly misrepresent the fair use of intellectual property in a DMCA takedown request to be held liable for infringing upon legitimate uses (U.S. Copyright Office 2016a). Because copyright owners do not know exactly what automated algorithms do on their behalf, they have argued successfully that they cannot be held liable for false claims. Under this legal regime, the romantic readership of subjective interpretation becomes a bug while robotic, non-subjective readership becomes a feature.

James Grimmelmann, a professor of law, traces the outcomes of several court cases where fair use law comes up against robotic readership, demonstrating that there are two tracks for fair use: one for human use and one for robotic readership (Grimmelmann 2015). Historically, U.S. courts have argued that algorithmic readership does not engage with the “expressive content” of works protected by copyright. If not meant primarily for human audiences, algorithmic output is broadly construed as “transformative” and therefore protected under fair use law. Grimmelmann speculates that because general machine intelligence may one day match human interpretive abilities, subjective tests for “romantic readership” (readership that engages with the expressive content of works) should not be the basis of law: “[The doctrine of] Romantic readership asks a question nobody will be able to answer” (Grimmelmann 2015, 680). Therefore, the philosophical basis by which robots get a “free pass” in copyright cases should be re-examined.

Because robotic readership gets a free pass, Content ID casts a wide net: it flags any material that might contain a copyright owner’s material. As the Electronic Frontier Foundation reports, “The current Content ID regime on YouTube is stacked against the users” (Kalia 2015). Often YouTube’s Content ID system comes into conflict with legitimate fair uses of copyright-protected material. Copyright owners using automated takedown services have faced some legal battles because their systems issue takedown notices before considering questions of fair use. While copyright owners are supposed to consider fair use before they send out takedown notices, the automation of the takedown system protects them from liability: because the system is automated, it is impossible to prove that the owners issued the notices “deliberately or with actual subjective knowledge that any individual takedown notice contained a material error” (United States District Court Southern District of Florida 2012, 9).

With the help of the Electronic Frontier Foundation, Stephanie Lenz filed a suit against Universal Music Corp. for issuing her a takedown notice after she published a video of her child dancing to a 20-second clip of music by Prince (Harvard Law Review 2016). She argued that Universal Music Corp. did not take fair use into consideration before issuing the takedown notice. On the face of it, the court ruled in favor of Lenz; however, the decision may not substantially curb DMCA abuse. According to the Harvard Law Review, the decision may encourage copyright holders to use algorithmic takedown systems with even less human oversight.

Monetizing Learnings from Machine Learning

While Google uses machine learning to profit from piracy and the potential fair use of copyright-protected material, it also champions open-source software development in the production of its own machine learning algorithms, architecture, and libraries. There has been booming activity in open-access research on machine learning techniques such as neural networks. One open-access journal has seen more than a 700% increase in machine-learning-related publications: in 2006, researchers submitted only 12 articles containing the keyword “Machine Learning”; in 2016, they submitted 942. Following this trend, the AI behemoth Google has open-sourced its R&D machine learning libraries in a framework called TensorFlow.

On one hand, this signals continuity with the existing permissive software licenses of popular deep learning research and development frameworks such as Theano and Torch. On the other hand, the fact that Google made TensorFlow free for high-skilled researchers to use may signal that the monetization of Google’s products depends more on the quality and magnitude of its data and its data processing centers than on the fundamental research that its data analysis techniques are based on (Thompson 2015). Therefore, Google can reap the rewards of information economics twofold. First, by open-sourcing its deep learning R&D framework, Google can capture the creativity that thrives in open-source communities. Second, Google can appropriate the creative output for use within the context of its own locked-in networks for data gathering and distribution. Because of the nature of US patent law and policy, the community-driven fundamental research is not monetizable on its own, but there are still profits to be made within the wider context of Google’s defensible patents and pipelines.

In legal theory, patents serve three basic functions: incentivizing, contracting, and disclosure (Grimmelmann 2016). A functioning patent system is supposed to encourage innovation by balancing the circulation of knowledge with incentives for invention. As the theory goes: if fundamental knowledge were patentable, the circulation of knowledge would falter; however, if no protections were available for inventors, people would have no defensible economic incentive to innovate. Economists have argued that patents do not actually encourage innovation. After an economic analysis, Machlup famously said that if the patent system did not already exist, he would not suggest creating it, but because it has existed for so long, he could not conscientiously advocate abolishing it (Machlup 1958, 79-80). Today, the economic benefits of the patent system are not any clearer (Bessen & Meurer 2008, 145-146). Nonetheless, only designs that are framed as sufficiently specific inventions can be granted patents.

While the fundamental research in machine learning is not patentable, some applications of machine learning are. At the heart of many new technology companies, such as Uber and Amazon, are patents that protect algorithmic processes essential to the organizations’ business models. Amazon has patented an anticipatory shipping algorithm that uses machine learning to determine whether a user is likely to buy an item before they click (Kopalle 2014). Uber recently acquired an AI firm called Geometric Intelligence, with patent-pending machine learning techniques. Technology writers speculate that this might have to do with the race to produce self-driving cars (Newcomer 2016). However, from Uber’s job listings it is evident that the company is looking for people with machine learning competencies in several departments beyond vehicular automation, such as fraud, security, forecasting, finance, market research, business intelligence, platform engineering, and map services (Uber 2016). While machine learning often gets press for impressive feats like self-driving cars and besting human Go champions, the reality is that machine learning will likely have a more pervasive impact on digital marketplaces and value chains.

By integrating machine learning into every step of the production process, the oligopolistic tendencies of network economics are potentially worsened. It is not necessarily the case that networked production models and market transparency lead to freer, more competitive markets. Many retailers with an online presence cannot compete with Amazon’s proprietary pricing algorithm. Amazon prices can fluctuate more than once per day, with changes that may double or halve prices (Ezrachi & Stucke 2016, 13). Blindly matching prices is not strategic enough to ensure optimal profits (49). Instead, companies subcontract pricing to third-party technology vendors capable of programming more sophisticated and dynamic pricing software using “game theory and portfolio theory models” (14). This form of clientelization dramatically decreases competition in the marketplace. Many retailers, such as Sears, Staples, and Groupon Goods, now all turn to the same subcontractor to price their wares: Boomerang (48). By concentrating pricing into only a few hands, this development has the potential to promote effects associated with collusion.

When online market prices are so transparent and de facto oligopolistic, the small pool of pricing algorithms can discover optimal supra-competitive price points. First, when many companies all turn to the same vendor to set their prices, there is the possibility of hub-and-spoke collusion, i.e., their prices will move up in concert (Ezrachi & Stucke 2016, 49-50). Second, dynamic pricing algorithms are programmed to respond to one another. Far from being a race to the bottom, this can be a way to coordinate price increases without losing sales (64). If the strategies are similar enough, and if price changes can be made quickly enough that customers cannot flee to cheaper retailers, the algorithms can settle on prices above what we should expect from a perfectly efficient market. Moreover, because of how much data is collected on users, sites can know whether users are likely to practice comparison shopping. If they are not, pricing algorithms may charge them more than other shoppers (91). By integrating smart and adaptive algorithms into every step of the supply chain, we have a scenario in which highly rational actors with near-perfect information do not necessarily set the most efficient prices.
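The ratchet dynamic can be simulated with two toy pricing bots (hypothetical strategies, invented for illustration and not any vendor’s actual software): one bot simply matches its rival, while the other probes a unit higher whenever it finds itself matched. Instead of a race to the bottom, prices climb in concert until they hit a ceiling well above cost.

```python
# Two adaptive pricing bots responding to one another, updated in turn.
COST, CEILING = 5, 20

def bot_a(my_price, rival_price):
    """Probe upward when the rival has matched; otherwise match downward,
    but never price below cost."""
    if rival_price == my_price:
        return min(my_price + 1, CEILING)
    return max(min(my_price, rival_price), COST)

def bot_b(my_price, rival_price):
    """A pure matcher: always copy the rival's current price."""
    return max(rival_price, COST)

a, b = 10, 10                 # both start at a competitive price
for _ in range(30):           # thirty rounds of mutual reaction
    a = bot_a(a, b)
    b = bot_b(b, a)
print(a, b)                   # 20 20: both settle far above cost
```

No message ever passes between the bots; the supra-competitive price emerges purely from each algorithm reacting to the other’s public prices.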


At the intersection of information economics, machine learning, and intellectual property are several concerns regarding social welfare, both in terms of freedom of information and economic inequality. Machine learning, bolstered by the DMCA, threatens fair use with copyright protection algorithms. Asymmetrical incentives rooted in patent law may continue to exploit community-driven machine learning research and drive rational, transparent, and yet inefficient online markets. It is time for information researchers, economists, and policy makers to come together and answer the unanswered questions about intellectual property.


References

Alexjc. 2016. “Neural Enhance.”

Bessen, James, and Michael J. Meurer. 2008. Patent Failure: How Judges, Bureaucrats, and Lawyers Put Innovators at Risk. Princeton: Princeton University Press.

Broad, Terence. 2016. “Autoencoding Blade Runner.” Medium.

Ezrachi, Ariel, and Maurice E. Stucke. 2016. Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy.

Ford, Martin. 2015. Rise of the Robots: Technology and the Threat of a Jobless Future.

Grimmelmann, James. 2015. “Copyright for Literate Robots.” SSRN, 1–31.

Hernandez, Daniela. 2015a. “New Bot Uses Taylor Swift-Inspired Lyrics to Describe What It Sees.” Fusion.

Hernandez, Daniela. 2015b. “What If a Robot Stole Your Work and Passed It as Its Own?” Fusion.

Kalia, Amul. 2015. “Congrats on the 10-Year Anniversary YouTube, Now Please Fix Content ID.” Electronic Frontier Foundation.

Knight, Will. 2016. “Uber Launches an AI Lab.” MIT Technology Review.

Kogan, Gene. 2016. “Game AI and Deep Reinforcement Learning.” ITP-NYU.

Machlup, Fritz. 1958. An Economic Review of the Patent System. Washington: U.S. Govt. Printing Office.

National Science and Technology Council (NSTC). 2016. “Preparing for the Future of Artificial Intelligence.” October.

Nielsen, Michael A. 2015. Neural Networks and Deep Learning. Determination Press.

Quora. 2016. “Would It Be Considered as a Copyright Infringement If a Machine Learning Algorithm Were Trained on Streamed Frames from YouTube Videos in an ‘Unsupervised Learning’ Fashion?”

Romano, Aja. 2016. “A Guy Trained a Machine to ‘Watch’ Blade Runner. Then Things Got Seriously Sci-Fi.” Vox.

Stim, Rich. 2010. “What Is Fair Use?” Stanford Copyright and Fair Use Center.

Thompson, Ben. 2015. “TensorFlow and Monetizing Intellectual Property.” Stratechery.

U.S. Copyright Office. 2016a. “More Information on Fair Use.”

U.S. Copyright Office. 2016b. “Copyright Law: Chapter 5.”

United States District Court Southern District of Florida. 2012. Disney Enters., Inc. v. Hotfile Corp. No. 1:11-cv-20427-KMW.

Winiger, Samim. 2016. “Samim A. Winiger – Generative Design.” The Conference.

YouTube. 2016. “How Content ID Works.” YouTube Help.

Zilis, Shivon, and James Cham. 2016. “The Current State of Machine Intelligence 3.0.” O’Reilly.

An Island of the Blessed

Atmospheres pose a vexing problem to libertarian ideologies. How do you square a shared atmosphere with radical libertarianism? This thought first came to me as I was exploring Second Life in 2014, the well-known virtual utopia where there are few laws, no taxes, and everyone is at liberty to express their own individuality. While exploring Second Life, I realized that the virtual world has no commons. All land is privatized. Some landowners let you visit their property for free, especially if it is a store. To build anything, however, you must be a landowner yourself. To be a landowner you must pay rent to Linden, the company that owns Second Life. All interaction is mediated by an expansive market that carries everything from new hairdos to houses; even custom animations for gestures and emotions are for sale. Everything is an asset; there is no atmosphere.

I first visited Second Life to track down the story of Carmen Hermosillo. A folk hero in some internet circles, Carmen Hermosillo was a self-proclaimed semiotician and cultural critic who drew the most attention for her early criticism of the internet in the now classic essay “pandora’s vox.”

In 1994 the internet barely had a graphical user interface. Carmen Hermosillo frequented Stewart Brand's online messaging board the WELL (short for Whole Earth 'Lectronic Link). She was an active participant in the online community and enjoyed her anonymity under the pseudonym 'humdog'. Under mounting concerns — both personal and political — her attitude shifted, and she turned on the community, unleashing a stinging polemic that spawned flame wars like brush fire on a hot summer afternoon. In her essay, she denounces believers in the internet, writing "it is fashionable to suggest that cyberspace is some kind of island of the blessed where people are free to indulge and express their Individuality" but, she argues, "in reality, this is not true." In reality, online services "guide and censor discourse"; in reality, "cyber-communities are businesses that rely upon the commodification of human interaction." Humdog points to her own thoughts, her feelings, her own spilled guts as that which has been alienated from her, without compensation, for the monetary benefit of telecoms and sitemasters.
Beyond social critique, humdog addresses a deeper problem: the ontological tension between representation and reality as objects of human attention. She suspects that "cyberspace exists because it is the purest manifestation of the mass (masse) as Jean Beaudrilliard[sic] described it. it is a black hole; it absorbs energy and personality and then re-presents it as spectacle."

It is odd that humdog insists on misspelling Baudrillard's name. By inserting an 'e', humdog transforms the name, intentionally or not, into a frankenword that captures a foundational tension in the experience of the internet. "Beaudrillard" reminds us of both the simulacrum and the "beau," or beauty, that draws us in — conflicting forces of beauty and horror contained within the 'discrete' bits of information that compose an increasing proportion of contemporary experience. This is the paradox: we are enamored of new forms of control, seduced but also horrified. As she puts it in her own words, this is "the heart of the problem of cyberspace: the desire to invest the simulacrum with the weight of reality."

The neologism is all too tragic, for it perfectly encapsulates Carmen Hermosillo's own life and premature death. Despite conceptualizing the negative psychological and social effects of the internet as a medium for the surveillance and commodification of human pathos, Carmen Hermosillo couldn't resist its magnetism. This was not for want of trying. After killing off humdog in protest of the WELL, Carmen Hermosillo attempted to unplug herself from the internet.

She cycled through numerous internet identities, eventually surfacing on the shore of what appears to be her own 'island of the blessed.' But this time, the island was less of a metaphor and more of a manifestation made real, or as real as it gets in the virtual. In Second Life, Carmen Hermosillo designed and built a virtual island called the Island of Shivar, where Carmen was a queen. Neither humdog nor Carmen anymore, she was known as "Queen Montserrat Snakeankle aka Montserrat Tovar aka Sparrowhawk Perhaps."

With the aid of a Second Life architect named Yolandi and a large group of friends, she had a beautiful French kingdom constructed. But like a self-fulfilling prophecy — or perhaps just acceptance of the terms given — Carmen's "magical island" was also a business. Her collaborator and friend Mark Meadows wrote in I, Avatar that the early days of Second Life were just like Southern California in the 1920s: "there were a thousand get-rich-quick-schemes" (Meadows, 2008).

Carmen ran the Island of Shivar, selling real estate while Mark furnished buildings to increase their value. Their goal was to make $200 US a month, all in-game, enough to cover IRL rent at the time. The first strategy was to build a cathedral and host weddings, but that only brought in 2,000 or so Linden dollars (Second Life's currency), barely pocket change in US dollars. So they decided to open up shops — a virtual strip mall nestled into the cobblestone streets of her French island kingdom. It took some time, but eventually business picked up. They had vendors selling eyeballs and hair, shoes, utilities and furnishings, high-fashion clothing, biker gear, bodies, pinup girl posters, paintings, and avatar animations. By 2006, however, Second Life was getting serious mass-media attention. Meatspace corporations like IBM, Toyota, and Sun Microsystems started developing Second Life stores, and real estate prices went up. According to Mark, Carmen earned more than $100,000 real-world dollars that year. From this vantage, humdog's dictum — "the heart of the problem of cyberspace: the desire to invest the simulacrum with the weight of reality" — seems prophetic, though the problem had not yet manifested for her.

If she was attempting to give the utopianism of the island of the blessed a second chance, it is both an irony and a tragedy that she chose Second Life as the digital domain of her private paradise. Second Life epitomizes the Randian imaginary of floating individuals attempting to manifest their private dreams. Second Life was the most literal manifestation of the liberal impulse toward individual "self"-realization, because it was designed with pervasive property rights such that anything could be bought and sold — including gestures and emotional displays. A space where every interaction is transactional amplifies the contradictions of capitalism, as well as its seductions. If everything can be property, what of human life?

As Carmen's real life and online life became more entangled, things began to get weird. As Mark Meadows puts it, "Involving money in the equation of our little world changed everything. Not because we were making money, but because we had invested our real lives" (Meadows, 2008). Carmen began investing more than money into the online world. The information that can be gleaned from admittedly questionable sources, such as Second Life's online newspaper, the Alphaville Herald, suggests Carmen was deeply involved in a massive online role-playing world patterned on Gor, the fantasy series by John Norman. Devotees of this fantasy world, called Goreans, believe in a radicalized form of institutionalized male dominance whereby women are meant to be sexual slaves. Second Life, with its anything-goes policies toward sexual slavery, rape play, and other extreme fetishes, has become something of a Gorean utopia, with hundreds of islands fashioned after the worlds described in Norman's fantasy novels. The gender dynamics have their complications, however, as many users drive multiple avatars with genders across the spectrum. To put it bluntly, you might be a die-hard Gorean master who owns a female slave who, unbeknownst to you, is actually some dude. Despite the complications of practice, the discourse surrounding Gor is squarely misogynistic, based on crass essentialisms of female versus male strength and intellectual competence, where the ideal woman is one who 'recognizes the true power' of her male master and freely sells herself into slavery to him. As Gor-pedia puts it:

“The woman enters into this arrangement freely; she cannot, of course, withdraw from it in the same way. The reason for this is clear. As soon as the words are spoken, or her signature is placed on the pertinent document, or documents, she is no longer a free person. She is then only a slave, an animal, no longer with any legal powers whatsoever.”


While no such contract would be upheld in a court of law, the philosophy of Gor is taken so seriously by some in the Second Life community that the fantasy lifestyle becomes a sort of lived reality. Some Goreans claim to be 'in-world only', presuming that their actions are confined to the boundaries of Second Life. As the line between IRL and cyberspace blurred, many Gorean slaves found themselves psychologically affected by the fantasy culture. Carmen recognized her emotional involvement but did not take it as a cue to get out of the demanding relationship. Instead, her response was to hire a psychologist to help other Gorean slaves in Second Life through an in-world therapy center.

“Carmen had become arm-wavingly passionate about setting up a sort of halfway house for abused slaves that the Gorean play groups had chewed up and spit out. Carmen went so far as to hire — and pay — a psychologist to come in-world and talk to these “girls” (they may well have been men), to help them get over the misuse their “masters” had played out on them. So, like a world made of onion skins, psychological layers of role-play were wrapping around the already visually layered “worlds” that everyone agreed were real because, well, they were.”

Meadows and Ludlow, 2009

Of course, in a fictional psych ward on a virtual island in a make-believe France in a replica of a science fiction slave-trade world based on John Norman's novels — well, there was plenty to think about. Here we have a scenario in which an intelligent early critic of the internet, one of the first to warn the world that the internet is not 'an island of the blessed,' became so deeply invested in the online role-play of a science fiction fantasy series built around baseless sexual contracts that she decided to set up a psych ward on a virtual island staffed by actual therapists, paid for, possibly, with money made from selling digital real estate for US currency. As Hermosillo put it so succinctly: "Beaudrillard."

This story does not have a beautiful ending. There is a sense that Carmen became subject to her own prophecy, without heeding its ill omens. Though her sister vehemently denies it, some of Carmen's online friends suggest that, in response to a mix of financial trouble and a failing online Gorean relationship, Carmen stopped taking her heart medication and passively committed suicide after deleting all of her online accounts. I think about this history often with a sense of unease, as if her life were a continuation of the prophetic vision laid down in "pandora's vox."