Dark ‘n’ Sue Me


My friend Jacob Grier, who splits his time between writing and tending bar, draws my attention to a brewing (so to speak) conflict over the trademark for a mixed drink called the Dark ‘n’ Stormy. Rum manufacturer Gosling’s has the trademark on the term, and insists that it can only be used to describe drinks containing Gosling’s own rum. They were none too happy when a competitor, Zaya, started running ads in Imbibe magazine urging bartenders to substitute Zaya’s rum when mixing a Dark ‘n’ Stormy. Jacob’s take on this seems spot on:

Gosling’s might be on solid legal ground, but as a craft bartender I’m firmly on the side of Zaya. I use Gosling’s in a Dark ‘n’ Stormy because it tastes good, but it’s hardly written in the fabric of the universe that no other rum pairs so perfectly with ginger beer. If another rum company thinks they’ve made a product that’s even better, I want them to tell me about it. Using unique ingredients in classic cocktails is part of what makes tending bar creative.

One of the interesting things about this dispute is how deeply confused non-lawyers are about the various legal regimes commonly described under the rubric of “intellectual property.” Jacob also links to this article about trademarks in the bartending business that’s a particularly egregious offender. The article is talking about trademarks, but the author treats the terms “patent,” “copyright,” and “trademark” as though they were interchangeable. They’re not. To quickly recap: copyrights apply to expressive works like music, books, movies, and software. They’re granted automatically and last for the life of the author plus 70 years. Patents cover inventions like light bulbs and transistors. They require a lengthy and expensive application process and last for only 20 years. Trademarks protect brand names like “Coca-Cola” and “Nike” (and “Dark ‘n’ Stormy”). They’re relatively easy to get, and they can last indefinitely, as long as the owner keeps using them.

I’ll spare you any more of the boring details. But perhaps the most important difference is that trademarks have a dramatically different policy rationale from patents and copyrights. Copyrights and patents are designed to create legal monopolies that drive up the price of creative works and thereby reward authors and inventors for their creativity. Although consumers may benefit from the resulting increase in creativity, the short-term effect is to force them to pay more than they would in a competitive market. Trademarks aren’t like that at all. The point is not to limit competition. To the contrary, the point is to enhance competition by ensuring that consumers know what they’re getting. This is why it’s emphatically legal to run comparative advertising featuring your competitor’s trademarks. Microsoft may own the “Windows” trademark, but Apple is free to use it as a punching bag as long as they don’t mislead consumers about what they’re getting.
The same principle applies in the Dark ‘n’ Stormy case. The point of trademark law is to make sure consumers know what they’re getting (whether it’s Gosling’s or Zaya’s), not to give Gosling’s a monopoly on the concept of mixing ginger beer with rum. I haven’t seen Zaya’s ad and I’m not a trademark lawyer, so I don’t want to speculate on the legal merits of Gosling’s position. But certainly the apparent purpose of Zaya’s ad—encouraging bartenders to substitute their own rum in place of Gosling’s—is entirely within the spirit of trademark law. If the net effect of Gosling’s threats is that consumers wind up with fewer opportunities to try mixing ginger beer with different kinds of rum, that is certainly not what trademark law is supposed to accomplish.

Unfortunately, this doesn’t seem to be widely understood among the general public. And as a consequence, when a company claims that its trademark gives it broad powers to control how certain words are used, many people believe them. As a result, a legal regime that’s supposed to be about protecting consumers from fraud winds up protecting producers from competition, to consumers’ detriment.


Attribution Is a Two-Way Street

To add to yesterday’s post about the WaPo/Gawker spat, Spencer Ackerman offers another example of a blogger breaking a story that got picked up without attribution by numerous mainstream media outlets:

This is the fault of an outdated newspaper convention that equates proper referencing with an admission of professional failure. Before the internet, it was pretty easy to get away with slighting your colleagues. But now that everyone has GoogleNews at their fingertips, it looks like exactly what it is: churlish and archaic vanity. Everyone can see who got the story first. Not a single reader, I’ll bet, will ever say, “Aha! Because Noah Shachtman got the story first, clearly Julian Barnes is an inferior reporter!”

It’s not just blogs, either. There are a ton of specialist newsletters doing deep in-the-weeds reporting — Inside the Pentagon is one — that newspapers treat like uncreditable wire copy. This has to end. I credited Bloomberg and the LAT in my story today, because they got material I used. It didn’t hurt my pride or discredit my piece. Not citing it would have, though.

There are a couple of things worth highlighting here. First, I think newspaper partisans drastically underestimate the amount of important, “deep in the weeds” reporting that is done by blogs and other online media outlets. In the field I know the most about, tech policy, the best material is almost all produced online. Ars Technica, for example, covers tech news in a way that’s more thorough, more insightful, and more timely than any dead-tree publication. The folks at Wired’s Threat Level blog do a ton of high-quality reporting on the security and privacy beat. And for all its flaws (and there are a lot of them), Techcrunch regularly breaks news about the latest developments in the technology industry. The commonly-repeated claim that all the important news is broken by traditional newspapers simply isn’t true.

Second, online media sources are way more sophisticated than their print colleagues when it comes to providing proper credit for breaking stories. The blogosphere has evolved a subtle set of norms about when linking is appropriate, and it even has a fairly sophisticated enforcement mechanism: if you gain a reputation for not giving credit where it’s due, others will be less likely to give you credit for your stories. The result is a robust link economy in which the people who break news more often than not get credit (and the traffic that goes with it) for their work.

Of course, if you start with the assumption that all the “real reporting” is done by a handful of print newspapers, and that anyone who writes for a blog must not be a real journalist, then you’re going to miss these subtleties.


Journalism and the Arrogance of Power

This week’s link-bait champion is a story by the Washington Post‘s Ian Shapira. Last month, Shapira wrote a profile of a “generational guru” named Anne Loehr who charges corporate executives hundreds of dollars an hour to dispense platitudes about today’s 20-somethings. The same day, Gawker’s Hamilton Nolan did a post that pulled out the funniest quotes from Shapira’s story and added a few sentences of Gawker’s patented snark. Finally, last Sunday, Shapira wrote a long follow-up titled “The Death of Journalism,” arguing that Nolan’s post is a prime example of what’s killing “real reporting” of the kind that’s practiced at the Post.

In the last four days, so many people have weighed in that it would be impossible to summarize the whole conversation in one blog post. Mike Masnick and Matthew Ingram have two of the sharpest takes I’ve seen. But one thing I haven’t seen anyone point out is that Shapira’s complaints about Nolan could just as easily be made about Shapira himself.

Shapira is angry that Gawker reaped several thousand pageviews by doing a post that was basically just a summary of his own work and didn’t share a dime of the advertising revenue. But my question for Shapira is this: how much revenue did the Post share with Anne Loehr, the subject of Shapira’s story? After all, Shapira’s column is basically just a summary of some things that Loehr and some of her students said. Shouldn’t Loehr be getting a cut?

Of course, journalists would retort that paying your sources is tawdry checkbook journalism. Reporters have gotten used to getting material from their sources for free, and on their own terms. Some even take umbrage if a source insists on having more control over the interview process.

Moreover, as Spencer Ackerman notes, mainstream media sources routinely mine one another’s stories for material. Attribution is given only in the most clear-cut cases, and revenue-sharing is unheard of. And mainstream media outlets are rather less likely to give credit to non-traditional sources like blogs than they are to other major media outlets. Indeed, until recently many mainstream media outlets refused to put outbound links in their news stories at all.

Still, Shapira did spend a lot more time writing his story than Nolan spent quoting it. So isn’t it true that Gawker depends on people like Shapira to provide them the raw material to write about? Not really. The Internet is chock full of stupid people saying funny things that sites like Gawker could snark about. Shapira was simply one source among many. If Shapira hadn’t written his story, Nolan would’ve simply written about something else. And in many cases, the “something else” would not have been something that Shapira would regard as “real journalism.”

Indeed, as Gawker itself has noted, the Washington Post‘s communications shop understands this point all too clearly. At the very same time Shapira was working on his piece, the Post was sending Gawker daily emails urging them to “rip off” more of their content.

So Shapira’s complaints ring a little hollow to me. I think that what’s fundamentally going on here is the following: until the emergence of the web, newspaper reporters were members of a tiny elite that had exclusive access to the means of mass communication. That made reporters powerful people; advertisers needed newspapers to promote their stuff, and sources needed reporters to get their message out to a wide audience. Decades of privilege bred a certain degree of arrogance. The print journalism profession has developed an elaborate, self-justifying ideology in which its own activities are central to the functioning of American democracy.

In the last decade, newspapers have lost their privileged position. Now everyone can publish for a large audience. And so lots of people are doing to the Post what the Post has always done to the rest of the world: treating it as raw material from which to fashion stories, without sharing any of the resulting revenue. The only way this could be considered an outrage is if you’ve grown used to the process only happening in the other direction.

This isn’t to deny that the Post is in a tough spot. Nor is it to deny that the Post does a lot of worthwhile things, and it would be nice for them to stay in business. But what’s happening isn’t “the death of journalism” or the end of “real reporting.” It’s the death of a one-way model of journalism in which a handful of print and broadcast journalists decide what to cover and the rest of society gratefully accepts what’s dished out. Shapira seems to regard his employer as separate from the “wild and riffy world of the Internet,” but he’s wrong. The Post is now just a medium-sized fish in a vastly larger pond. If it wants to survive, the first thing it needs to do is to get over itself.


Charles Darwin: Bottom-up Thinker


So why is this blog called “Bottom-Up”? One of the central themes of the blog will be a contrast between two styles of thinking, which I’ll call “top-down” and “bottom-up.” I’ll argue that top-down thinking comes more naturally to people, but that as society grows more complex, bottom-up thinking is becoming increasingly important to understanding the world around us.

I’ll contend that one of the big problems with top-down thinking is a tendency to pay too much attention to abstract arguments and too little to the concrete “facts on the ground.” So rather than launching into a long “bottom-up manifesto,” which would be a little bit hypocritical, let me start with an example of someone who exemplified the bottom-up method of understanding the world: Charles Darwin.

This is a big year for Darwin fans. The British biologist was born 200 years ago, and he published his most famous work, On the Origin of Species, 150 years ago. Although Darwin’s theory of evolution isn’t remotely controversial among people with relevant expertise, it continues to be intensely controversial among the general public, with just 39 percent of Americans saying they “believe in evolution.”

This is puzzling. Darwin’s argument is compelling, and in the last 150 years biologists have accumulated overwhelming evidence in its support. (I’m not going to belabor the point, but if you’re not already convinced I encourage you to read Dawkins and Dennett.) Moreover, the basic theory has a remarkable elegance and simplicity. It’s less complicated than Maxwell’s equations of electromagnetism, for example, which were published around the same time. Yet you’re not likely to meet people at parties convinced that Maxwellism is a “theory in crisis.”

So what’s going on? One possible explanation is that people perceive the theory of evolution as a threat to their religious beliefs. There’s certainly something to this, but it can’t be the whole story. Only 55 percent of people who seldom or never go to church profess belief in evolution—clearly the other 45 percent aren’t all religious zealots. Moreover, the Catholic Church itself has concluded that Darwin’s theory can be reconciled with Christian faith. If the Pope can do it, you’d think other Christians would be able to follow suit.

I think one of the most important factors is that people have a strong inclination to think in terms of stories involving anthropomorphized actors. Historically, when people lacked a good explanation for some phenomenon, they would often invent a sentient being whose actions could explain it. When primitive tribes lacked a naturalistic explanation for weather patterns, they often postulated a deity that controlled the weather, and they would perform rain dances or give offerings in hopes of influencing it. When they lacked a naturalistic explanation for diseases, they postulated the existence of demons, witches, or other malevolent forces to explain them.

Fortunately, when naturalistic explanations for these phenomena came along, most people didn’t reject them on religious grounds. Today there’s no shortage of people who believe in demons and witches, but most of them will still take their kid to the doctor when he gets sick. So why hasn’t the same thing happened with evolution? I think the key difference is this: the germ theory of disease offers a new, tangible thing—the “germ”—that makes you sick. Likewise, the stylized weather map you see on television, with its cold fronts and low pressure systems, offers people a plausible story about what caused the weather to change.

In a sense, people simply traded in the old, supernatural entities—demons, witches, Tlaloc—for new, natural ones: germs, low pressure systems, cold fronts. I doubt one person in ten could give a detailed explanation of how germs make people sick (come to think of it, I’m not sure I could) but this doesn’t matter. Most people are happy to simply accept the formulation “germs cause sickness,” without delving any deeper.

The problem is that it’s almost impossible to frame Darwin’s theory in these same terms. There are no particular things that “cause” evolution. Darwin’s central metaphor, natural selection, is far too abstract to serve the same role that germs play in the germ theory. And as a consequence, when you try to explain evolution to someone who’s used to thinking about scientific findings in “X causes Y” terms, they often interpret you as saying nothing causes evolution—that the diversity of life is some kind of fluke or accident.

Darwin’s theory is hard for people to grasp because it relies on a subtle, statistical concept of causality. Natural selection happened in a series of infinitesimal steps over a mind-bogglingly long period of time. Our brains, which are optimized for thinking about the people around us and how to get our next meal, are bad at thinking about phenomena like that. People find it easy to understand a single step in the evolutionary process, but they find it hard to imagine that millions of tiny steps, undirected by any intelligence, could in the aggregate produce trees and elephants and people.
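
Dawkins, whom I mentioned above, has a famous toy program (the “weasel” program from The Blind Watchmaker) that makes the power of accumulated tiny steps concrete. Here’s a minimal Python sketch of the idea; the target phrase, mutation rate, and brood size are arbitrary choices of mine, and of course the fixed target is the toy’s well-known disanalogy with real evolution, which has no goal in mind:

```python
# A toy version of Dawkins' "weasel" program, illustrating cumulative
# selection: each generation makes small random copying errors, and
# selection keeps whichever copy happens to be closest to "fitter."
# No single step accomplishes much; the accumulation does everything.
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate: str) -> int:
    # Count how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    # Copy the parent, occasionally miscopying a character at random.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in parent
    )

random.seed(0)  # arbitrary seed, just for a reproducible run
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while current != TARGET:
    generation += 1
    # Selection: of 100 slightly-mutated offspring, keep the best one.
    current = max((mutate(current) for _ in range(100)), key=fitness)
print(f"Matched the target in {generation} generations")
```

With these parameters the initial gibberish typically converges within a few hundred generations, whereas waiting for the whole phrase to appear by single-step chance would take vastly longer than the age of the universe. That gap, between one lucky step and millions of small selected ones, is exactly the intuition people are missing.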

Unfortunately, the universe doesn’t care if we find it counter-intuitive. Evolution happened, and human beings who want to study it effectively have to learn to think about the world in ways that don’t come naturally. Moreover, evolution isn’t an isolated example: modern societies are full of complex, decentralized systems that can only be understood by thinking in similarly counter-intuitive ways.


Blogging from the Bottom Up

Thanks for reading my new blog, Bottom-Up! If you’re not familiar with my past work, I’m a grad student in computer science at Princeton and a sometime writer with a focus on technology policy. Over the last five years or so, I’ve gradually accumulated a number of group blogging opportunities. They’re all great blogs, and I’d love to write regularly for all of them, but I’ve started to find myself stretched pretty thin. So this is my attempt to consolidate and focus my energies on a single blogging project.

This blog is going to be a bit of an experiment. The typical blog is tightly coupled to the news cycle. That makes for a lively read, but it tends to produce a kind of scatter-shot effect, with little connection between one post and the next. I’m going to try to achieve a bit more continuity and coherence here. Posts may run long, and if a topic strikes my fancy I may do several posts in a row about it. I’m not going to try to hit every story that shows up on Techmeme; if you’re looking for comprehensive coverage of tech policy (or anything else) I suggest you look elsewhere.

Lest that scare you off, I’m very conscious of the dangers of the opposite extreme. Nothing is more tedious than excessive navel-gazing, especially from a writer with no editor and no word limits. Blogging is fundamentally a conversational medium, so I plan to jump into the blogospheric fray on a regular basis.


One of my goals for the blog is to bring some of the key insights of the tech policy world to broader public policy conversations. In his brilliant book, Here Comes Everybody, NYU’s Clay Shirky devotes several paragraphs to “The Social Origins of Good Ideas,” a paper by Chicago sociologist Ronald Burt. Burt studied the process of idea-generation in a large electronics firm, and found that the most creative individuals were often those who had regular conversations with people outside their own departments. As Burt describes it:

People whose networks span structural holes have early access to diverse, often contradictory, information and interpretations which gives them a competitive advantage in delivering good ideas. People connected to groups beyond their own can expect to find themselves delivering valuable ideas, seeming to be gifted with creativity. This is not creativity born of deep intellectual ability. It is creativity as an import-export business. An idea mundane in one group can be a valuable insight in another.

I’ve found technology policy to be a remarkably fertile source of interesting ideas about the broader worlds of business and public policy. This is true for at least three reasons.

First, the rapid pace of technological change means that it’s fairly common for important arguments to get settled with one side as a decisive winner. Most industries aren’t like that. People have been arguing about health care reform for decades, and they were making pretty much the same arguments in 1948, 1965, or 1994 as they are today. That’s not true in tech policy. It’s easy to look back at the major technology debates of the last couple of decades and spot clear winners (open networks, free software) and losers (encryption export controls, micropayments, “thin clients”). And knowing who lost a given debate makes it much easier to say something interesting about why they got it wrong.

Second, the software industry is home to extraordinary institutional diversity. Again, other industries aren’t like that. For example, car companies pretty much all look the same: large, capital-intensive, and bureaucratic. In contrast, software is produced by all sorts of people and institutions: large software companies, small startups, ideologically-driven non-profits, academics, loose networks of volunteers, and so on. I think we can learn a lot about the effectiveness of different kinds of organizations by observing what happens when Microsoft, say, has to compete directly with a small startup like Google circa 1999 or a non-profit organization like Mozilla today.


Finally, computer programmers have unique experience dealing with problems of scale and complexity. Almost any working programmer can tell a story about software she built that worked flawlessly with a handful of test users, only to encounter unexpected problems “scaling up” to thousands or millions of users. Programmers have developed some interesting techniques for managing these problems. Many professions face problems of scale and complexity, but very few face them as acutely or as regularly as computer programmers do. So I think examining those techniques can produce insights that will be valuable to people who have no intention of writing computer code.

So thanks for reading, and I hope you’ll stick around. A link to the RSS feed can be found at the bottom of the page.
