In the midst of our current debate about Facebook, have we ignored a core issue? Public scrutiny has focused almost entirely on the company and its practices. Congressional testimony from a whistle-blower earlier this fall, together with the Wall Street Journal’s continuing exposé, has revealed the extent to which the company’s employees knew, through their own research, about the damage that their product causes. And yet the product itself has been strangely absent from much of the discussion.
There has been talk of algorithms, notably how Facebook determines which posts users see, and how rankings favor sensational content, feeding extremism and aiding the spread of disinformation. And user interface experts have long noted the myriad small and subtle ways in which sites like Facebook entice the user into more frequent and impetuous interactions.
These things matter, and their pernicious effects are well known, if not always acknowledged. But the essence of a software product (such as the Facebook app) is not found in the buttons and colors that appear on the screen, nor in the algorithms that prioritize one data item over another. Instead, it lies in the concepts of an app, the behavioral building blocks we interact with, that shape how we use and understand it, and that determine the impacts of our actions.
The concepts of “newsfeed,” “likes,” “friends,” “tagging” and so on: these are the core of Facebook, and scrutinizing them reveals the ways in which Facebook’s design often serves the interests not of users but of Facebook itself. These concepts, in other words, are the drivers behind Facebook’s wider societal impacts, and the damage they cause is not accidental but is by design.
The purpose of the newsfeed is, according to Facebook, “connecting people to the stories that matter most to them.” If that were true, you should be able to filter and sort posts as you would items in an online store. And yet Facebook’s newsfeed not only lacks the most elementary controls but is not even stable: a refresh of your browser window will show you a new selection of posts, changing not only their order but even dropping top posts that you might have wanted to read.
We鈥re so familiar with this concept that we fail to notice how strange it is. The newsfeed concept has conditioned us to accept what appears to be a near-random selection of posts, opening the void into which Facebook can insert the algorithms that supplant our own choices.
Just imagine how many books Amazon would sell if it “connected us to the books that matter most” by showing us ever-changing, endless lists of titles. Now you might counter that these practical concerns are not what Facebook’s designers have in mind. It might surprise you, then, to read among their own seven guiding principles the one entitled “useful,” which begins: “Our product is more utility than entertainment, meant for repeated daily use, providing value efficiently.”
Sometimes the problem is not an individual concept but the way in which multiple concepts are overlaid. We are all familiar with the concept of upvoting, in which users’ approvals or disapprovals of items (such as comments on a newspaper article) are aggregated to rank them by popularity. We’ve also seen (in Slack, for example) the concept of emotional reaction, in which readers can respond to a post with a smiley face or a heart. Facebook’s like concept ingeniously fuses these two together: reacting to a post with a heart implicitly upvotes it. What not all users realize is that an angry reaction counts as an upvote too, and according to a recent report, any emotional reaction counts for more than a simple like. A design that separated these concepts would empower users to make independent decisions: to express anger, for example, without contributing to a post’s promotion. It would not, however, serve the interests of Facebook.
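The contrast between the fused and separated designs can be made concrete with a minimal sketch. The class and method names here are hypothetical illustrations of the two concept designs the paragraph describes, not Facebook’s actual code or ranking system:

```python
# Illustrative model only: how fusing the "reaction" and "upvote"
# concepts removes a choice that a separated design would give the user.

class FusedPost:
    """One action: any reaction, angry included, also boosts ranking."""
    def __init__(self):
        self.reactions = []
        self.score = 0

    def react(self, emotion):
        self.reactions.append(emotion)
        self.score += 1  # reacting implicitly upvotes the post


class SeparatedPost:
    """Two independent concepts: express emotion, promote, or both."""
    def __init__(self):
        self.reactions = []
        self.score = 0

    def react(self, emotion):
        self.reactions.append(emotion)  # no ranking side effect

    def upvote(self):
        self.score += 1  # promotion is a deliberate, separate act


fused = FusedPost()
fused.react("angry")      # intent: object to the post
# yet the objection still raised the post's score

separated = SeparatedPost()
separated.react("angry")  # objection recorded, score unchanged
```

In the separated design, an angry reader’s objection stays an objection; in the fused one, it becomes fuel for the very post it condemns.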
Problems can also arise in the way in which concepts are synchronized together. Some degree of automation, in which actions in one concept can trigger actions in another, is often desirable; if you decline an invitation in your calendar, for example, you expect the associated event to be removed. But such linkages are not always what the user wants. When you tag someone in a photo in Facebook, their name becomes attached to the image. In addition, however, the visibility of the photo changes: now all the friends of the person being tagged can see it. In effect, this means that someone can share your photo not only with their friends but also with yours. You can turn this behavior off, but unfortunately it’s the default, and many users aren’t even aware of it. Worse, anyone can tag you, even if not a friend, and it’s not clear what control you have in that case: Facebook warns ominously that “tags from people you’re not friends with may appear in your timeline review.”
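The synchronization the paragraph describes can also be sketched as a toy model. Again, the names and the `sync_visibility` flag are hypothetical stand-ins for the default behavior discussed above, not Facebook’s actual API:

```python
# Illustrative model only: tagging synchronized with visibility.
# By default, tagging a person widens the photo's audience to
# include that person's friends, without the owner acting.

class Photo:
    def __init__(self, owner, owner_friends):
        self.owner = owner
        self.tags = set()
        self.visible_to = set(owner_friends)  # owner's chosen audience

    def tag(self, person, their_friends, sync_visibility=True):
        self.tags.add(person)
        if sync_visibility:  # the default the article criticizes
            # tagging silently expands who can see the photo
            self.visible_to |= set(their_friends)


photo = Photo("alice", owner_friends=["bob"])
photo.tag("carol", their_friends=["dave", "erin"])
# "dave" and "erin" can now see alice's photo,
# though alice never chose to share it with them
```

A design that kept the two concepts independent would record the tag but leave the audience decision to the photo’s owner, as passing `sync_visibility=False` does here.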
In all these cases, Facebook’s design is intricate and carefully considered. The problem is not an egregious design flaw that subverts the concept’s purpose. Rather, it is that the actual purpose may not be what we users had in mind: it might be Facebook’s, and not our own.
Consumers today have a greater awareness of design than ever before. We expect our appliances and products to be easy to use, with features that are aligned with our needs. As software products become increasingly pervasive in all aspects of our lives, we must balance an appreciation of the benefits they bring with a cool assessment of the risks they pose. Such an assessment must begin with a product’s core concepts, and by posing a simple question: whose needs are they designed to serve?
Daniel Jackson is professor of computer science at MIT, and author of The Essence of Software: Why Concepts Matter for Great Design.