1. Introduction

“…a five-hundred-year-old painting, bought for a museum with public funds, or perhaps given to it by a generous benefactor, and with no possible copyright claim from the artist or his heirs, is suddenly claimed to be the copyright of the museum where by a series of historical accidents it has come to be lodged. This is of course nonsense…” (Nicoll, 2005, p. 74)

With a no-holds-barred declaration, John Nicoll (a respected publisher of art books who had spent more than 25 years building Yale University Press’s place in the publishing world) called for a restructuring of the publishing infrastructure supporting art history. His argument focused specifically on whether fees should be charged for the use of images of public domain works residing in tax-subsidized institutions. The answer – that there are, in fact, ways for institutional players to re-engineer the publishing ecosystem – represents a chronicle worth telling, with lessons worth learning.

Hilary Ballon and Mariet Westermann (2006), also writing about the struggles of publishing in art history, noted that “[i]t is a paradox of the digital revolution that it has never been easier to produce and circulate a reproductive image, and never harder to publish one” (p. 21). If publishing in general is in crisis because of the seismic re-ordering of a digital world, the field of art history represents the extreme of the topsy-turvy spectrum. Rights holders are accustomed to licensing image content for limited-edition print runs. Since e-journals and e-books cannot possibly be issued with a promise that they will disappear after three years, e-publishing in this field is paralyzed. Out of this particularly challenging corner of the publishing world, a project initiated by the Metropolitan Museum offers some hope of a collaborative way forward. What sociological re-engineering enabled progress on this problem? Are there other lessons here that throw at least streaks of light on process re-engineering provoked by digital innovation in publishing?

In this article, I begin by reviewing how a leading repository of art, the Metropolitan Museum of Art, and a non-profit intermediary, Artstor, created an alternative pathway to provide primary source content in support of image-intensive publishing. [Disclosure: I have been the president of Artstor throughout this project.] This venture is framed in the context of a publishing system moving toward greater freedom and an aim to bring about ever lower (or no) fees to readers.

In general, providing academic content for free requires some sort of restructuring of a public release process – whether it is the distribution of processed content (such as journal literature) or less processed content (such as primary source content like images). To the extent that the distribution adds value, it might be worth paying for. This case study argues that there are places where community-wide interests align to support no-cost distribution, describes what it takes to keep those interests aligned, and explores what we did collectively to facilitate re-structuring and make it ongoing. Are there lessons for open access publishing more generally in this example of cross-subsidization among mission-driven organizations?

                       * * *

In April 2002, I gave a presentation at Bryn Mawr College to introduce Artstor to a group of art historians and librarians from a range of area colleges and universities. Artstor was an effort established by the Mellon Foundation to create a large and growing digital library of images to replace, and eventually improve upon, the teaching slide collections that many colleges and universities were at that time seeking to digitize. Museums, artists, and photographers were fairly anxious about what digitization would mean. We were trying to do something useful – and not upset a delicate ecosystem – by isolating and trying to support one very narrow set of uses: the teaching and studying functions that slide libraries had supported for most of the 20th century by photographing pictures in published books and journals to create 35 mm slides. I finished my talk; the first question came from one of the professors: “The idea of a digital version of a slide library sounds fine,” he began, “but the real problem is getting images for publishing. It costs a fortune and a lot of the time I can’t even find the people to contact anyway. Are you guys going to do anything about publication?” I clarified that we had our hands full just trying to keep the whole community working together to allow this better version of the slide library to go forward. The next hand went up. “I appreciate what you’re trying to do to support teaching,” she said, “but are you going to be able to do anything to help with getting images for publication? It’s a real problem.” Point taken.

Concerns about permissions and the associated costs of images for use in publications rose in prominence in the period leading up to, and more fervently after, the Nicoll article in Apollo. While the practices about which he complained were widespread, his argument was met with sympathy on the part of some in museums. Museum publishers themselves had to pay fees to other museums in support of catalogues and other curatorial publications. Ken Hamma (2005), then the Executive Director for Digital Policy and Initiatives at the J. Paul Getty Trust, wrote:

This resistance to free and unfettered access may well result from a seemingly well-grounded concern: many museums assume that an important part of their core business is the acquisition and management of rights in art works to maximum return on investment. That might be true in the case of the recording industry, but it should not be true for nonprofit institutions holding public domain art works; it is not even their secondary business. Indeed, restricting access seems all the more inappropriate when measured against a museum’s mission – a responsibility to provide public access.

And in 2007, senior staff of the Metropolitan Museum of Art embarked on an effort to digitize images of its enormous collection and roll out an ambitious program to manage these assets and make broad use of them. Files were being managed on an ad hoc basis and getting lost, and the museum’s publishing and collection management leadership realized they needed to invest in infrastructure. As Susan Chun, then director of publishing, notes: “a staff photographer was now able to take 40 images a day rather than five – we needed to manage those assets or they would be lost.” As part of justifying the investment in managing the content, Chun and others recognized that the museum could now begin to respond to the needs of scholars.

Met staff asked whether Artstor would be willing to use its infrastructure to provide images for academic publication without charging fees. The museum wanted to license images commercially (when possible and appropriate) and to offer, systematically, images for academic publishing with no fees, but it was interested in exploring options for someone else to do the work of fulfilling these requests. An earlier collaboration (AMICO), which the Met and approximately 30 other museums had built together starting in 1997, had sought to help museums collectively promote their missions through digitization. Policies that seemed rapacious to Nicoll and others were neither universally practiced nor universally supported within the museum universe. And yet, there was tension surrounding the very significant costs that museums were facing when going digital.

A huge challenge for museums (in 1997, in 2007, and today) is the significant cost of managing the technology infrastructure around digital assets and associated data. Internally, a great deal of re-alignment was needed to justify the investment in enabling infrastructure and then to shift museums’ mindset about the “value” of the images. At the same time that they had to invest in technology to manage digital images, museums were being challenged on whether they actually owned the rights to wring revenues out of the images. As many have long recognized, a painting that might be “worth” millions of dollars were it on the open market is actually a liability rather than an asset to a collecting institution. It needs to be conserved and insured, and supported by physical plant and staff. And providing an image of that work requires the labor of photographers, catalogers, and file clerks to fill out forms and provide negatives. To museums, charging for the use of an image was small recompense for all the past and continuing investments that caring for and documenting the work required. As the Met moved toward a model in which it would have a digitally re-directable copy, the only barrier left to providing it without fee to educational authors was the delivery of the image and the gathering of a (no-fee) license agreement. The museum believed that Artstor’s technology and reach could take the burden of collecting licenses and fulfilling orders off its plate.

When Artstor was first approached about providing the fulfillment service that would deliver Met images to users, we didn’t hesitate. We could see enough symbiosis with our existing image delivery infrastructure to assume that we could make it work. And with the memory of the Bryn Mawr conversation (and many others) in mind, the benefits to the community (and to our reputation) seemed irresistible. There would be new work, of course:

  • Instead of only loading derivative images onto our production servers (located in a commercial hosting facility), we had to be prepared to load and provide 20 megabyte TIFFs.
  • We needed to prepare a path for users that clearly distinguished that they were entering a different space from the Artstor Digital Library, and that the terms of use in that new space were different from the Library’s normal terms (which allowed only classroom use and use on fire-walled course websites).
  • We had to assist the Met in preparing an online license form to capture users’ data, determine if their uses were suitable, and provide those data back to the Met.
  • We had to develop an opening in our authorization and access infrastructure that would allow non-Artstor subscribers to get a password to access the images that could be used freely for academic publications.

Another important question was whether to include images of works under copyright. On the one hand, it would have been perfectly consistent with earlier practice to do so (when the Met sent out a transparency of a Jackson Pollock painting for publication, it only reminded the user to seek all necessary permissions, without policing whether those permissions were sought or obtained). But the ease of downloading could make artists or their representatives uncomfortable, and museums need to be mindful of maintaining mutually respectful relationships with artists.1

And how many downloads should a user be allowed to make (since the Met was willing to provide content on a trust basis but didn’t see the need to enable wholesale access to its archive)? In the end, downloads were limited to public domain images, capped at 10 per user per month, since it was highly unlikely that anyone would need more than this for any particular academic publication.
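
This policy is simple enough to state as a rule. Below is a minimal sketch in Python of what the eligibility check amounts to – not Artstor’s actual implementation, and the names and record structure here are hypothetical:

    from dataclasses import dataclass

    # Hypothetical record; the real IAP system tracked far more than this.
    @dataclass
    class ImageRecord:
        image_id: str
        is_public_domain: bool  # IAP offered only public domain works

    MONTHLY_DOWNLOAD_LIMIT = 10  # the per-user, per-month cap the Met settled on

    def may_download(image: ImageRecord, downloads_this_month: int) -> bool:
        """Apply the two IAP eligibility constraints described above."""
        if not image.is_public_domain:
            return False
        return downloads_this_month < MONTHLY_DOWNLOAD_LIMIT

    # A user with 9 downloads this month may take one more public domain image.
    print(may_download(ImageRecord("example-id", True), 9))    # True
    print(may_download(ImageRecord("example-id", True), 10))   # False

What the sketch underlines is how little enforcement machinery a trust-based service required: the substantive work lay in the license form and the delivery of large files, not in gatekeeping.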

We had to develop the capacity for non-Artstor subscribers to get a password for access (either by contacting us or by contacting the Met). This meant opening up a view on the Artstor platform to non-subscribers for the first time. We decided to launch the first version to subscribers only, but a few months later we were able to create a segregated view of the images, accessible by login/password, which could be provided to anyone in the world by either Met or Artstor staff.

In one sense, the program, which we called Images for Academic Publishing (IAP), represented a big new step, but in another, it was merely an extension of the trust system that museums had been relying on for years in sending out 4×5 transparencies for one-time use.

We invited other museums to participate, but most decided to wait and see how it went. A few archives, including the photographs of Professor Mellink at Bryn Mawr College, asked to participate, but for the most part the Met stood alone. Some other museums began to take similar steps on their own, though doing so required building their own infrastructure for delivering files. Some still required human intervention, soliciting information on their sites from prospective authors and then having a person determine whether fees should be charged or waived before completing the electronic file transfer.

And, perhaps predictably, there were critiques. The Met had decided to set as a parameter a print run under 2,000 to distinguish academic publishing from monographs seeking to reach beyond the academic market. At a panel on publishing sponsored by METRO (a New York library consortium), Susan Chun and Doralynn Pines of the Met faced challenges from members of the audience about the allowable uses for IAP images – despite the fact that no other museum was systematically providing images for academic work at all. The Mellon Foundation sponsored a review of the Met’s program and held conversations with other museums about the Met’s model. Some participants expressed support; others saw the revenue that was produced as fair compensation for the effort museums put into creating the images.

Only in 2011 did other museums begin to see that having their content aggregated for exploration and use in academic publishing would be more useful – and cost-effective – than setting up their own services.

As other museums started to join IAP, they did so primarily for three reasons:

  • Broaden access to the collection while saving effort: As David Farneth of the Getty Research Institute notes (in discussion with the author), “for us the main attraction of IAP was the ability for users to have a quick, self-serve, and no-cost way of obtaining high-resolution images for scholarly publishing. It proved an effective way of proving how use increased when images were literally free and easy to obtain.” And as John ffrench of the Yale University Art Gallery (YUAG) notes (in discussion with the author), “at the time, we did not have a way to deliver files online through our own website.”
  • Outreach to an important audience that is drawn to the ­aggregation: As ffrench notes, “There are times a researcher may not know a particular work is at YUAG but upon finding it in your database may look further at our collection.”
  • Museums are committed to stewarding the integrity of their works’ public presentation and therefore care about the values of the channels through which those works are distributed: Rob Stein (in discussion with the author), Deputy Director of the Dallas Museum of Art and formerly of the Indianapolis Museum of Art (both IAP participants), notes those institutions’ commitment “to providing academic and public license to artworks in our collections whenever possible. In general, I feel that museums ought to be using as many reputable channels as possible to disseminate knowledge of our works of art to the wider community.”

These advantages – labor saving, aggregation, and serving as a reputable channel – are increasingly important as new channels appear.

In addition to the “mission good” achieved by supporting the publishing work of colleagues (in or out of academia) and the “mission good” of promoting research and awareness of their collections, museums also saw some value in no longer having to service the requests themselves. As of this writing, 15 institutions have collections in IAP.

Other museums that have been progressive in sharing their collections on their own are also exploring whether providing copies to IAP as well would serve their missions. These include the Los Angeles County Museum of Art, which provides over 20,000 images for free on its site, and the Rijksmuseum, which has taken a very significant step among European museums by making over 125,000 images of its works freely available through its website as well as through an Application Programming Interface (API) that allows others to reuse its data and images, with a goal of adding 40,000 images a year.
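
To give a concrete sense of what such an interface enables, here is a minimal sketch in Python of a query against the Rijksmuseum’s collection API. The endpoint and field names follow the museum’s public API documentation as I understand it, but the details should be treated as illustrative, and the key shown is a placeholder for a key obtained from the museum:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder: the Rijksmuseum issues keys to registered users

    def search_rijksmuseum(query: str, max_results: int = 5):
        """Search the Rijksmuseum collection and return basic image metadata."""
        resp = requests.get(
            "https://www.rijksmuseum.nl/api/en/collection",
            params={"key": API_KEY, "q": query, "ps": max_results},
        )
        resp.raise_for_status()
        results = []
        for obj in resp.json().get("artObjects", []):
            web_image = obj.get("webImage") or {}
            results.append({
                "title": obj.get("title"),
                "maker": obj.get("principalOrFirstMaker"),
                "image_url": web_image.get("url"),
            })
        return results

    if __name__ == "__main__":
        for item in search_rijksmuseum("Vermeer"):
            print(item)

That a researcher can retrieve images and metadata this way, without a human in the loop, is precisely the kind of re-engineering the IAP experiment anticipated.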

2. The Image Permissions Problem and Open Access

The IAP experiment offers useful lessons for those seeking to re-engineer publishing processes to increase access and lower costs.

Other fields may face fees to license translations or other cited sources, but image and rights licensing fees, on top of the extra costs of printing images of publication quality, make for an unusual challenge in preparing an image-intensive publication. These fees weigh especially heavily on the cost of scholarly discourse in art history and other image-intensive fields. Ballon and Westermann write that:

It is clear that the current regime of images and permissions impedes scholarly publication in art history in its print as well as digital forms. We recommend an organized campaign to break down barriers to access and distribution of images, in all media and at affordable prices, for scholarly research and publication. (Ballon and Westermann, 2006, p. 33)

Art history would benefit from communal efforts to support academic discourse. The IAP initiative’s effort to ameliorate these costs (and encourage scholarly publishing) re-works an unusual cost problem in publishing. But re-engineering the publishing process in art history also means understanding the currents – and counter-currents – that merge and pull within academic publishing. To be sure, the permissioning of images is but one tributary, and one with the particular issues that accompany primary source materials; but in the sense that publishing consists of the processing and dissemination of content via increasingly networked and digital channels, the role of image provision offers a particular lens on the “Open Access” movement towards lower costs and greater access.

The main current in Open Access concerns secondary literature, especially journal literature, and proposes that since society provides financial support for the research that is conducted (primarily in academia and the sciences), research institutions and society at large should be able to access that material without paying a second time via journal subscription fees. Within this movement are different currents – those who advocate for “gold” access (by which the author pays and access is completely free and open) and those who support “green” access (whereby publications are made open after an embargo period or in a deprecated form, so as to allow for premium-level, subscription-supported access as well).2 One major effort to support this access is the increasing number of universities that require researchers to put a copy of published material in an openly accessible institutional repository. That approach (opening up the author’s copy of a published journal article) re-purposes already processed articles. The main Open Access current (journal literature) thus focuses on the degree to which the value-adding processing (authoring, editing, peer reviewing) is provided by the community, and asks whether journal publishers should be charging as much as they do for the interstitial work of assembling and disseminating.

But while the Open Access movement is strongest as it relates to journal literature, “free and open” are directional beacons for all levels of raw and processed academic materials, with various mission- and market-driven justifications. On the raw content side, libraries, archives, and museums are moving, with increasing rapidity, away from the anxiety that seeing a manifestation of an object in digital form will satiate interest. Part of this is spurred by the ubiquity of cameras and the inevitable fact that casual photography resists constraint.3 Part of the movement is due to the fact that those who care for artworks are growing less defensive about whether a picture of a work satiates the viewer’s interest: as one observer put it, they now know that seeing a picture of the beach doesn’t lessen one’s interest in going to the beach. Many museums have begun to feel that digitization (on their own website and elsewhere) promotes interest in their collections.4 Scholars and museums who felt in the past that they were obligated to review, update, and adjudicate any opinion about the attribution of a work in their collection are now open to dialogue and recognize that even when the “last word” on a work is published, it will never be the last word.

On the other end of the digital content range (where primary and secondary source content has been used to produce lectures or other presentations of academic work), OpenCourseWare, open educational resources like Khan Academy, and Massive Open Online Courses (MOOCs) propose that the content of courses (processed by faculty) can re-engineer the pedagogical forum and channel in disruptive ways to dramatically increase collective knowledge. Free and open is celebrated as the ultimate fulfillment of a tax-subsidized institution’s mission.

Disruption of the practices of existing firms (as articulated by Clayton Christensen) is a re-engineering – a re-channeling of the energy of sellers and buyers to create new channels when established ones are too calcified to serve new needs and new opportunities. Those who believe that the traditional models (of publishing, accessing images, or delivering education) have not adapted to new technological possibilities, or are not using those new technologies to attain maximum societal benefit, may be inclined to overturn the roles of traditional institutions.

But, unlike those who build new companies predicated on undercutting old industries, many who advocate for change in these processes do so while wanting to uphold traditional institutions. Few have interest in completely overturning the system. Scholars still want to publish. Institutions care more, not less, about peer review, impact factor, altmetrics, and prestige. The Open Access movement itself argues that journals that are more open will perform strongly on the traditional scales, not invent new ones.

So, back to publishing images: even if the price to the end-user is reduced, the costs of producing the product must be paid, by someone, somehow. New technologies may introduce new costs or new savings. New investments or capital costs might result in reduced ongoing or operating costs. But, inevitably, there will be costs. These can be divided into two categories: the cost of producing and internally managing the information (images and data, in the case of IAP), and the cost of distributing that information effectively. To re-engineer any publishing project, we must consider the goal of the process (what audience is ideally served, and with what quality of solution). But we also need to ask how best to cover the costs – first of digital content production and then of distribution.

3. The Costs of Free

Libraries have been moving toward “publishing” their content ever since they began digitizing and managing it. The library ethos is focused on preservation and access. Commercialization has rarely been a significant force in decision-making about archival collections, and “control” over primary source texts has, perhaps, been less of an issue than in media types where the material must be labeled. That is to say, the papers of Benjamin Franklin certainly include evidence for a range of scholars’ research, but they are, self-evidently, the papers of Ben Franklin. A vase, an old master drawing, or an African mask are objects that need to be categorized and labeled in order to be managed and used, placing perhaps a greater mandate for “control” in the hands of the curators who care for them. This need, combined with the inevitable sense of responsibility (which can, in extreme cases, border on possessiveness) that any scholar feels towards material in his or her care and about which he or she hopes to make an important argument, has probably contributed to the tendency of museums to assert control over access to and documentation of their works. The technology that breaks down that ability to control access puts museums in the new position of both looking for the positive aspects of their works being set free and moving from closed access to setting the terms of open access. In other words, they can let open access happen to them (as museum visitors snap away with their smartphones) or they can embrace it and set the terms of how the museum interacts with the world. In 2006, when the Museum of Modern Art saw iPod users listening to free podcasts about MoMA works, it moved to make its own free “official” audio guides and eventually hired the professor and guerrilla art commentator Beth Harris to lead its digital education efforts.5

Nevertheless, the provision of images requires that those images exist and are accessible without a lot of staff intervention, and the work that enables this is vastly misunderstood. Because any 15-year-old with a smartphone can snap a picture, the assumption is that high-quality documentation is also easy. But the work required to properly document a work – remove it from the gallery or storage with care, perform any necessary conservation work, light and photograph it properly, color correct and crop the raw image file – is significant. And cataloging the work, whether it requires new research or not, means creating electronic fielded data, perhaps having to re-key data that was created for a publication or a wall placard. Even more significant are the costs and challenges associated with managing, maintaining, and eventually migrating digital image files – an emerging field of Digital Asset Management that consumes billions of dollars across all sectors. One rumored bid to provide an integrated digital asset management solution for the Smithsonian as a whole was over $2 billion.

A little historical perspective is now possible in considering museums’ arc on the costs (and potential benefits) of digitizing images from their collections. The first step was taken in the early 1990s, when Bill Gates’ Interactive Home Systems, later renamed Corbis, sought – and acquired, for fees – the digital rights to works from museums like the Frick and the Philadelphia and Detroit museums of art. Museums had little sense of what they were providing but were intrigued by the idea that their images might produce funding. But the realization that an external, commercially-driven organization was setting the terms of what museums could do with images of works under their care quickly raised concerns. In reaction to this misadventure, a group of leading museums under the leadership of Max Anderson (then director of the Whitney) sought to collaborate, developing AMICO as a network to advance the needs and options of its members. The consortium made very significant progress in setting standards, educating staff about rights issues and workflow around digitization, and building a shared set of museum community values. Some staff, such as the Met’s Susan Chun, who had been active in the building of AMICO, would become the leaders of new efforts to broaden access to museum content, including Images for Academic Publishing.6

One of the lessons learned in the building of AMICO was that the work of creating, preparing, and managing digital content was a costly and difficult undertaking for which museums were not prepared. AMICO itself was happening on the backs of hardworking staff, and few institutions were able to scale their contributions significantly. Museums that sought to make the necessary investments struggled to justify them on the basis of a hoped-for return. External subsidies – whether provided by Corbis or by individuals seeking to license images for whatever purpose – were most welcome in an environment in which computer programmers, new software systems, cataloging staff, and digital asset management expertise were suddenly required.

In the analog licensing days it was not clear whether there was net financial benefit to a museum after accounting for the staff costs of filling out forms, mailing out transparencies, and following up. In a digital age, that “what does this operation net” question requires an analysis of different investments, and administrations at most institutions are still working their way through the question of what they must do and why.

The Met had the advantage of having made significant strides in asking – and answering – these questions. They knew that being ready to “provide” was neither cheap nor easy. The Met’s Shyam Oberoi noted:

…our experience suggests that there are no shortcuts, no easy answers and no way to escape the fact that a DAMS [Digital Asset Management System] is a complex mechanism which, like any enterprise-level application, requires a significant amount of supervision and technical expertise and touches on a range of different information technology and management skill sets, including database administration, web application development, network administration, and storage and backup strategies. (Oberoi, 2008, p. 21)

Oberoi’s main point is that for a museum to manage its digital assets in a way that allows it to use those assets in a range of ways is far from easy or cheap. This infrastructure challenge (and the funding challenge associated with it) plagues all museums, but the Metropolitan was in an unusual position to invest in a solution and make it work. This capacity was being built because the staff at the Met felt it was crucially important to be able to support a dynamic website, including the popular “Timeline of Art.” “After five years,” Susan Chun recalled (in discussion with the author), “of shaping proposal after proposal to the board, they agreed that we need to make significant investments in imaging and asset management. Part of why they decided that it was time was the productivity of digital photography. Whereas a photographer used to shoot one image per day, digital was allowing him to shoot 40. We knew that we needed something – given the scale of the Met’s collections – to manage all of that content. And the trustees began to see that the managing of images was core to the museum’s intellectual property just like managing books that we published.”

To do this while being mindful of the concerns of internal constituencies, the infrastructure needed to be living and reactive (so that if a curator changed the date of a work, the website would reflect that change). And the Met now had such an infrastructure, lowering barriers to supporting a mission need identified with AMICO. The Met invested in digital infrastructure, not so that it could answer Nicoll’s call to action, but for its own reasons. Then, having made these very significant investments, it was in a position to support something else, something more socially-minded. This progression brings to mind Clay Shirky’s argument that social activity coalesces around a platform because of what it enables, even if that platform was built for another purpose. He argues that in order for tools to lead to successful collaborations, the tools “must help people do something they actually want to do.” (Shirky, 2008, p. 265). The Met wanted to manage and distribute its assets and had to invest on that basis; had it set out only to support academic publishing, it seems highly unlikely that it would have been in a position to do so.

4. What it Takes to Distribute Free Content

If the first step in re-engineering the content distribution process is finding a rationale for investing in content creation and internal management, a separate engineering – and rationale – is needed for the distribution process itself. In publishing, of course, this part of the process is the crux of the stress in the journal debates. The “packagers and distributors” are seen as charging irrational fees for selling universities’ own creative works back to them. The publishers, on the other hand, believe that the value they add to the process is unappreciated by those who believe that they are merely recycling authors’ content back to their own institutions and charging profiteering rates for doing so:

Ms. Wise said that it’s also a misconception that publishers like Elsevier make scientists pay to read their own work. “What publishers charge for is the distribution system. We identify emerging areas of research and support them by establishing journals. We pay editors who build a distinguished brand that is set apart from 27,000 other journals. We identify peer reviewers. And we invest a lot in infrastructure, the tags and metadata attached to each article that makes it discoverable by other researchers through search engines, and that links papers together through citations and subject matter. All of that has changed the way research is done today and makes it more efficient. That’s the added value that we bring.” (Fischman, 2012)

Publishers, as intermediaries, have their own motives for doing what they do. Some presses (university presses and scholarly societies) serve to further the missions of their affiliated institutions, but often they are seen as drains upon those missions when they require excess financial subsidy. Others, such as Elsevier, are for-profit firms for whom profit seeking may or may not be tempered (to lesser or greater degrees) by the need to maintain their market’s respect. Clearly, in the degree of frustration that has entered the Open Access debates, trust of the intermediaries has been called into question, leading to calls for radical re-working of the system. If the existing mission-driven solutions are seen as too financially weak and the existing market-driven solutions are seen as too assertive of their own financial interests, what model was driving Artstor’s role as an intermediary in the IAP process?

Historian of technology Josh Greenberg (2008) writes about the role of intermediary organizations in times of technological transition, focusing on the role of video stores in creating the videotape industry. In his investigation of the development of home video as a means of renting movies, he unpeels the role of video rental stores as a new kind of intermediary organization that could translate between the two sides (in the case of movies, between the film industry and the home rental consumer). He notes how these new entities have the motivation, fresh perspective, and varied skills to identify new applications of technology:

If there is a lesson to be learned from the story of the video store, it may simply be that mediation matters. On one hand, the history of a consumer technology is not simply a story of producers and consumers, but also of the varying levels of mediation that lie between the two. … [I]n order to understand how a specific configuration of the consumption junction comes to be, we must look beyond the consumer’s perspective to the actions of the mediators, who create a context for manufacturers’ products… (p. 158)7

But the work of mediating and connecting needs to be supported. In the realm of providing large format image files, museums can either do it themselves or – as the Metropolitan chose to do – ask someone else to do the work of fulfillment. As the Met’s Susan Chun recalls, “We needed someone who could deliver the high-resolution images, collect licensing data and not charge us for doing so. Artstor reached the audience we were trying to reach and was willing to provide the fulfillment service without any cost to the museum.” But the distribution service will face maintenance costs in a way that content building will not. As economist of higher education Bill Bowen (2013) has noted in a discussion of developments in new models of online education:

A major lesson from the earlier MIT OpenCourseWare (OCW) experience is that it can be much easier to create something like OCW, often with philanthropic support, than to find regular sources of revenue to pay the ongoing costs of maintaining and upgrading the system . . . . [W]e are told that the faculty and trustees of MIT are convinced that they cannot go down the same path again – their pride in OCW as a truly pioneering venture notwithstanding . . . . There is real danger in announcing that something is free without knowing who is going to pay the ongoing costs, which are all too real and cannot be ignored. (p. 60).

And yet, how can a free service (be it the provision of online courses, scholarly articles, or images to support academic publishing) be supported if the driving cultural mandate is to provide content at no charge? In publishing, the main model being assessed today is to tack the costs of delivery onto the researchers’ (or the researchers’ institutions’) tab that has already subsidized the creation of the research. In the “gold open access” model, the researcher either subsidizes an existing distribution service or self-publishes via an open channel (such as an institutional repository). Whether the self-publishing route can attract readers with the same efficacy (or potentially with greater efficacy) as the traditional distribution model remains to be determined. The upending of the distribution channel – which includes processes such as trust-building (wherein the reader prefers one channel to another due to a comfort level with the quality of material that channel provides) and marketing (whereby awareness is raised) – means either a new role for content creators and their repositories or a reliance on new players.

5. Do Motives Matter in the World of Free?

The provision of free content – meaning with limited access restrictions and without fees charged to users – happens in academic work in a number of ways. In some models, researchers pay for publication of their work (as with the Public Library of Science), with the funding coming from grant support, institutional support, or the researchers’ pockets.8 Sometimes institutions themselves pay. Sometimes free content is subsidized by profit-making enterprises like Google (which has digitized over 10 million books). And non-profit community-based efforts sometimes collaborate to react against the ownership role of outside entities (in the way that the University of Michigan and other partners in the Google book scanning project formed the HathiTrust as a repository for academia’s copy of the scanned material, or AMICO was formed to represent museum interests when Corbis sought to appropriate the museums’ role in overseeing the licensing of their own digital images).

When the Metropolitan Museum expanded its investment in creating and managing digital assets, it faced the question of how best to manage the external flow of content. Recognizing the need to manage both the two-way flow of content (the Met wanted to know who was licensing the content even if it was given away for free) and the fulfillment infrastructure to support the distribution, the Met decided that the burden at that point would be too high to take on directly. But many other free resources (including library special collection sites, cross-institutional collaborations like the Biodiversity Heritage Library, and course sites like MIT’s OpenCourseWare) are fostered by universities that pay both to create and to support their content.

What motivates these various models? These efforts are considered an investment of one kind or another for the institution. They can be thought of as promoting the resources that research institutions are charged with caring for, and they can also be seen – fairly – as marketing campaigns that lift the stature of a library or a university in the eyes of potential applicants, faculty, or funders. Universities distribute free online courses and libraries share special collections online both because they want to be known and appreciated – and because the institution deems those activities worth subsidizing in support of the institutional mission. As Chris Anderson (2009) notes, in discussing “the use of free as a marketing gimmick”:

I suspect that there isn’t an industry that doesn’t use this in one way or another, from free trials to free prizes inside. But most of that isn’t really free – it’s just a direct cross-subsidy of one sort or another (p. 131).

There is a healthy merging of self-promotion and public service that helps to justify the very significant investments that need to be made in the digitization of content. But, as noted above in the Bowen citation, creating and managing such content already entails a non-trivial burden; it is unlikely that many institutions beyond the wealthiest ones can also take on the distribution and marketing of the content that they produce.9

6. Free Lunches from Commercial Partners

As repositories, scholars, and those who bear the costs of academic subscriptions search for answers to the cost problems of promulgating academic content, commercial firms such as Google, Amazon, and Apple represent obvious possible partners. After all, their work in the commercial marketplace is not segregated from academic markets in the way that commercial conglomerates might have been in another era. Over the past 20 years, they have become everyday parts of academic work. These firms – and new start-ups like Coursera in the world of online courses – are driven by very sophisticated engineering and marketing talent and are eager to utilize university-created content. These firms live every day in markets in which free and fee-based services are being tested and deployed at scales and at economic stakes that dwarf academic transactions. Given their scale, technological abilities, and interest in unusual (and sometimes unique) content, it is understandable that these channels might be seen as the ideal – and knowledgeable – partners for the re-engineering of academic publishing needs.

And these firms bring very appreciable talent to the academic world. They also bring the capacity to offer employees financial incentives that non-profits can only dream about. And they bring, as Wired editor Chris Anderson notes, a set of new business models that enables them to do a lot of things for free in the name of supporting their core commercial enterprises:

“Today Google offers nearly a hundred products, from photo editing software to word processors and spreadsheets, and almost all of them are free of charge. Really free – no trick. It does it the way any modern digital company should: by handing out a lot of things to make money on a few.” (Anderson, 2009, p. 97).

In this model, Google has a plan to cross-subsidize its free activities (at least for the foreseeable future) in order to bolster the community that funds its AdSense revenues. Long-term customer acquisition serves the interest of shareholders, and those priorities drive the “free” services rather than a “mission.” These ventures need to make money somewhere, to return funds to their investors and shareholders. They may well have “change-the-world” intentions but they also have a legal responsibility to seek profits for their shareholders. When the free things that they do are in the service of promoting their commercial services, are they dependable partners?

Commentators have noted that reliance on commercial services as partners in the publication pipeline might not always be on the same terms as those to which non-profit institutions are accustomed:

If you want to know how people found your ebook, whether they went directly to the Amazon page or found it via a within-Amazon recommendation, or how many people looked at your ebook compared to how many bought it, well I’m afraid you’re out of luck. Amazon’s reports for sellers are restricted to basic sales and royalties numbers. No other data is made available.

This effectively means that there’s no way to compare the success of your promotional campaigns, or spot interesting routes of discovery. The stats you might be used to seeing for your web properties simply don’t exist for ebook sales. (Charman-Anderson, 2012)

Amazon and the other major players defining the web’s landscape that utilize “Free” as one of their strategies are inclined to share as little as possible, knowing that the data that they gather is of enormous – and flexible – value. What is useful tomorrow may not be obvious in the data gathering of today. See also a letter from Bob Meister (2013), the chair of the faculty council of the University of California institutions, to the president of Coursera:

Eventually, all students in my Coursera class will learn that data that they now provide to the company for free – perhaps so that it can grade them – will be the private property of Coursera, which can then sell it back to them in the form of “services,” which could include their own performance record but also different “views” comparing it with that of students at better universities, those with higher test scores and with advanced degrees. The possibilities for renting this information back to its students are endless, not to mention the added possibility of developing other markets for the user-assessment information that Coursera will “own.”10

Aggregators like Amazon or Google, for whom free is part of an interlocking set of business strategies, may regard the interests of other players in the ecosystem as old models ripe for disruption. As library blogger Peter Brantley notes, Amazon’s strategies for being the central node in a disrupted publishing world can come at the expense of the other players and, eventually, the system. Brantley (2010) reflects on how Amazon reserves the right to offer an author’s book at the lowest rate it is selling for elsewhere on the web:

While such a strategy makes short-term financial sense for me as an individual author, in the long term it severely restricts my opportunities to reach readers through other outlets, and it makes me dependent upon a single retailer. It is also detrimental for the broader ebook market because it generates a positive feedback loop that deepens Amazon’s share of self-published and low-priced ebooks. For anyone who believes that self-published ebooks will grow as a percentage of book industry sales, there should be concern that Amazon’s pricing policies will weaken retailers that are abandoned by authors seeking to avoid triggering Amazon’s pricing retaliation…. Amazon’s pricing policies are unfortunate for authors, and ultimately, for readers.

It is easy to understand why libraries and others aim their ire at Elsevier and other for-profit publishers; businesses whose models cross-subsidize free content are less obviously threatening to scholarship than those whose models seemingly drive journal prices ever higher. But since their subsidization of free is in the name of marketing their conglomerates’ other offerings, their alignment with scholarship may have limits – limits of which educational players should be conscious before building reliance upon them.

Non-profits have always been susceptible to the will of other players – such is the way of those who depend on the financial generosity of others. And so it is not a new risk for non-profits to be exposed to corporations’ strategies. In 2007, when Altria (parent company of Philip Morris) moved its corporate headquarters out of New York, its 20-year support for art exhibitions and dance companies also ended.

Jennifer P. Goodale, a former actress who is Altria’s vice president for contributions, said in an e-mail message: “It is unlikely that Altria will be funding arts organizations in New York in the future because, as far as we know, there will be no Altria corporate contributions program, which is a result of the decentralization we’ve been working on over the past few years.”

“It’s going to be hard for many of us,” said Patricia Cruz, executive director of the Harlem Stage, which received $125,000 last year from Altria. “I hope people look at this as a vacuum to be filled rather than leading an exodus.” (Martin, 2007, p. A14).

Marketing interests that may provide a rationale for a for-profit firm to support a non-profit’s work (either directly or indirectly) differ from mission interests in that they might not endure. Marketing is by no means a dirty word for non-profits. But non-profits’ own marketing undertakings are initiated by a mission, and this shapes how and why they do the things that they do, and how they treat their customers or users. As economist Burt Weisbrod (1998) notes:

Consumer willingness to pay results from both the consumer’s preference for a particular good or service and that consumer’s ability to pay. When two consumers have the same ability to pay, differences in willingness to pay reflect preferences; however, in general one cannot distinguish whether someone who is willing to pay a large sum is wealthy and relatively unconcerned with the cost or is of modest means and greatly desires the good or service. Non-profits sometimes care about the intensity of want or need, apart from wealth, whereas profit-maximizers always care about the willingness-to-pay composite. (p. 304)

This impulse – to respond to the intensity of user need while being mindful of ability to pay – is a responsive and mission-driven motive that drove the Met to initiate the IAP idea. They could have sought – even in the name of supporting their other charitable and educational undertakings – to charge everyone every dollar that they could. But they decided not to. In seeking a partner for the distribution of the content, they chose an intermediary for whom the cross-subsidization of the distribution process could be sponsored with the same mission-driven “care about the intensity of want” that Weisbrod notes.

Reliance upon commercial partners might come with strings attached. Even when those strings are subtle or not immediately impactful, non-profits should recognize that commercial partners have different agendas and need to be considered with that in mind. A non-profit partner has its motives too, and some of these may be self-promotional or self-aggrandizing. But there is also an alignment that may be found between the missions of non-profits. As I was reminded in the Bryn Mawr talk at the very beginning of Artstor, serving these communities is why we exist.

7. Sustainability

Funding agencies are torn. Enthusiastic about how digital technology opens up manifold possibilities for democratized access to knowledge, they want the broadest possible societal value for societally-sponsored investments. Yet they also want those services to be valuable and enduring; they know that open access needs to be dependable, or we will collectively look up and find that we have not suitably planned, jeopardizing not only “open” but also “access.” Non-profits need to have a better plan than hope. Blogger Joe Esposito, commenting on the greater effectiveness of decision-making and market awareness among for-profit publishers, urges non-profits to make – and non-profit boards to test and challenge – reality-based financial plans:

What we should be asking the Boards of NFP publishers is that they be held accountable for the financial success of their publishing entities. “Success” can be defined in different ways; I would include an acceptable level of subsidy in my definition, provided that the size of that subsidy and the means to pay for it are established in advance. (Esposito, 2013b).

Subsidy is a fine part of the solution, since (as we all know) universities, for example, cross-subsidize all the time, as when undergraduate fees help to subsidize graduate education. But there must be committed revenue sources to provide the support. Moreover, in entering the sphere of technology platform development and ongoing maintenance, non-profits cannot merely “throw together a website” if they want to provide a dependable service.

We can depend on commercial services to be charitable only insofar as it is in their interest to do so. It might be preferable to work with non-profits, so as to align interests and harness the voluntary impulses of mission-driven institutions. Shared missions can help build the new bridges, but they cannot run on the steam engine of good intentions alone. If the peril of reliance on commercial enterprises is found at the point at which their interests and the community’s diverge, the corresponding peril of depending on non-profit partners or collaborations comes when they cannot be depended upon.

Artstor as an organization supports the Images for Academic Publishing project both because we believe in it and because we (and our board of Trustees) believe that it is an appropriate use of funds generated by Artstor Digital Library subscriptions. But as IAP grows, it will require continued infrastructure investment. Making such investments to benefit both a range of contributing institutions that will not have to build their own distribution mechanisms and a wide range of scholarly users will, we hope, continue to make sense to our board and to the philanthropic community.

8. Lessons Learned

Setting up this new model to support scholarly communication might provide some answers or at least suggest some approaches as the larger world of scholarly publishing redefines its processes:

  1. Re-calibrating ecosystems works best when a broad set of community players is involved: finding, defining, and communicating the potential benefits for all involved can forge a balanced solution.
  2. Disaggregating audiences (such as supporting authors of scholarly monographs but not “coffee table” books) helps both to define the audience’s needs and to articulate who is and who is not threatened by the change.
  3. Distributing shared and scalable costs reduces individual investments in infrastructure.
  4. Cross-subsidy of mission-driven efforts requires both the funds to make it possible and a thoroughly legitimate justification of why the project is worth subsidizing.
  5. It helps to piggy-back on investments that need to be made anyway. The Metropolitan was only ready to do IAP when it realized it needed digital asset management infrastructure lest it lose the digital images that it needed for varied purposes.
  6. Intermediaries might be needed to bridge across constituencies who do not know or trust each other and to forge the needed solutions.
  7. Commercial partners are dependable only insofar as it is in their for-profit interest.
  8. And yet commercial partners are attractive because of their capacity. If non-profits want to provide alternative, mission-focused solutions, they, too, must invest capital in a long-term vision. Good intentions are not sufficient for dependable service.