Collection Management Strategies for a New Research University Library

This post is the first of three installments of an interview with Jim Dooley, Head, Collection Services at the University of California, Merced on June 13, 2012. The interview was conducted by Matt Connor, Instruction/Reference Librarian at the University of California, Davis as research for a book he wrote that was recently published by ALA Editions: The New University Library: Four Case Studies. The remaining two installments will cover print and electronic books and Interlibrary Loan.

Founded in 2002, the University of California, Merced (UC Merced) is the first new University of California campus in 40 years and the first American research university of the 21st century. Since opening in 2005, UC Merced has grown in enrollment to over 6,200 students, including more than 350 graduate students.

Jim Dooley has been the Head, Collection Services at UC Merced since 2003. Prior to coming to UC Merced, he served in various capacities in the J. Willard Marriott Library at the University of Utah for ten years. He has published in Against the Grain and presented at the Charleston Conference for many years. In 2012 he was elected chair of the ALCTS Acquisitions Section and currently serves on the ALCTS Budget and Finance Committee.

Matt: We could start with you giving me an overview of the innovations in the collection development practices here at UC Merced. Maybe that’s a place to start.

Jim: Well, this is an interesting question, and part of it is simply historical because of the way we developed. When I got to UC Merced in 2003, which was two years before the campus opened, and began to engage [University Librarian] Bruce Miller with collection building questions, there were a couple of things we determined pretty early on. We both decided that we’d already reached a tipping point in journals, that it did not make sense for us to be collecting print serials. And so we made a very conscious decision that we were only going to collect electronic serials except in those cases, which we thought would become rarer and rarer as time went on, where we had a specific faculty request and the journal was only available in print. We would not refuse requests. So, that’s one bedrock thing we started out with. The result is that at this point we have access to over 60,000 online journals. We have literally a dozen print journals. And if any of those dozen go online, I will cancel the print immediately. So, that was one decision we made, and the faculty have accepted it. We have not had complaints from faculty that we don’t have miles and miles of stacks of bound journals.

Matt: This would seem to reverse the paradigm of the high-level researcher browsing through their print journal. Without having the journals in front of them, they don’t seem to miss it.

Jim: They don’t. Or at least they haven’t. It tends to be somewhat discipline-specific, in that I think all of our dozen print journals are in the humanities. None of our science faculty are interested in print at all, as far as we can determine. The requests for print journals have come exclusively from humanities faculty. And we’re starting to see the same sort of division in terms of book requests. I do want to say at the very beginning that none of us set out with the goal of having an all-electronic library. We are simply responding to the direction scholarly communication is taking. And I see us becoming more and more electronic in all ways.

Matt: Getting back to numbers. I guess for the electronic journal collection, a certain amount is taken up with the Tier 1 subscriptions that are general for all the UCs, and then a certain amount for the Tier 2 subscriptions which are specific for subsets of campuses. So, what fraction of the electronic collection is Merced specific where it is ordered outside the packages?

Jim: As a matter of policy, we have participated from the beginning in financially supporting the acquisition of all electronic resources centrally licensed for all ten UC campuses—what we call Tier 1 resources—even if we had no immediate need for the content. We’ve done this to help build a unified UC digital collection. We also participate in acquiring electronic resources licensed by a subset of UC campuses that directly support teaching and research at UC Merced—so-called Tier 2 resources. In terms of locally licensed journals, I can give you a couple of ballpark numbers. As I said, we have access to over 60,000 centrally acquired electronic journals. Those are both licensed and free. The freely available are primarily journals that are listed in the Directory of Open Access Journals. Both licensed and freely available journals are catalogued by the UC Shared Cataloging Program that distributes bibliographic records to all the UC campus libraries. We have local subscriptions to probably about 130 electronic journals that we have subscribed to because they’re not available through a system-wide license for a variety of reasons and because faculty request them. So, we’ve gone from 60,000 to 130. In general terms, I will always favor being part of a systemwide subscription license to journals rather than going it alone. We pay less than list price which is what I have to pay if I go it alone. Also, if it is licensed centrally by the California Digital Library (CDL), then access is managed centrally by CDL instead of my tracking down the publisher.

Matt: That would be a significant hassle.

Jim: And I don’t have the staff to do that. So, it makes it much more useful all the way around to be in on a systemwide license. From the perspective of UC Merced as a start up with a very small staff, for anything that CDL will do for me I’m very appreciative and thankful.

Matt: Maybe this would be a good time to talk about issues of packaging. It seems like we’re getting savings with these large packages of journals that are put out by vendors. But on the other hand, we end up paying for a certain amount of content that we don’t really need. And on a finer level, this seems to affect smaller schools more than larger ones: the larger schools have such huge audiences that they can find uses for everything that might be included in a package, but a smaller place might have more specific needs, so more of the content ends up being unused. Have you seen anything like that?

Jim: This is an issue that the University of California Collection Development Committee (CDC) has been grappling with for several years. I would say it really hit us from about the time the university began to experience severe budget cuts a couple of years ago. There are a wide variety of reasons why packages make sense. There’s a really low per-unit cost if you factor the whole thing out. There’s one license, not ten. So, from the publisher’s perspective there are economies of scale. From the campuses’ perspective, it saves work. The fact that we have the Shared Cataloging Program, which provides MARC records for Tier 1s [electronic resources acquired by all ten UC campuses] and Tier 2s [more specialized electronic resources acquired by a subset of UC campuses], means there are significant economies of scale for technical services. So, there are all of these reasons why packages are good. Where we get caught between a rock and a hard place is when we simply do not have the money to pay for the package. Then the pressures are really strong either to break the package and license individually only those journals for which there is a specific need, or else to try to negotiate the price down. The publisher response in many cases, as we have seen over the last several years, is to then cut a much higher proportion of the content out of the package in order to achieve a relatively small price decrease. In this case, their strategy seems to be to make it so painful for the library that the libraries will simply give up and pay.

The UC libraries over the last several years have been very hard-nosed. We have broken one package completely, and we have accepted significant percentage cuts to other packages in order to achieve some level of cost containment. So, I can understand on the individual campus level why a selector librarian would say that we are paying for unnecessary or unwanted or unused content. I think if you look at the larger picture, there are some reasons for subscribing to packages that are still valid.

 


Call for Papers: Collection Management & Development Research Forum

Call for Papers
The Fifth Annual Collection Management & Development Research Forum
ALA Annual 2014

The Publications Committee of the Collection Management Section of ALCTS is sponsoring the Fifth Annual Collection Management & Development Research Forum (formerly known as the  Emerging Research in Collection Management & Development Forum) at the 2014 American Library Association Annual Conference in Las Vegas.

This is an opportunity to present and discuss your research.  Both completed research and research in progress will be considered.  All researchers, including collection practitioners from all types of libraries, library school faculty and students, and other interested individuals, are encouraged to submit a proposal.

The Committee will use a “blind review” process to select two projects.  The selected researchers are required to present their papers in person at the forum.  Each researcher should plan for a 25-30 minute presentation, with a 10-15 minute open discussion following each presentation.    Criteria for selection are:
Significance of the study for improving collection management and development practices
Potential for research to fill a gap in collections scholarship or to build on previous studies
Quality and creativity of the methodology
Previously published research or research accepted for publication prior to January 15, 2014, will not be accepted.

The submission must consist of no more than two pages.  On the first page, please list your name(s), title(s), institutional affiliation(s), and contact information (including your mailing address, telephone number, fax number, and email address).  The second page should be a one-page proposal, and it should NOT show your name or any personal information.  Instead, it must include only:
The title of your project
A clear statement of the research problem
A description of the research methodology used
Results of the project, if any
The deadline for proposals is January 15, 2014.

Notification of acceptance will be made by February 15, 2014.

ALCTS, in its bylaws, claims the right of first refusal for publication of any work emanating from an ALCTS body or program.


Researcher Networking and Profile Systems: Library Collections and Liaison Opportunities

For those attending ALA Midwinter, we hope you can join us for a  scholarly communication session co-sponsored by ALCTS Collection Management Section and ACRL Science & Technology Section.

Title: Researcher Networking and Profile Systems: Library Collections and Liaison Opportunities
Date:  Sunday, January 26, 2014
Time: 4:30pm – 5:30pm

Location:  Pennsylvania Convention Center – Room 121 B

Abstract: Researcher networking and profile systems such as VIVO, Symplectic Elements, Elsevier’s SciVal Experts, and Harvard Catalyst Profiles present interesting opportunities for libraries as they continue to address the evolving information needs of their constituents. Such systems might offer librarians and libraries opportunities for extensive new engagement with campus research environments, including increased participation in team-based research projects and further development of born-digital collections of scholarly materials through the leveraging of existing library collections and campus academic support infrastructures. Speakers will discuss their experience working with (and in some cases developing) profile systems at their institutions, addressing library-related benefits and challenges associated with their implementation.

Speakers:

Paolo Mangiafico, Coordinator of Scholarly Communication Technologies, Duke University Libraries

In a former role as Director of Digital Information Strategy in the Office of the Provost at Duke, he co-chaired the Provost-appointed Digital Futures Task Force, which developed an open access policy for Duke faculty scholarship (adopted by the Duke Academic Council in 2010) and a set of recommendations for developing better infrastructure and support for management, publication, and archiving of research data. He is now working with librarians, technologists, and faculty to implement these, and serves on both management and implementation teams of the Library’s open access and digital repository projects and the University’s VIVO-based faculty data system. Paolo has been a fellow in the John Hope Franklin Humanities Institute at Duke, led an early digital library project called The Digital Scriptorium and Duke Libraries’ Web Services and Research & Development groups, and has served as a consultant for universities, university presses, and government agencies, as well as a lecturer in information science. He recently completed a term as a member of the Board of Trustees of the Durham County Library system. His current work focuses on how new technologies can be adapted to further the knowledge-sharing mission of research universities, and the intersection between social, economic, and technical systems.

Griffin M Weber, MD, PhD

Dr. Griffin Weber is an Assistant Professor of Medicine and the Chief Technology Officer of Harvard Medical School and Director of the Biomedical Research Informatics Core at Beth Israel Deaconess Medical Center in Boston. His research is in the area of expertise mining and social network analysis. He invented Harvard Catalyst Profiles, which is an open source website that creates research profiles for an institution’s faculty, and links these together through both Passive Networks, which are automatically generated based on information known about investigators, and Active Networks, which users themselves create by indicating their relationships to other researchers. These networks have numerous applications, ranging from finding individual collaborators and mentors to understanding the dynamics of an entire research community. Dr. Weber is also an investigator on Informatics for Integrating Biology and the Bedside (i2b2), an NIH National Center for Biomedical Computing, for which he developed a web-based open source platform that enables query and analysis of large clinical repositories. Dr. Weber received an MD degree and a PhD in computer science from Harvard University and has worked on numerous biomedical informatics projects, such as analyzing DNA microarrays, modeling the growth of breast cancer tumors, developing algorithms to predict life expectancy, and building a medical education web portal.

Steve Adams, Life Sciences Librarian, Northwestern University

Steven M. Adams is currently the Life Sciences Librarian at Northwestern University (NU). In this position, he is responsible for collection development, instruction, outreach, and reference for several departments in the biological, behavioral, and environmental sciences. He is currently coordinating the reference management training workshops for Northwestern, leading an initiative to promote cooperative collection development, and working on several initiatives related to instruction and outreach. His current research interests include developing new roles for science librarians, modernizing outreach and instructional services in academic libraries, scholarly communication, and research networking tools. Previously, Steven was the Biological and Life Sciences Librarian (2003-2011) and Interim Psychology Librarian (2007-2011) at Princeton University. His Princeton projects included developing Princeton’s implementation of the LibX toolbar, starting Princeton’s first library blog for departmental outreach, and designing and executing several successful curriculum-integrated instruction initiatives. Steven received a B.A. in Biology in 1998 and an M.L.S. in 2000 from Clark Atlanta University, and a certificate in Instructional Design from Langevin Learning Services.


#ICanHazPDF vs. #ICanHazLibrary: Where Librarians Need to Rise to the Occasion

At the end of last week, a blog post by Alex Bond, a post-doctoral fellow at the University of Saskatchewan, on the blog The Lab and Field caught quite a bit of attention on Twitter. The blog post is entitled “How #icanhazpdf can hurt our academic libraries.” The post describes a Twitter stream, based on this hashtagged phrase, where researchers, academics, students, and other interested parties ask one another to share PDFs of articles to which the originator of the post does not have access but hopes to gain free access via someone who does. Librarians jumped to attention and re-tweeted the blog post over the next couple of days, pointing out that Interlibrary Loan (ILL) through libraries can also supply PDFs of articles for free. Some researchers countered that ILL takes too long (still up to two weeks at many institutions) and is still delivered at many institutions in print form, as opposed to sending an electronic file to the requestor. In some cases, researchers wrote that they were invited to come into the library to find the content they needed but couldn’t, either because of time constraints placed on their research or because they wanted to access content when libraries were closed. For a sense of the growing use and development of #icanhazpdf, I recommend the blog post by Jean Lui at AltMetrics entitled: Interactions: the Numbers behind #IcanHazPDF.

In the information-glut society in which we now reside, librarians have opportunities to expand services and provide more content at the point of need. First, we need to ensure that all open access content is made readily available through our discovery systems: include all OA articles, and do not exclude resources from our library catalogs/discovery systems simply because they do not represent a complete journal issue or journal run. There are times when access does trump ownership, and open access content is one situation we can readily remedy by providing broader access through our current systems. Second, we need to explore how to provide context-sensitive content delivery when possible and make it happen. This means document delivery. Build into your service and/or collection budget a fund for rapid delivery of articles to the desktops of faculty and graduate students. It has to be article delivery that is useful to the end-user; it cannot be a PDF that can only be used for a short period of time or cannot be printed. It has to be full-on article delivery to the end-user. There are service providers, some from within our very own community, who can make this happen. This type of service may, in fact, be more important than the addition of more journals or journal packages. Lastly, establish a 24-hour delivery option for print resources by offering to scan content that resides in your stacks or storage facility and deliver it to the desktops of faculty and graduate students. It is worth the staff time and resource commitment, as this type of service will be greatly lauded by faculty and researchers on campus.

Librarians cannot fully compete with Twitter but we can improve both access and availability to the content we’ve collected and the content our end-users demand.

 


Does the Buck Stop Here?

ALA conferences always seed something valuable for me.  Midwinter was no exception, with the ALCTS CMS Forum, “Scholarly Communication and Collections:  From Crisis to Creative Response”, yielding interesting questions about library investment in and the cost of open access.  As librarians, we seem to agree that open access is a good thing and a model we should advocate.  There is less agreement about our financial role in transforming the scholarly communication system.  Below are some of the questions that have preoccupied me of late…and a plea for further conversation.

When are we going to see costs decrease because of open access?

This question was raised at the forum and is filled with expectation.  I don’t regard OA as a means for reducing cost.  I think it is a more productive and efficient model for scholarly and scientific exchange and that library investment in OA publishing is a difference in kind from money spent on fee-based access models.  SCOAP3 is an excellent example of this thinking—the consortium members are focused on re-directing, not reducing, their expenditures.  To paraphrase Kevin Smith, I believe that to realize significant change, libraries must be willing to apply their collection budgets to open access.

But is it a matter of will or capacity?

My library is a member of Public Library of Science, Hindawi, and Biomed Central.  Our support – drawn from our collection budget – allows OHSU authors to publish in open access journals at a reduced cost.  The money we spend on subscription journals dwarfs this investment.  Our subscription decisions are driven by faculty requests, publication and citation activity, and cost per use data.  There’s not a lot of fat.  Additional investment in OA would require difficult decisions about cuts to other resources that our patrons want and use.  Initiatives like SCOAP3 offer strategic models to emulate, but we’re still wanting for everyday, practical tactics that meet my institution’s content needs.

Why does it matter where the money comes from?

My friend Jill Emery and I have been talking about this as we prepare for our ACRL program on hybrid open access models.  I think there is something to be said for libraries contributing to the cost of open access besides the investment in building a better scholarly communication system.  Faculty value highly the library’s role as a buyer (see the 2009 ITHAKA S+R Faculty Survey).  Open access is not free.  By supporting OA financially – ideally in partnership with other stakeholders – libraries can preserve an evolved but familiar connection to the scholarly record.  Moreover, open content still requires management and curation, library expertise that itself represents significant costs.

What are your answers to these questions? 

Mine are obviously still evolving.  So this is a call to continue the conversation we started at Midwinter within and across our institutions.  Ultimately, our conclusions will guide how we work to transform the scholarly communication system.

 

 


Textbooks on Reserve in Academic Libraries

Library Philosophy and Practice, an online journal published by the University of Nebraska, recently published an article of mine. It details a program we tried at our campus in 2010, in which the library purchased or obtained a copy of every required text and put it on reserve.  Here is the abstract:

In the fall of 2010, a grant of $36,000 allowed Portland Community College Library to purchase and place on reserve a copy of every required text at one of its campuses. A smaller college “center” also placed all required texts on reserve. The program was very popular with students and parts of the reserve collection received heavy use. Compared to the previous fall term, overall use of reserves at the Cascade Campus library rose 35%, and the Southeast Center collection saw an increase of 110%. However, use of the collection was unevenly distributed, with 26% of the books having more than 11 uses that quarter, but a troubling number (37%) receiving no checkouts at all. An analysis of the data suggests several ways that books with 11 or more uses per quarter could be increased to over 70%. These are to purchase and process books in a timely manner, to adjust loan periods for some items, or to purchase texts only for courses with multiple sections. Use numbers compiled over the following 8 quarters show that textbooks purchased and placed on reserve will be used for several successive terms.

If you want to read the entire article, here is the link:

“All Textbooks in the Library: An Experiment with Library Reserves.”

http://digitalcommons.unl.edu/libphilprac/838/

Tony Greiner


Ebook attitudes and Purchase on Demand

Last week I read a recent article in Collection Building on user attitudes towards ebooks at the Colorado State University Library, as well as ebook use.

Merinda McLure and Amy Hoseth (2012), “Patron-driven e-book use and users’ e-book perceptions: a snapshot,” Collection Building, Vol. 31, Iss. 4, pp. 136–147.

The study was done from May to December 2010 and consisted of a survey of ebook user attitudes and a check on ebook use.

The highlights:

Readers who used an ebook were divided roughly into thirds among those who prefer ebooks, those who prefer print, and those who don’t care. Half of the users had never used an ebook before borrowing one from the Colorado State University Library.

Most readers used an ebook for an assignment.

Because the library launched a ‘Purchase on Demand’ ebook program during the study, the number of ebook titles available rose from 4,475 to 7,942 during the 7 months of the study.

Of the entire ebook collection, 11.6% of titles were ‘browsed,’ meaning looked at for less than five minutes, and 7.7% were used for more than five minutes (a total of 1,533 unique titles browsed or used over the seven months). Thus, 19.3% of the titles were used at least once in the seven months, although the number really studied (if you call more than five minutes really studying) is not even 8%. There is the chance that relevant chapters were quickly printed and read on paper, but still, use was fairly low.
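As a quick sanity check (assuming, as the figures imply, that the percentages are measured against the end-of-study collection of 7,942 titles), the reported numbers are internally consistent:

```python
# Sanity check of the reported e-book usage figures.
# Assumption: the percentages are measured against the end-of-study
# collection size of 7,942 titles (implied, not stated, by the article).
collection_size = 7942   # ebook titles available at the end of the study
browsed_pct = 11.6       # looked at for less than five minutes
used_pct = 7.7           # used for more than five minutes

total_pct = browsed_pct + used_pct
titles_used = round(collection_size * total_pct / 100)

print(f"{total_pct:.1f}% of titles saw some use")  # prints 19.3%
print(f"about {titles_used} unique titles")        # prints about 1533 unique titles
```

Rounded, 19.3% of 7,942 titles is 1,533, which matches the unique-title count the article reports.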

Although there wasn’t a report specifically on the use of titles acquired after launching the Purchase on Demand (Patron Driven Acquisitions) program, the library acquired 3,467 titles via Purchase on Demand, and only 1,533 ebooks in the whole collection received use, so it seems safe to say that ebooks purchased under Patron Driven Acquisitions are not necessarily being used! It makes me wonder why they were requested. I have a hand in the patron-driven acquisition of print books at Portland Community College, and while we occasionally purchase books that the patron doesn’t bother to pick up from the holds shelf, that is the exception rather than the rule. I wonder if the Colorado State patrons were given a choice between an ebook or a print book when they put in their request.


Tony Greiner


Weeding, Part 2

The right way to weed: Stanley Slote’s “Shelf Time Method.”

Stanley Slote devoted much of his career to studying library weeding, and methods of doing so. His book Weeding Library Collections (the 4th edition came out in 1997) summarizes these nicely. Slote’s own research showed that the amount of time since an item was last used is the best indicator of whether it will ever be used again. I’ll say that another way: The longer it has been since an item was checked out, the more likely it is that it will never be checked out again.

Slote also discovered, in several studies, that after removing books that had not been used for a while (he gives several ways of judging what that time period should be, depending on the space available on your shelves and other factors), circulation went up! This is the root of the truism that ‘Weeding will increase circulation.’ The full sentence should be: “Weeding by the shelf-time method will increase circulation.”

How to get a list of books that haven’t been checked out in a while. (We are talking about circulating collections here.)

With the data in computerized Integrated Library Systems, it is usually easy to run a list of items that have been in the collection at least X years (I suggest starting with five) and have not circulated for the last Y years (again, I suggest starting with five).

So, you ask your ILS:

Give me a list of books that have been in the collection at least FIVE years, and which have not circulated for at least FIVE years.

(So a book that was in the collection for seven years but hasn’t been used after the first two will show up on the list. A book that has been in the collection for four years won’t.)

Get the title, call number, location and status of the items.
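In an ILS that lets you script or export reports, the shelf-time query above can be sketched in code. This is a hypothetical sketch: the field names (`date_added`, `last_checkout`) and sample records are invented for illustration, and any real ILS will have its own schema and reporting tools:

```python
from datetime import date, timedelta

# Hypothetical item records; real field names will vary by ILS.
items = [
    {"title": "Old Unused Book", "date_added": date(2005, 1, 1), "last_checkout": date(2006, 3, 1)},
    {"title": "Old Popular Book", "date_added": date(2005, 1, 1), "last_checkout": date(2013, 5, 1)},
    {"title": "New Book", "date_added": date(2012, 9, 1), "last_checkout": None},
]

def weeding_candidates(items, today, in_collection_years=5, no_circ_years=5):
    """Return items added at least `in_collection_years` ago that have not
    circulated in the last `no_circ_years` (including never-circulated items)."""
    added_cutoff = today - timedelta(days=365 * in_collection_years)
    circ_cutoff = today - timedelta(days=365 * no_circ_years)
    return [
        item for item in items
        if item["date_added"] <= added_cutoff
        and (item["last_checkout"] is None or item["last_checkout"] <= circ_cutoff)
    ]

for item in weeding_candidates(items, today=date(2013, 6, 1)):
    print(item["title"])   # prints only "Old Unused Book"
```

The same two cutoffs translate directly into a saved report or SQL query in most systems; the key point, per Slote, is that both thresholds are tunable to your shelf space and your patrons’ use patterns.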

Slote says to just send a circulation worker into the stacks to get the items on this list and withdraw them. I’m not crazy about that, partially because of local authors and history, and other things that your library just should have.

Instead, send your lowest-paid COMPETENT circulation worker into the stacks to tip down the books on the list. (Like circulation workers in training do: they take a cart of books to the shelf and put them on the shelf tipped down on the fore-edge. The trainer then goes by and sees if the books were shelved correctly.) For books that are not where the catalog thinks they are, have the worker write ‘NOS’ (not on shelf) on the list. Technical Services can then change those to MISSING status, or just withdraw the item, whichever applies best.

In this case, the librarian in charge of the weeding (or that section of the collection) goes by with a cart, looking ONLY at the tipped books. If the book isn’t worth keeping, it goes on the cart. If it should be kept, it is just returned upright. If it REALLY should be kept, the librarian takes it to the circulation desk, checks it out and then checks it in again. That way the book is ‘safe’ for another five years.

Set aside a section of the shelf for the tipped books that should be repaired/reordered/moved to another part of the collection. But mostly, you just fill those carts. This goes FAST! And even if you miss a few books on condition, etc., you are still improving the collection, and doing it efficiently.

Naturally, academic libraries can use this method to place an item into storage, rather than weeding.

I think the ‘shelf time’ method is better than C.R.E.W. because it reflects how your patrons at your library use the collection, and it is faster. Certainly, if you spot a hopelessly out-of-date computer title, or one in filthy condition, weed it as well, but the ‘shelf time’ method eliminates the tedious title-by-title weeding process.

Tony Greiner


Weeding, Part 1

‘Weeding will increase circulation.’

Really? Any weeding at all? Of course, you need a method to weed properly. The problem is, too many of us were never taught the right method.

Probably the most common method is C.R.E.W. (Continuous Review, Evaluation, and Weeding). It was developed by the Texas State Library and gives guidelines for how long to keep materials in various disciplines. Here is a link to a PDF, last updated in 2008:

https://www.tsl.state.tx.us/ld/pubs/crew/index.html

There are worse ways to weed, but I’m not entirely happy with it. My gripes with CREW are two:

1. It relies on someone else’s idea of what your collection should be like, not what your library’s readers want.
2. There has never (to my knowledge) been any sort of study to show if weeding by the CREW method makes for a better, more responsive collection. It ‘sounds right’, but we don’t know if it is right.

There is a companion method to C.R.E.W., the acronym ‘MUSTIE’: Misleading; Ugly; Superseded (new editions have come out); Trivial; Irrelevant; Elsewhere (may easily be obtained elsewhere).

These are cute, but I’m uneasy about UGLY and ELSEWHERE. I see people reading all sorts of ugly books. I think ugly bothers librarians more than it does readers. I especially dislike weeding an item that continues to get use simply because of its condition. If the reader doesn’t mind the condition, why should the librarian?

And Elsewhere? You gonna pitch your copies of “50 Shades of Grey” because it is readily available? What about LOCKSS? (Lots of Copies Keep Stuff Safe.)

Next time: If CREW ain’t right, what is?