allow us to get some sort of preliminary yet uniform physical and intellectual control over our collections -- in effect a massive shelfread -- (which would in turn allow us to impress funders with our thorough knowledge of our collections and their needs). By assigning numerical ratings to each collection during the survey, we would be able to quantify our need, and by entering the survey data into a relational database -- we used Microsoft Access -- we would be able to develop formulas and query the data in ways that would allow us to assess the results with some sophistication.
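To give a flavor of the kind of query this made possible, here is a minimal sketch in Python, with SQLite standing in for the Access database we actually used; the table and column names are hypothetical, and the ratings themselves are explained below:

    import sqlite3

    # Hypothetical stand-in for the survey database (the real one was
    # Microsoft Access). Each collection row carries the numerical
    # ratings assigned during the survey, described below.
    conn = sqlite3.connect("survey.db")

    # Flag collections in poor condition that researchers also have
    # poor intellectual access to -- likely candidates for attention.
    rows = conn.execute(
        """
        SELECT title, condition_rating, intellectual_access_rating
        FROM collections
        WHERE condition_rating <= 2 AND intellectual_access_rating <= 2
        ORDER BY condition_rating, intellectual_access_rating
        """
    ).fetchall()

    for title, condition, access in rows:
        print(f"{title}: condition={condition}, access={access}")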
The survey and assessment proposal was funded, and I was hired as Assistant Project Director in June of 2000. I spent much of the first two months hiring a team of six surveyors, creating the database, and tediously entering our printed guide to manuscripts, which I found on a floppy disk, into the database to serve as the survey's backbone. Then I took the laptop into the 4th floor vault (one of 10 stack areas for archival
collections) to blaze a trail for the surveyors, so that when the time came they could focus on
surveying. Armed with the card locator file, I began to shelfread, literally identifying the beginning and end of each collection, comparing what the database and the shelflist had, for main entry and extent especially, and entering locations into the database. If a collection was not in the printed guide, I
needed to create a basic record for it. In some cases I would flag issues I wanted the surveyors to try to
resolve when they took a more in-depth look at the collection.
Ok, the ratings. The survey model rates the condition, intellectual access, physical access, and research value of each collection, using a scale of 1 (bad) to 5 (fabulous). Condition refers to the materials themselves, intellectual access to the ability of researchers to find out about the collection, and physical access to the arrangement of the collection. The Research Value Rating (RVR) is made up of two separate ratings, the Interest and Documentation Quality Ratings, which were both rated on the 1 to 5 scale, giving the RVR a range from a minimum of 2 to a maximum of 10. A rating of 7 or greater gave a collection priority status. The Interest Rating looked at the various topics that were documented in the collection, and the Documentation Quality Rating measured the depth and breadth of the representation of those topics. The Documentation Quality Rating was also intended to take the relative rarity of the material into account.
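In code, the arithmetic looks like this (a minimal sketch; the function names are illustrative only, not part of any actual survey tool):

    # Minimal sketch of the RVR arithmetic; names are illustrative only.
    def research_value_rating(interest: int, doc_quality: int) -> int:
        """Sum the two 1-5 component ratings, yielding an RVR of 2-10."""
        for rating in (interest, doc_quality):
            if not 1 <= rating <= 5:
                raise ValueError("ratings run from 1 (bad) to 5 (fabulous)")
        return interest + doc_quality

    def has_priority_status(rvr: int) -> bool:
        """A collection with an RVR of 7 or greater gets priority status."""
        return rvr >= 7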
Before the survey team started in early August, I worked with the Project Director, Kristen Froehlich (who was also HSP's art and artifacts curator), to develop training for the team. Much of this focused on the research value rating -- particularly the Interest Rating -- and how we would ourselves learn, and in turn teach others, not only to get as full a grasp as possible of current research trends but also to forecast future ones.
We exploited local talent, discussing use of the collections and current scholarly trends with HSP's public services and publications staff. While we had initially thought use statistics might factor into the Interest Rating, we quickly realized there were some issues with that. Although Max Moeller, the current Director of Reader Services, collects and compiles awesome statistics relating to use of the collections, the number of patrons in the library every hour, and how many bathroom keys the reference staff has handed out, at the time information about collection use existed solely in raw form: boxes and boxes of call slips and rights and reproductions requests. In addition to needing aggregation, we knew that the call slips did not necessarily represent "successful" searches for the users. We were also wary about factoring use into the RVR because it would privilege collections to which users had good or decent intellectual access over those that researchers were not using, either because the collections were not accessible at all or because researchers had no way of knowing they included materials relevant to their research. Although we did not make use an official criterion for the Interest Rating, we did trawl America: History and Life to see which HSP collections had been cited in the past 25 years, and we heard anecdotally from public services staff which collections were most frequently used.
We also examined the current scholarly climate by scanning the tables of contents of scholarly journals, reading articles about historiographical trends, and reviewing the lists of fellowships given in the last decade by HSP as well as by our neighbors the Library Company of Philadelphia, the American Philosophical Society, and the McNeil Center for Early American Studies at the University of Pennsylvania. We talked about the research process, the different types of records, and the various ways they might be used. We identified genealogists, who use our archival collections almost as much as scholars do, as another potential audience to keep in mind when assessing the RVR of a collection: even if a collection was not a genealogical collection, or even a family papers collection, could it be of use to a family historian?
The survey team we hired had diverse backgrounds in various types of American history, public history, and art history. They were young and... less young. Some had BAs from excellent liberal arts colleges like Bryn Mawr and Haverford; some had master's degrees in library science or history. All