Bibliography

Siemens, R. (2011). Subsidium: Master List of REKn Primary Sources. Digital Studies / Le champ numérique, 2.

This article provides a comprehensive list of the REKn primary sources as of 2011. The list is a subsidium (supplement) to the article "Prototyping the Renaissance English Knowledgebase (REKn) and Professional Reading Environment (PReE), Past, Present, and Future Concerns: A Digital Humanities Project Narrative."

Halbert, M. (2003). The Metascholar Initiative: AmericanSouth.Org and MetaArchive.Org. Library Hi Tech, 21, 182–198.

This article reviews the "accomplishments and findings to date" of the MetaScholar Initiative (comprising AmericanSouth.Org and MetaArchive.Org). The MetaScholar Initiative is a "two-year endeavor" aimed at fostering the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). The MetaArchive is a demonstration project that arose from the belief that "services for researchers must do more than simply aggregate metadata"; it was created to demonstrate the feasibility of integrating "contextual materials, and other sorts of information that add value to the basic functions of metadata aggregation and search." To date, the projects have developed a central harvesting and indexing system, built a metadata aggregation network, undertaken aggregation activities, designed two working portals to facilitate online communities, and selected scholarly resources. In facing various challenges, the projects have discovered that collaboration is difficult, that more training and education on OAI-PMH is needed, and that quality control of metadata is a particular challenge. The next steps for the projects are to further develop the infrastructure and refine the portals.
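
As a rough illustration of the harvesting model at the centre of the MetaScholar Initiative, the sketch below issues a single OAI-PMH ListRecords request and pulls out Dublin Core titles. The endpoint URL is a hypothetical placeholder; only the protocol parameters (verb, metadataPrefix) and the standard OAI and Dublin Core namespaces come from the OAI-PMH specification, not from the article itself.

    # Minimal OAI-PMH harvesting sketch (illustrative only; not the
    # MetaScholar Initiative's actual harvester).
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://example.org/oai"  # hypothetical OAI-PMH endpoint
    OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
    DC_NS = "{http://purl.org/dc/elements/1.1/}"

    def harvest_titles(base_url=BASE_URL):
        """Issue a ListRecords request and yield Dublin Core titles."""
        url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        for record in tree.iter(OAI_NS + "record"):
            for title in record.iter(DC_NS + "title"):
                yield title.text

    for title in harvest_titles():
        print(title)

A real aggregator would also follow resumption tokens and normalize the harvested metadata before indexing, which is where the quality-control problems noted above tend to surface.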

Connaway, L. S., Dickey, T. J., & Radford, M. L. (2011). “If it is too inconvenient I'm not going after it”: Convenience as a critical factor in information-seeking behaviors. Library & Information Science Research, 33, 179–190.

This article unpacks the findings of two multi-year studies designed to answer why individuals choose to consult one source over another and what factors play into that decision. The authors found that today's users have limited time and attention, so convenience is a prime motivator in information seeking. The studies found that convenience is especially important to millennials, but the factor held weight across multiple categories of survey respondents, and it mattered for both academic and everyday-life information seeking. Alongside ease of access, complete access was determined to be an important factor for information seekers. The authors suggest that libraries promote online the resources to which they offer complete access.

Bowen, W. R., Siemens, R. G., Thomas, S. F., Roast, C. R., & Ritche, I. E. (2003). Present and Future Directions in Developing Online Resources for Renaissance Studies. 63–65.

This conference proceeding looks at three developing initiatives in online Renaissance study and research: creating an online database of digital resources for Renaissance scholars; the conjunction of dynamic and hypertextual edition-building practices in the digital environment; and the Active Reading project. The scholars address the challenges of collaboration, access, knowledge management, standards, and delivery as they relate to the production of these resources.

Dunsire, G., & Willer, M. (2011). Standard library metadata models and structures for the Semantic Web. Library Hi Tech News, 28, 1–12.

This essay addresses recent initiatives to "standardize library metadata models, structures, and vocabularies." Specifically, Dunsire and Willer discuss the standards that the International Federation of Library Associations and Institutions (IFLA) has proposed for the Semantic Web. "Many of IFLA's standard bibliographic models and applications are inter‐related on a formal or informal basis. The majority of formal relationships consist of direct references from one standard to another, or of mappings between elements from different standards which have been approved by the IFLA groups which maintain the standards." The DCMI RDA Task Group, created in 2007, has three goals: modelling entities using a specific vocabulary of properties and classes; identifying vocabularies; and developing a Dublin Core Application Profile.
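
As a small, hedged illustration of the kind of machine-readable description such element sets enable (not an example taken from the article), the sketch below uses the rdflib library and the Dublin Core terms vocabulary to describe a single bibliographic resource; the resource URI and values are made-up placeholders.

    # Sketch of describing one bibliographic resource as RDF triples using
    # Dublin Core terms; the URI below is a hypothetical placeholder.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS, RDF

    g = Graph()
    book = URIRef("http://example.org/resource/paradise-lost")  # hypothetical

    g.add((book, RDF.type, DCTERMS.BibliographicResource))
    g.add((book, DCTERMS.title, Literal("Paradise Lost")))
    g.add((book, DCTERMS.creator, Literal("Milton, John")))
    g.add((book, DCTERMS.issued, Literal("1667")))

    print(g.serialize(format="turtle"))

Publishing descriptions as triples of this kind is what makes it possible to map elements from one standard onto another, the activity the IFLA groups carry out at the level of whole element sets.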

Hollink, L., Schreiber, G., Wielemaker, J., & Wielinga, B. (2003). Semantic Annotation of Image Collections. In Workshop on Knowledge Markup and Semantic Annotation, KCAP ’03, 0–3.

This paper begins by acknowledging the increased sophistication of information retrieval since the advent of the semantic web. However, the authors argue that the semantic web has introduced the problem of differentiating between relation and relevance. Using a case study of 202 paintings from Artchive, the researchers compare the expansion and precision of queries using exact phrases, hyponyms, or mixed relations. Their objective is to pinpoint the right balance between enough relations (i.e., high recall) and too many relations (i.e., compromised or low precision). The results indicate that queries expanded with hyponyms, holonyms, and meronyms maintain precision while increasing recall. However, introducing hypernyms alongside other relations was more detrimental than beneficial.
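
The general technique being evaluated can be sketched with WordNet relations. This is a toy illustration of relation-based query expansion, not the authors' own system or vocabularies: a search term is expanded with hyponyms, part meronyms, and member holonyms, the combination the study found preserves precision while improving recall.

    # Toy sketch of query expansion with WordNet relations; requires the
    # NLTK WordNet data (nltk.download('wordnet')).
    from nltk.corpus import wordnet as wn

    def expand_query(term):
        """Return the term plus lemmas of related synsets."""
        expansions = {term}
        for synset in wn.synsets(term):
            related = (synset.hyponyms()
                       + synset.part_meronyms()
                       + synset.member_holonyms())
            for rel in related:
                expansions.update(lemma.replace("_", " ")
                                  for lemma in rel.lemma_names())
        return sorted(expansions)

    print(expand_query("painting"))

Adding synset.hypernyms() to the related list would broaden the query further, which is the kind of expansion the study found costs more in precision than it gains in recall.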

Larson, R. R., & Janakiraman, K. (2011). Connecting Archival Collections: The Social Networks and Archival Context Project. In S. Gradmann, F. Borri, C. Meghini, & H. Schuldt (Eds.), Research and Advanced Technology for Digital Libraries, 3–14.

This paper describes the Social Networks and Archival Context (SNAC) project, a database that "merges information from each instance of an individual name found in the Encoded Archival Description (EAD) resources, along with variant names, biographical notes, and their topical descriptions." The database merges information from different sources to offer a rich and varied insight into the social-historical context of the name provided in the search query. In this article, Ray Larson and Krishna Janakiraman address the processes involved in, and the issues that arise from, deriving information from the database, such as name-matching and merging information from different sources. They also describe the SNAC prototype interface for public use. The SNAC project is still under development, and the research results are being used to improve the database.
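
The name-matching step mentioned here can be illustrated with a minimal sketch using standard-library fuzzy string matching. This is a generic heuristic, not SNAC's actual matching and merging algorithm, and the sample name strings are invented.

    # Simple illustration of matching variant name forms before merging records.
    from difflib import SequenceMatcher

    def normalize(name):
        """Lower-case and strip punctuation so variant forms compare cleanly."""
        return "".join(ch for ch in name.lower()
                       if ch.isalnum() or ch.isspace()).strip()

    def same_person(name_a, name_b, threshold=0.85):
        """Heuristically decide whether two name strings refer to the same entity."""
        ratio = SequenceMatcher(None, normalize(name_a), normalize(name_b)).ratio()
        return ratio >= threshold

    print(same_person("Whitman, Walt, 1819-1892", "Whitman, Walt (1819-1892)"))  # True

Records whose normalized name forms score above the threshold would then be candidates for merging, with the variant forms retained as alternate names.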

Hyvönen, E., Viljanen, K., Tuominen, J., & Seppälä, K. (2008). Building a National Semantic Web Ontology and Ontology Service Infrastructure – The FinnONTO Approach. In S. Bechhofer, M. Hauswirth, J. Hoffmann, & M. Koubarakis (Eds.), The Semantic Web: Research and Applications, 95–109.

This paper presents the vision and results of creating a national-level, cross-domain ontology and ontology service infrastructure in Finland. The novelty of the infrastructure rests on two ideas. First, a system of open-source core ontologies is being developed by transforming thesauri into mutually aligned lightweight ontologies, including a large top ontology that is extended by various domain-specific ontologies. Second, the ONKI Ontology Server framework for publishing ontologies as ready-to-use services has been designed and implemented. ONKI provides legacy and other applications with ready-to-use functionality for working with ontologies at the HTML level via Ajax and semantic widgets. The idea is to use ONKI to create mash-up applications in a way analogous to using Google or Yahoo Maps, except that external applications are mashed up with ontology support.

Knight, K. (2006). Collex. Transliteracies Project.

This research report and evaluation by Kim Knight summarizes, describes, explores, and critiques the Collex tool. The function of Collex is to facilitate searching across "different peer-reviewed, scholarly databases." The tool allows users to collect resources in "exhibits" that can be stored privately or publicly. The aim of Collex is to move away from "centrally organized, hierarchical content and away from a data structure in which the complex relationships of an archive are only apparent to those who are intimately familiar with the software platform." To accomplish this, NINES aggregates search results on its site and then, when a user selects an object, directs the user to the original online source; "thus, Collex retains the unique interpretive and presentational framework of each individual archive or journal." Knight concludes by asserting that "Collex is a unique and exciting tool." She argues that the "recursive user interactions highlights the unique benefits that online research has to offer" and that the "benefits of doing research with Collex [...] is likely to entice very traditional scholars to engage in more robust online research activities."

Mandell, L. (2012). Brave New World: A Look at 18thConnect. Age of Johnson, 21, 299–307.

To begin this article, Laura Mandell calls on researchers to collaborate with librarians, computer scientists, and administrators in order to preserve our "many valuable digital materials." Mandell presents 18thConnect as an exemplar of this type of collaborative effort. Taking its inspiration from the established NINES project, 18thConnect "offers scholars access to electronic resources by providing an aggregated integration of those resources." 18thConnect is also supporting the development of highly tuned OCR software, Gamera, which the team is training to distinguish the unique characters of older texts without errors. Future 18thConnect plans include the integration of text analysis tools such as "Voyeur" (developed by Geoffrey Rockwell and Stéfan Sinclair). As Mandell concludes, "18thConnect aspires to be a community of scholars working together to make our research and publication environments all that they can be, all that we want them to be."

Castell, T. (1997). Maintaining Web-based Bibliographies: A Case Study of Iter, the Bibliography of Renaissance Europe. Proceedings of the ASIS Annual Meeting, 34, 174–182.

Tracy Castell provides an overview of the information management tools used by the Iter bibliography. Castell begins by articulating why Iter decided to design its bibliography for the web: accessibility and updatability. These principles played a large role in the selection of information management tools, as the Iter team understood that the design of the user interface and search system would affect how the audience interacted with the resource. Iter elected to use a combination of MARC (Machine-Readable Cataloging), AACR2R (Anglo-American Cataloguing Rules, second edition revised), LCSH (Library of Congress Subject Headings), and DDC (Dewey Decimal Classification). Castell notes that as the bibliography grows and develops, new information management issues will arise that will necessitate a revision of the current system. To conclude, Castell lists some of the Iter team's anticipated concerns and questions about future information management.