A Definition of Digital Humanities, Revisited!

The first post on this blog detailed my initial impressions of the Digital Humanities, which culminated in my attempt to define the field. This final post will review that initial definition, my updated understanding of the field in relation to it, what I believe I got right in my initial thoughts, and how we can improve upon all of this with an updated definition!

Below is the initial definition, quoted directly from that first post:

Digital Humanities encompasses both an academic discipline and digital representations of research that, when put together, seek to act as a mediator between the academic and the public. The current landscape of the field tends to focus on tackling difficult topics: representation, intersectional analysis, accessibility, and other areas that are discussed through digital mediums globally. This endeavor must be multidisciplinary in nature, while fostering an environment that encourages new types of research as digital tools only increase in effectiveness and capabilities. Digital Humanities is at its best when its foundation is based on expanding knowledge upon different cultures, structural issues, and the experiences of others with the explicit goal of better understanding the world around us.

Personally, I think that this definition is consistent with some principles surrounding Digital Humanities that we have explored over the past three months or so. As the definition stands, I will keep the parts that I bolded in the original post (although the wording may be altered here and there). We have explored different types of programs and projects that incorporate interdisciplinary study, are oriented towards public audiences, and have been adjusting to the ever-changing landscape of technology. I also believe that the distinction of DH as both a field and a mediator remains important to preserve in our new definition, because its accessibility allows for target audiences of all kinds, visual information for the viewer to digest, and the ability to engage the public without needing an article- or book-length text to make one’s point.

The entire definition is not in bold because I believe we can heavily revise the remaining portions, as my understanding of the field has changed since late August and early September. I still believe these portions to be true; however, more context needs to be added for the definition to represent my current thoughts. Features of those portions will be included in the revised definition, alongside other new elements.

“The current landscape of the field tends to focus on tackling difficult topics: representation, intersectional analysis, accessibility, and other areas that are discussed through digital mediums globally.”

To start with the first portion of text that is not in bold (located directly above): I do believe that the current landscape of the field is engaged with topics that have implications for broader society. However, I now realize that these elements are circumstantial and do not properly represent the point I was trying to make at the time of writing. These are components that make for good research and support the communication of the author’s points, rather than being the focus itself. In the new definition, these elements will work to support assertions, rather than being the assertions themselves.

“Digital Humanities is at its best when its foundation is based on expanding knowledge upon different cultures, structural issues, and the experiences of others…”

To address the second portion of text that is not in bold: I still believe in the sentiment that DH is at its best when it is able to draw conclusions with broader implications that help us understand the world. However, that phrasing also confines the field to projects that explicitly tackle such topics, thus discrediting projects that exclusively digitize and/or transcribe texts, trace connections between a specific group of people, or track the progression of objects, ideas, or trends through visual/audio mediums. This is why the last part of that section, “…with the explicit goal of better understanding the world around us,” will make it into my final definition. I believe that was the true sentiment of the overall statement, which can now be elaborated upon with the understanding we have gained throughout our journey.

The following text is my revised definition that I believe better represents such a diverse and complicated field…

Digital Humanities encompasses both an academic discipline and digital representations of research that, when put together, seek to act as a mediator between academic research in the humanities and the public through target audiences. Work in Digital Humanities includes–but is not limited to–utilizing podcasting, mapping software, text mining and/or analysis, network graphing, online exhibitions, digitizing collections, crowdsourcing projects, and more.

Projects in the Digital Humanities tend to be at their best when they feature multidisciplinary collaboration, while also fostering an environment that encourages new types of research as digital tools continue to improve in effectiveness and in their means of visualizing data.

Also, biases should be kept in mind to properly incorporate marginalized voices in one’s research, language, presentation, and analysis. This includes one’s personal biases and the biases that have been formed during the course of human history, such as: gender identity, race and ethnicity, sex, sexual orientation, representation in corpora/archives, class, and other such identifiers.

Projects in the Digital Humanities have the explicit goal of better understanding the world around us through determining and cultivating target audiences, figuring out the medium in which they will present their research, and creating projects that are free and accessible to the public.

This newer definition–as concisely as I deemed possible–incorporates my updated understanding of the Digital Humanities. I believe we have better clarified what DH projects can look like, the process, and some core features that make DH projects successful and representative of the field!

With that, I want to thank you for joining me on this blog that I created to familiarize myself with the Digital Humanities by engaging with writing and projects related to the field.

Podcasts & The Digital Humanities

Podcasting as a medium has only increased in prominence and popularity over the course of the 21st century. It serves a variety of niches and interests, whether these be academic, comedy, news, pop culture, and so on. It is arguably the most accessible form of the Digital Humanities out there, having a platform built into most smartphones! This post seeks to explore this phenomenon: how podcasting sets itself apart, and what it has in common with other digital tools I am familiar with.

I want to look first at what sets podcasting apart. Taking history podcasts as an example, they usually employ modes of storytelling that I have not seen replicated in other DH projects. First, podcasts are usually done in an audio-only format (though some include video of the host(s), or incorporate sources that require visual aids). This leaves more room for oral histories, interviews with reputable guests or members of the public, and so on (usually the former). I also think the hosts and studios that produce a podcast get to cater to their audience through a variety of related topics, rather than focusing in on a few major themes. If a podcast has more than one episode and not much continuity between episodes, the audience can pick and choose. The last major thing that sets podcasting apart is its reach. Apple Podcasts and Spotify, for instance, host an unfathomable amount of content, so one must brand a show well; but if you capture an audience, the potential and accessibility are remarkable. Overall, it is probably the most popular form of research to come out of the Digital Humanities in the 21st century.

While podcasting is quite unique in the realm of DH, it is not so different as to be excluded entirely. One must consider that it has a large reach oriented towards catered audiences, as DH projects tend to; I do think podcasting holds the most potential for general audiences, however. Regardless, it still fits within those parameters. There is also great potential for interdisciplinary collaboration, as guests can shape the conversations or topics discussed in any given episode based on their expertise and what they can add to the broader conversation. Podcasts also rely on similar distribution services, existing on a server; otherwise, an episode would sit on one’s hard drive, inaccessible to any listener unless sent the file(s). Lastly, I think the express purpose of podcasts in the Digital Humanities is educating those willing to engage with the content. If I were to describe these elements without categorizing them under the umbrella of podcasting, one could point to many different projects in Omeka, kepler.gl, Palladio, and so on. There is a lot of versatility to the Digital Humanities in general, but podcasting is among my (and many others’) personal favorites, whether listeners realize they are getting something out of it or not!

I would have said podcasting is an emerging mode of conducting public engagement and research, but the sheer volume of shows out there proves that the time for such a characterization has already passed. While there are inherent risks and dangers to podcasting, just as there are with any other mode of communication, I think it is quite unique in its approach, and one that I hope sticks around for a long time.

What Can Digital Humanities Do With Crowdsourcing?

Crowdsourcing in the Digital Humanities can entail a lot of different actions and concepts, so this post seeks to explore the effect the public can have in contributing to the Digital Humanities, whether they realize it or not! In the case of DH, crowdsourcing is voluntary and carried out by contributing to some sort of project. This can mean a variety of things, but I want to focus on the areas in which I believe the public can make the most significant difference in the field: the transcription and/or digitization of texts in existing collections run by educational or non-profit institutions, and community-driven collections.

The arduous process of digitizing collections to their fullest extent can be a daunting task for one group to accomplish. Some projects seek the aid of the community; a newer example is the Library of Congress (LOC). Their “By the People” project asks anyone to transcribe text from scanned images of its collections. The types of sources vary: handwritten letters, sheets of music (with lyrics and some indicators for the performers), promotional material, and more! Without even making an account, one can go in and participate in transcribing texts. After an entire scan is transcribed, it is sent in for review by other users and the LOC. Projects liken this process to a “puzzle” in some cases, and it is completely voluntary and non-committal.

The site will be linked below if you wish to explore the collection, or contribute!

Link: https://crowd.loc.gov/

Another way the community engages in crowdsourcing is by doing it themselves! There are plenty of forums and projects dedicated to involving the community by offering a place to store and showcase collections digitally or traditionally. This can include common narratives: family memories digitized to tell a greater story, records of lineage, and ties to the military, to name a few examples. When the public feels like they can contribute to something that honors memory, provides entertainment, or informs others, there is real value in contributing to a body of work or a project.

What seems challenging is selling that to the public. It is all about the framing, in other words. As I mentioned before, formal projects characterizing the transcription of scans as “puzzles” highlight an interesting approach to contributing to DH without FEELING like one is contributing to something “boring” or perhaps “nerdy.” There must be an emphasis on non-committal entertainment and the true value of the work. Everyone is different, which makes outreach such a meticulous endeavor.

Overall, crowdsourcing can be full of uncertainty surrounding retention and garnering an adequate audience. When one expects small contributions from hundreds, they may instead receive dozens of dedicated users who see the vision of the creator(s). The Digital Humanities needs the public, and can always learn from them and innovate in involving them. Crowdsourcing is a great step towards a sort of hands-off engagement, and more and more steps are being taken to ensure that voices are heard and contributions are recognized across the Internet!

How to Read Crowdsourced Knowledge

Wikipedia has a mixed reception in the academic world, creating an interesting conversation about the utility of crowdsourced information and its current iteration in the form of AI. This post will explore the current states of Wikipedia and ChatGPT alike, hopefully coming to a satisfactory conclusion about how we can view them moving forward.

Link: https://en.wikipedia.org/wiki/Digital_humanities

I want to use the Wikipedia page on Digital Humanities as a basis for analysis (linked directly above this paragraph). This page has an interesting history, having been created all the way back in 2006! This was during the earlier days of the formalized Digital Humanities field, which is reflected in several different places, including this page. For those unaware, the “Talk” page on Wikipedia articles (located directly under the article title) contains user discussion surrounding the page, highlighting anything from potential additions, to places where things should be revised or removed, to general discussion about the future of the article. On the page for the Digital Humanities, users have discussed a lot of revisions, but the most striking detail that stood out to me is the lack of a clear definition for the Digital Humanities, as one user addressed during earlier iterations of the page, shown in the image below this paragraph.

Although in 2023 the page is quite thorough in highlighting criticisms, definitions, and examples of visuals and projects generated by scholars in the Digital Humanities, it was not always like this! The contributors to this page addressed the vagueness of the Digital Humanities just as the field did over several decades and iterations. One can see every version of the page by exploring the “View History” tab, located on the right side of the same bar as the “Talk” page. Every edit and reason for editing appears, which can be filtered by user, date, type of edit, and so on. This shows a very precise evolution of the page, and one can start to pinpoint where changes occurred if one is so inclined. Pictured below is the general “View History” page layout, and an example of the DH page when it was first established by user “Elijahmeeks”:

All of this is to showcase the progression and utility of crowdsourcing information, and the level of investment core users and casual editors collectively put into a page over the span of over a decade! I personally use Wikipedia as a starting point for casual or academic questions I may have, looking into the source material they utilize first and foremost! I think that depending on how one utilizes crowdsourced material, there can be a lot of benefits for users across the Internet!

This brings us to AI, which essentially automates crowdsourced information that the user can interact with in different forms. In the case of ChatGPT, a free service that is only growing in popularity, one cannot see what exactly it is pulling from. When asked, the bot will say something along the lines of pulling information from a database that includes textbooks, reputable sources, and other such places. It is vague, and there is no clear way to check its work. With Wikipedia, it is all there for the user; one can even see which user made how many edits during specific timeframes! What is interesting is that when asked questions about the Digital Humanities, ChatGPT gave decent answers compared to Wikipedia and other source material I have read. It closely guards its corpus, and results generated by AI are starting to pop up everywhere, with varying degrees of accuracy. Even with the decent success rate of this experiment I conducted, I am not sure where that leaves its impact throughout the rest of the Internet.

To summarize my feelings concisely, I am skeptical at best, but there is potential if it is developed ethically. I see it being inevitable regardless of my feelings on the matter, so time will only tell where this new age of crowdsourcing takes us, and if we will be begging for the days and uncertainty of Wikipedia ten years from now. It is an interesting thought experiment though, and I cautiously encourage you to try it for yourself on topics that you are passionate and/or well-read in.

Network Analysis with Palladio

This post will explore Palladio, an online digital tool that creates malleable visualizations to interpret certain formats of data.

Initially, the process of using Palladio is similar to kepler.gl: one submits data (in this case, I used .csv files) into its repository, which the program then converts into readable data to plot one’s points to one’s specifications. Palladio is not a full-fledged mapping tool; rather, it uses the data to create connections and visualizations of patterns, similar to Voyant, a text-mining tool. In my exploration of the program, I utilized data surrounding interviews with formerly-enslaved people in Alabama, mainly creating “network maps” and “network graphs.” Although there are other functions one can explore through this service, I mainly utilized those two.

Network maps plot data through simple points, or through “point-to-point” plotting, which displays connecting lines through the data, indicating a path or another sort of relation between two or more points. This is useful in identifying patterns within geographical data, but the full or specified metadata is not displayed, as that is not the primary function of Palladio. It is more about the overarching connections present in the data, which the next tool I utilized exemplifies further.

There are other customization features in the “map” section, split up into “tiles” and “shapes.” Tiles offer more geographic, 3-D means of representing a space: terrain, streets, satellite imagery, infrastructure, and custom tiles that the user can insert! Shapes allow the user to insert markers that indicate signifiers determined by the user. This system is open-ended enough for the user to do anything they want with it, whether for emphasis or to serve as a key/legend.

Moving on to the “tables” section, network graphing takes the data one submitted and identifies themes based on the parameters one sets. For instance, when exploring the interviews of formerly-enslaved people, one can find connections between the sexes and the topics discussed during the interviews, age groupings, the regions in which the interviews took place, and other combinations one wishes to explore further (mostly between one pair of data columns within the overall document). This essentially functions as an interconnected web, creating connections between the data entries so identifiable patterns can emerge and be interpreted by the user. It is all accessible, easy to adjust, and quite flexible in what it can do! Some screenshots will be attached at the bottom of this post to give you a sense of what I am describing. Due to the volume of data, these screenshots represent the visualizations Palladio can produce, rather than the content itself.
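The pairing logic behind a network graph can be approximated in a few lines of plain Python: count how often values from two chosen columns co-occur in the same row, which is essentially what becomes an edge (and its weight) in the visualization. This is only a conceptual sketch, not Palladio’s actual code, and the sample rows and column names below are hypothetical stand-ins for the Alabama interview data.

```python
import csv
import io
from collections import Counter

# Hypothetical sample standing in for the real interview .csv;
# the actual column names and values are not reproduced here.
sample_csv = """interviewee,work_type,topic
A,field labor,family
B,domestic labor,family
C,field labor,religion
D,field labor,family
"""

def edge_counts(rows, source_col, target_col):
    """Count how often each (source, target) pairing occurs --
    the pairing a network graph would draw as a weighted edge."""
    return Counter((row[source_col], row[target_col]) for row in rows)

rows = list(csv.DictReader(io.StringIO(sample_csv)))
edges = edge_counts(rows, "work_type", "topic")
print(edges[("field labor", "family")])  # 2 rows share this pairing
```

Swapping in a different pair of columns (say, region and topic) yields a different graph from the same file, which mirrors how Palladio lets one re-pair dimensions without re-uploading the data.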

There are also other visualization methods, such as “galleries” and other formats for tables. I did not utilize these functions, so although I cannot describe their functionality in-depth, it is worth mentioning that there are a variety of ways to represent user data.

Overall, Palladio enabled me to generate context and seek patterns in the data that I may have otherwise not spotted! The best part of the experience for me was analyzing how the type of enslavement corresponded to the topics brought up in the respective interviews through network graphing. Although overwhelming at first, organizing the data by dragging circles to desired locations within the space (another great feature) enabled me to identify patterns in language, priorities, and the hierarchy of each condition represented in the data. It was something I was not entirely expecting, mostly because I did not know what to expect out of this program. Such analysis would have been possible through manual review, just not on this scale. This kind of technology is interesting, and something that I want to utilize more often.

Comparing Digital Tools (Voyant, kepler.gl, and Palladio)

Over the last few weeks, I have been exploring a variety of tools that are utilized in the Digital Humanities to read, interpret, map, and visualize data. Among these tools are “Voyant,” “kepler.gl,” and “Palladio.” This post will explore what each program does, my experiences with all three, and how I will approach utilizing these tools moving forward. As I gain a better understanding of digital tools and the Digital Humanities as a field, it has been really interesting to observe the capabilities of these programs!

Starting with Voyant: this is a tool that takes one’s submitted corpus and analyzes it through different generated visualizations, based on preset parameters and filters that one can modify to one’s needs. Without modifying any of the windows, this includes five main features: the “cirrus” (essentially a word cloud), a “reader” that displays the chosen text and the words/phrases one explores in other sections, a “trends” graph that charts the relative frequency of the most-used words by default (and can be modified to analyze specific texts and/or words), a “summary” tool that identifies distinctive qualities of the submitted texts and general trends throughout the corpus by document, and a “context” tool that finds instances of a selected word/phrase along with the preceding and following words in the sentence in which it is used. Voyant shares some of the capabilities of Palladio, which will be discussed shortly.
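Under the hood, the cirrus and trends views start from something very simple: tokenize the text, drop common stopwords, and count what remains. The sketch below shows that first step in plain Python; it is a rough approximation of the idea, not Voyant’s implementation, and the stopword list and sample sentence are my own.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; Voyant ships much larger ones.
STOPWORDS = {"the", "a", "and", "of", "to", "in", "was", "by", "i"}

def top_terms(text, n=3):
    """Tokenize, drop stopwords, and count word frequencies --
    roughly the counting step behind a word cloud or trends graph."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(n)

doc = "The old plantation stood by the river, and the river ran past the plantation."
print(top_terms(doc, 2))
```

The visualization layer then just maps these counts onto font sizes (cirrus) or line heights (trends); the hard analytical choices are in the tokenization and the stopword list.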

Kepler.gl is a mapping tool that takes a set of data (traditionally .csv files generated through programs like Microsoft Excel) and maps it based on coordinates and other relevant metadata. There are a lot of different visualizations one can make with programs like kepler.gl, including heat maps, data clusters, point maps, timelines, and more! The capabilities of kepler.gl, at least in terms of what I explored, provide a lot of variety for visual storytelling. It is best reserved for regional analysis, as plotting points/trends on a global scale can get quite complex, especially when showing broader connections between points and the conclusions drawn from them. This is not to say that it is not possible, but rather that it is not the scale mapping projects typically deal with.
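The input kepler.gl expects is really just a flat table with coordinate columns. The sketch below builds a minimal such .csv in memory; the place names and coordinates are hypothetical, and the claim that kepler.gl auto-suggests a point layer from column names like “latitude”/“longitude” is my understanding of its field detection, so check the current documentation before relying on it.

```python
import csv
import io

# Hypothetical rows shaped like a kepler.gl upload: one record per
# point, with coordinate columns the tool can detect by name.
records = [
    {"name": "Interview site A", "latitude": 32.3668, "longitude": -86.3000},
    {"name": "Interview site B", "latitude": 33.5186, "longitude": -86.8104},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["name", "latitude", "longitude"])
writer.writeheader()
writer.writerows(records)
print(buffer.getvalue())
```

Extra columns (age, sex, interview date, and so on) simply ride along in the same rows and become available for color coding and filtering once the file is uploaded.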

Lastly, Palladio is a tool commonly utilized to make visualizations of data and connect two or more parameters to each other. Palladio does have mapping capabilities as kepler.gl does, which is quite useful (although quite limited in comparison)! However, its main appeal lies in its variety of visualization techniques. I utilized “network maps” and “network graphs,” so I will speak on the functionality of both to represent the technology as a whole. After uploading files (in this case, a few .csv files), I was able to create visualizations through two parameters that I set. I did this for several pairings involving interviews with formerly-enslaved people in Alabama. With the info contained in the .csv files, I was able to determine connections between the type of work enslaved people were forced into and the topics they discussed in their interviews, among other trends in the data present in the files. Although visualizations can get a bit messy when analyzing many different points of data and themes, there are still interesting conclusions one can draw, or at the very least explore, with visual aids.

I believe that all of these tools can pair well together in some form. Voyant is great for text-based analysis and as a starting point for identifying broader trends in large corpora. Voyant and Palladio, for instance, have the potential to pair well together! Although situational, running source material through Voyant first could pre-emptively identify themes and make parsing the data quite a bit easier once it is converted and transferred to Palladio.

I believe that kepler.gl and Palladio have the most potential of these three, however. While kepler.gl is best used for mapping, Palladio better identifies patterns within the data itself. If nothing else, kepler.gl can map the .csv files, while Palladio provides basic mapping functionality alongside visual connections that Voyant could not create effectively with this type of format. Voyant could potentially analyze the source material itself, however, and that is where I believe its strength lies when comparing these three tools.

Overall, I believe that all three of these tools have great potential and purposes, and should be used together! Although different projects/research will call for different needs, I believe that if one has the time and knowledge, these tools will be essential for a growing digital catalogue of source material and means of digitization. I would assume and hope that these methods will only improve in accessibility and availability over time, so it would not hurt to familiarize oneself with the world of digital tools!

My experiences with these tools make me wonder what else is out there. My professors at George Mason University have encouraged me to familiarize myself with the Digital Humanities, so I decided to take two courses related to the topic this semester. Through these courses, I am utilizing a variety of different tools, and in the process of completing projects using online exhibition tools and other mapping programs. Both traditional means of research and newer ideas (at least to me) revolving around digital representations for the public have been at the front of my mind all of this year. I have found new ways to uncover new angles, narrow my research parameters, and create accessible projects for topics that I care deeply about. Digital tools, while intimidating at first, have allowed me to see the work that goes into research that targets the public, which I am greatly interested in pursuing in some form. The present day is not just museums and websites, but so much more.

Mapping with Kepler.gl

In this post, I will be discussing digital mapping as a whole through kepler.gl, a free online mapping tool that serves a lot of different analytical functions! Based on my experiences with this program, I could analyze data through coordinates, trends relating to proximity, clusters of mapping data, timelines, heat maps, color coding based on a parameter in the uploaded file(s) (in this case, a .csv file made in Microsoft Excel), and more! It is a tool with a lot of versatility: different filters, mapping options, and the ability to show connections within the data that the user can then interpret through other means!

At the bottom of this post is an example of a map created from data surrounding interviews with formerly-enslaved people in the state of Alabama in the 1930s. It specifically plots each person’s name, age, sex, where they were interviewed, and their place of birth. Additional metadata from the source material could be enabled as well, meaning there is a lot of different information the map can provide. Visualizations utilizing this type of source material can be useful for analyzing the interviewer and interviewee alike. For the interviewee, it can visualize demographic data, encouraging questions surrounding the formerly-enslaved population of Alabama at the time, their ages and life expectancy, where they came from, and who ended up where at the time of the interviews (and perhaps “why” in some cases). On the side of the interviewer, it can document where these interviews took place, who conducted them (if that was inserted into the map), and the timeline in which they were conducted. Through timelines, another feature mentioned previously, one can track the time, day, month, and year in which interviews took place, while isolating different points if that data was entered into the submitted files.
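The timeline feature described above boils down to filtering rows by a date column. The sketch below does that filtering manually in Python, using hypothetical interview dates (the real files and their column names are not reproduced here); it is the by-hand version of sliding kepler.gl’s time filter across the data.

```python
import csv
import io
from datetime import date

# Hypothetical stand-in for the interview data, with ISO-format dates.
sample_csv = """interviewee,interview_date
A,1936-05-02
B,1937-01-15
C,1936-11-30
"""

def interviews_in_year(rows, year):
    """Keep only the rows whose interview date falls in the given year."""
    return [r for r in rows if date.fromisoformat(r["interview_date"]).year == year]

rows = list(csv.DictReader(io.StringIO(sample_csv)))
print(len(interviews_in_year(rows, 1936)))  # 2 of the 3 sample rows
```

A mapping tool animates this same subset operation continuously, redrawing the visible points as the time window moves.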

I first utilized kepler.gl during my undergraduate education, so I had some prior experience before revisiting this tool (admittedly, it had been a while since I last utilized tools of this nature in any capacity). Upon revisiting it, I remembered the potential of not only the tool, but the practice itself. It is a multi-step process that necessitates thorough data covering multiple aspects of whatever one is researching and eventually plotting on a map. I was vaguely aware of this process, but seeing it unfold again and being able to edit the information oneself is a valuable experience. If nothing else, it makes me appreciate a process that the public can take for granted when utilizing these types of services, even in everyday life. Geospatial research is a crucial tool in the digital humanities, and one of my favorite methods of visualizing data!

Text Analysis with Voyant

Content Warning: presence of the N-word in visuals (with the hard-R), topics surrounding chattel slavery in the United States

Voyant is a digital text-mining tool that allows the user to analyze a submitted corpus through different visual tools. These include (but are not limited to) word clouds, relative frequency graphs for chosen words/phrases across one or all of the documents in one’s corpus, a context tool to analyze the preceding and following words in sentences containing a chosen word/phrase, a text reader to provide the full texts being analyzed, and a summary tool that spots patterns and distinctive qualities of each document in one’s corpus. Each tool has different parameters one can set to fit one’s analytical needs, creating a malleable tool with a lot of different options for exploration!

Text mining can be useful if one’s sources relate to each other in some way, but a corpus’ overall length remains an obstacle for extensive and effective analysis. Some projects utilize millions of pages of text; digital tools like Voyant were created to parse through that raw data and create a user-friendly experience.

Voyant’s utility is in its ability to find core themes within documents or throughout the corpus. Below is an example of a word cloud generated through the “cirrus” tool, using a corpus of interviews with formerly-enslaved people in the state of Georgia:

This was useful in determining the topics, themes, and framing of the interviews. The word cloud shows themes of racial disparity, age, a sense of place through the home, plantations and the hierarchy inherent to slavery, and some instances of African-American Vernacular English (AAVE). While the cloud lacks the full context of the interviews, it sums up their different aspects and can serve as a representative introduction to the content of specific documents or corpora.

The "context" tool was another feature that allowed for a much better understanding of this corpus. In the Maryland interviews, the "summary" tool flagged the distinctive word "rezin," which appears seven times. Investigating further, it turned out that this referred to a local freeman, "Uncle 'Rezin' Williams." This not only brought an individual's story to life, but also gave me an opportunity to explore something specific to the state of Maryland through these interviews. The context tool, filtered to show instances of "rezin," is pictured below:
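A context (or "keyword-in-context") display like this is conceptually simple: find each occurrence of the keyword and print a window of words on either side. A minimal sketch in Python (the example sentence is invented; it only echoes the Rezin Williams anecdote above):

```python
import re

def kwic(text, keyword, window=4):
    """Keyword-in-context: each match plus `window` words on either side."""
    words = re.findall(r"\w+", text)
    hits = []
    for i, w in enumerate(words):
        if w.lower() == keyword.lower():
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"{left} [{w}] {right}")
    return hits

for line in kwic("Uncle Rezin Williams was a freeman, and Rezin lived in Maryland.", "rezin"):
    print(line)
```

Real tools layer sorting, filtering, and links back to the full text on top of this basic windowing idea.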

Prior to this exploration, I was not familiar with text-mining tools or how they worked. Voyant (and, I imagine, other programs in the same vein) enabled me to explore texts in a whole new way. Being more of a visual learner, I appreciate when projects or texts have an interactive element that deepens one's understanding of the topic being explored. Visual analysis driven by text is a way to connect with the broader public while succinctly conveying one's points through more accessible means. I am grateful for tools like this, and I hope they are only utilized more in the years to come!

Why Metadata Matters

Metadata, to me, can be defined simply as "the details of data points." By this, I mean that metadata serves as an organizational tool while also providing context for an object or text. If one were to manually fill in the metadata for an image of one's own common frying pan, for instance, one would record its dimensions and raw materials, when (and potentially where) the image was taken, the file format, copyright information, and so on. If the object or text is not one's own, then one would also need to add where the item was found and analyzed, whether that be an archive, a collection, or a similar resource.
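In practice, such a record is just structured key-value pairs. A minimal sketch of the frying-pan example in Python (the field names loosely echo Dublin Core conventions, and every value is invented for illustration):

```python
# Illustrative metadata record for the frying-pan image described above.
frying_pan_image = {
    "title": "Cast-iron frying pan, top view",
    "dimensions_cm": {"diameter": 26.0, "depth": 4.5},
    "materials": ["cast iron", "wood (handle)"],
    "date_created": "2024-03-15",
    "place_created": "Fairfax, VA",
    "format": "image/jpeg",
    "rights": "Copyright held by the photographer",
    "source": "Personal collection",
}
print(frying_pan_image["format"])
```

Tools like Omeka and Tropy present essentially this structure as fill-in templates, which is what keeps records consistent and searchable across a whole exhibit or archive.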

Digital tools assist immensely in keeping this information together in an ethical and efficient way that provides proper context and credit. However, the effectiveness of these tools is dependent on the user compiling their primary and secondary sources. Omeka and Tropy, for instance, provide premade and customizable templates to fit the needs of the source one is adding to their online exhibit or archive, respectively.

To understand the importance of proper, manually generated metadata, we can start with the reliability of records versus human memory. Research requires a multitude of sources to make a convincing and holistic argument or narrative. When considering the arduous task of conducting research itself (let alone turning that research into a coherent piece), one must recognize that these programs exist for a reason. The field of history, and Digital Humanities in general, depend on ethical citation. They are fields that build off centuries of analysis and research to improve our understanding of the world. The manual creation of metadata, in my eyes, is a two-step process: creation, then observation. If one values one's peers, it is vital to document where one's objects originate, which will help others build off of one's findings.

George Mason University Database Review (African American Periodicals: Voices of Black society and culture, 1825-1995)

Link to Database: https://infoweb-newsbank-com.mutex.gmu.edu/apps/readex/?p=EAPX

Overview: The African American Periodicals: Voices of Black society and culture, 1825-1995 database features “news, commentary, advertisements, literature, drawings and photographs” from African American society and culture in the United States, as described by their “How to use this database” page. This comes from the curation of “170 periodicals from 26 states” originating from collections at Harvard and the Wisconsin Historical Society. These are digitized texts that have been transcribed for user analysis.

History: This database is based on the work of award-winning historian James P. Danky (b. 1947) at the University of Wisconsin.

Info from Publisher: https://infoweb-newsbank-com.mutex.gmu.edu/apps/readex/product-help/eapx?p=EAPX

Search: Search options include simple and advanced searches with filters based on place of publication, date ranges, location, "eras in American History," and presidential administrations, as well as a text explorer. The text explorer is designed as a three-step process: the user searches for a topic, selects any relevant documents, and then analyzes those documents by the frequency of words, people, phrases, and the like. Once the user has found their documents, they can export them through a built-in email service.

Citations: One can use the following citation styles on the site: MLA, APA, AGLC, ASA, CMS, Harvard, and Turabian. If the preferred citation style is not available on the site, the user can export the citation information into a different service or tool and edit it to fit their formatting needs.

Reviews: Based on available reviews, the database is held in high regard. The following is one example, from a University of Oxford blog: https://blogs.bodleian.ox.ac.uk/history/2020/06/18/new-african-american-periodicals-1825-1995/

Access: The database is accessible through universities that have opted in to or purchased access to the archive. It appears to be a paid service, available to students and researchers through their respective universities.