Category Archives: Reviews

Podcasts & The Digital Humanities

Podcasting as a medium has only increased in prominence and popularity over the course of the 21st century. It serves a variety of niches and interests, whether academic, comedic, news-focused, or pop-cultural. It is arguably the most accessible form of the Digital Humanities out there, with a platform built into most smartphones! This post seeks to explore this phenomenon: how podcasting sets itself apart, and what it has in common with other digital tools I am familiar with.

I want to look first at what sets podcasting apart. Taking history podcasts as an example, they usually employ modes of storytelling that I have not seen replicated in other DH projects. First, podcasts are usually audio-only (though some incorporate video of the host(s), or of sources that require visual aids). This leaves more room for oral histories and interviews, whether with reputable guests or with members of the public (usually the former). I also think the hosts and studios that produce a podcast get to cater to their audience through a variety of related topics, rather than focusing on one major theme. If a podcast has many episodes and not much continuity between them, the audience can pick and choose what to listen to. The last major thing that sets podcasting apart is its reach. Apple Podcasts and Spotify, for instance, host an unfathomable amount of content, so branding matters, but the accessibility and potential are enormous if you capture an audience. Overall, I think it is probably the most popular form of research output to come out of the Digital Humanities in the 21st century.

While podcasting is quite unique in the realm of DH, it is not so different as to exclude it entirely. Like most DH projects, it has a large reach oriented towards catered audiences, though I think it holds the most potential for general audiences. There is also great potential for interdisciplinary collaboration, as guests can shape the conversations or topics discussed in any given episode based on their expertise and what they can add to the broader conversation. Podcasts also rely on similar distribution services, existing on a server; otherwise an episode would sit on one's hard drive, inaccessible to any listener who was not sent the file(s). Lastly, the express purpose of podcasts about the Digital Humanities is to educate those willing to engage with the content. If I were to describe these elements without putting them under the umbrella of podcasting, one could point to many different projects in Omeka, kepler.gl, Palladio, and so on. There is a lot of versatility to the Digital Humanities in general, but podcasting is among my, and many others', personal favorites, whether listeners realize they are getting something out of it or not!

I would call podcasting an emerging mode of conducting public engagement and research, but the sheer volume of shows out there proves that the time for such a characterization has already passed. While there are inherent risks and dangers to podcasting, just as there are with any other mode of communication, its approach is quite unique, and one that I hope sticks around for a long time.

What Can Digital Humanities Do With Crowdsourcing?

Crowdsourcing in the Digital Humanities can entail a lot of different actions and concepts, so this post explores the effect the public can have in contributing to the Digital Humanities, whether they realize it or not! In DH, crowdsourcing is voluntary and carried out by contributing to some sort of project. This can mean a variety of things, but I want to focus on the areas where I believe the public can make the most significant difference in the field: the transcription and/or digitization of texts in existing collections run by educational or non-profit institutions, and community-driven collections.

Digitizing collections to their fullest extent can be a daunting, arduous task for one group to accomplish, so some projects seek the aid of the community; a newer example is the Library of Congress (LOC). Its "By The People" project asks anyone to transcribe text from scanned images of its collections. The types of sources vary: handwritten letters, sheet music (with lyrics and some indicators for the performers), promotional material, and more! One can participate in transcribing texts without even making an account. After an entire scan is transcribed, it is sent in for review by other users and the LOC. The project likens this process to a "puzzle" in some cases, and it is completely voluntary and non-committal.

The site is linked below if you wish to explore the collections or contribute!

Link: https://crowd.loc.gov/
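To make the workflow a bit more concrete, here is a minimal Python sketch of the transcribe-then-review pipeline described above. The class, status names, and example data are my own illustrative assumptions; they are not the actual code or data model behind "By The People."

```python
# A hypothetical sketch of a crowdsourced transcription workflow,
# loosely modeled on the transcribe -> peer review -> accept steps
# described above. Statuses, names, and sample text are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TranscriptionTask:
    image_id: str
    text: str = ""
    status: str = "not_started"   # not_started -> submitted -> reviewed
    reviewers: list = field(default_factory=list)

    def submit_transcription(self, volunteer_text: str):
        """A volunteer types out what they see in the scanned image."""
        self.text = volunteer_text
        self.status = "submitted"

    def review(self, reviewer: str, approved: bool):
        """Other users (and ultimately staff) check the submitted text."""
        self.reviewers.append(reviewer)
        self.status = "reviewed" if approved else "not_started"

# Example: one scanned letter moving through the pipeline
task = TranscriptionTask(image_id="letter-042")
task.submit_transcription("Dear Sir, I write to inform you...")
task.review(reviewer="volunteer_2", approved=True)
print(task.status)  # reviewed
```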

Another way the community engages in crowdsourcing content is by doing it themselves! There are plenty of forums and projects dedicated to involving the community by offering a place to store and showcase collections, digitally or traditionally. This can include common narratives: digitized family memories that tell a greater story, records of lineage, and ties to the military, to name a few examples. When the public feels they can contribute to something that honors memory, provides entertainment, or informs others, there is real value in contributing to a body of work or a project.

What seems challenging is selling that to the public; it is all about the framing, in other words. As I mentioned before, formal projects characterizing the transcription of scans as "puzzles" highlight an interesting approach to contributing to DH without it FEELING like one is contributing to something "boring" or perhaps "nerdy." There must be an emphasis on non-committal entertainment alongside the true value of the work. Everyone is different, which makes outreach such a meticulous endeavor.

Overall, crowdsourcing can be full of uncertainty surrounding retention and garnering an adequate audience. When one expects small contributions from hundreds of people, one may instead receive dozens of dedicated users who see the creator's vision. The Digital Humanities needs the public, and can always learn from them and innovate in involving them. Crowdsourcing is a great step towards a sort of hands-off engagement, but more and more steps are being taken to ensure that voices are heard and contributions are recognized across the Internet!

How to Read Crowdsourced Knowledge

Wikipedia has a mixed reception in the academic world, which creates an interesting conversation about the utility of crowdsourced information and its latest iteration in the form of AI. This post will explore the current state of Wikipedia and ChatGPT alike, hopefully coming to a satisfactory conclusion about how we can view them moving forward.

Link: https://en.wikipedia.org/wiki/Digital_humanities

I want to use the Wikipedia page on Digital Humanities as a basis for analysis (linked directly above this paragraph). The page has an interesting history, having been created all the way back in 2006! This was during the earlier days of the formalized Digital Humanities field, which is reflected in several places, including this page. For those unaware, the "Talk" page on a Wikipedia article (located directly under the article title) contains user discussion surrounding the page, highlighting anything from potential additions, to places where things should be revised or removed, to general discussion about the future of the article. On the page for the Digital Humanities, users have discussed a lot of revisions, but the most striking detail to me is the lack of a clear definition for the Digital Humanities, as one user pointed out during earlier iterations of the page (shown in the image below this paragraph).

Although in 2023 the page is quite thorough in highlighting criticisms, definitions, and examples of visuals and projects generated by scholars in the Digital Humanities, it was not always like this! Over the years, the contributors to this page addressed the vagueness of the Digital Humanities just as the field itself did over several decades and iterations. One can see every version of the page by exploring the "View History" tab, located on the right side of the same bar as the "Talk" page. Every edit and the reason for it appears there, and the list can be filtered by user, date, type of edit, and so on. This shows a very precise evolution of the page, and one can start to pinpoint where changes occurred if one is so inclined. Pictured below is the general "View History" page layout, along with the DH page as it was first established by user "Elijahmeeks":

All of this showcases the progression and utility of crowdsourced information, and the level of investment core users and casual editors have collectively put into a single page over the span of more than a decade! I personally use Wikipedia as a starting point for casual or academic questions I may have, looking first and foremost into the source material it cites. Depending on how one utilizes crowdsourced material, there can be a lot of benefits for users across the Internet!
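For anyone who wants to explore that edit history outside the browser, Wikipedia's public MediaWiki API exposes the same revision data that appears under "View History." Below is a minimal Python sketch (using the requests library) that lists the ten most recent edits to the Digital Humanities article; the particular fields requested here are just one reasonable choice, not the only way to query it.

```python
# Minimal sketch: pull recent revisions of the "Digital humanities" article
# from the public MediaWiki API (the same data shown on the "View History" tab).
import requests

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Digital humanities",
    "rvprop": "timestamp|user|comment",
    "rvlimit": 10,          # ten most recent edits
    "format": "json",
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params)
resp.raise_for_status()

pages = resp.json()["query"]["pages"]
for page in pages.values():
    for rev in page.get("revisions", []):
        print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```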

This brings us to AI, which essentially automates crowdsourced information that the user can interact with in different forms. In the case of ChatGPT, a free service that is only growing in popularity, one cannot see exactly what it is pulling from. When asked about this, the bot will say something along the lines of drawing on a database that includes textbooks, reputable sources, and other such places. That is vague, and there is no clear way to check its work. With Wikipedia, it is all there for the user; one can even see which user made how many edits during a specific timeframe! What is interesting is that, when asked questions about the Digital Humanities, ChatGPT gave decent answers compared to Wikipedia and the other source material I have read. It really protects its corpus, and results generated by AI are starting to pop up everywhere with varying degrees of accuracy. Even with the decent success rate in this small experiment of mine, I am not sure where that leaves its impact throughout the rest of the Internet.

To summarize my feelings concisely: I am skeptical at best, but there is potential if it is developed ethically. I see it as inevitable regardless of my feelings on the matter, so only time will tell where this new age of crowdsourcing takes us, and whether we will be begging for the days of Wikipedia and its uncertainties ten years from now. It is an interesting thought experiment, though, and I cautiously encourage you to try it for yourself on topics that you are passionate about and/or well-read in.

Comparing Digital Tools (Voyant, kepler.gl, and Palladio)

Over the last few weeks, I have been exploring a variety of tools utilized in the Digital Humanities to read, interpret, map, and visualize data. Among these tools are Voyant, kepler.gl, and Palladio. This post will explore what each program does, my experiences with all three, and how I will approach utilizing these tools moving forward. As I gain a better understanding of digital tools and the Digital Humanities as a field, it has been really interesting to observe the capabilities of these programs!

Starting with Voyant: this tool takes a submitted corpus and analyzes it through different generated visualizations, based on preset parameters and filters that one can modify to one's needs. Without modifying any of the windows, it presents five main features. The "cirrus" is essentially a word cloud. The "reader" displays the chosen text, along with the words and phrases one explores in other sections. The "trends" panel is a relative frequency graph that, by default, analyzes the frequencies of the most-used words (and can be modified to analyze specific texts and/or words). The "summary" tool identifies distinctive qualities of the submitted texts and general trends throughout the corpus by document. Finally, the "context" tool finds instances of a selected word or phrase, showing the words that precede and follow it in the sentence where it appears. Voyant shares some of the capabilities of Palladio, which will be discussed below.
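To give a rough sense of what the "cirrus" and "trends" panels are doing conceptually, here is a small Python sketch of the same basic idea: counting the most frequent words in a tiny corpus. This is only an analogy for Voyant's behavior, not its actual code, and the sample documents and stopword list are placeholders of my own.

```python
# A rough analogy for Voyant's word-frequency panels: count the most
# common words in a corpus after dropping a few common stopwords.
from collections import Counter
import re

corpus = {
    "doc1.txt": "The digital humanities combine computing with humanities research.",
    "doc2.txt": "Digital tools help humanities scholars visualize and map their sources.",
}

stopwords = {"the", "with", "and", "their", "a", "of"}  # illustrative list only

counts = Counter()
for text in corpus.values():
    words = re.findall(r"[a-z]+", text.lower())
    counts.update(w for w in words if w not in stopwords)

# Roughly what the "cirrus" word cloud ranks by:
for word, freq in counts.most_common(5):
    print(word, freq)
```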

Kepler.gl is a mapping tool that takes a set of data (traditionally .csv files generated through programs like Microsoft Excel) and maps it based on coordinates and other relevant metadata. There are a lot of different visualizations one can make with programs like kepler.gl, including heat maps, data clusters, point maps, timelines, and more! The capabilities of kepler.gl, at least in terms of what I explored, provide a lot of variety for visual storytelling. The tool is best reserved for regional analysis, as plotting points and trends on a global scale can get quite complex, especially when showing broader connections between points and the conclusions drawn from them. This is not to say a global scale is impossible, but rather that it is not the scale mapping projects typically deal with.
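Because kepler.gl reads tabular data with coordinate columns, the preparation step usually amounts to building a .csv with latitude and longitude fields. Here is a minimal Python sketch of that step; the column names, sample locations, and counts are my own illustrative choices rather than real project data.

```python
# Minimal sketch: write a .csv with latitude/longitude columns that a
# mapping tool like kepler.gl can ingest as a point layer.
# The rows and column names below are illustrative only.
import csv

rows = [
    {"name": "Montgomery, AL", "latitude": 32.3668, "longitude": -86.3000, "interviews": 12},
    {"name": "Mobile, AL",     "latitude": 30.6954, "longitude": -88.0399, "interviews": 8},
    {"name": "Birmingham, AL", "latitude": 33.5186, "longitude": -86.8104, "interviews": 15},
]

with open("interview_locations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "latitude", "longitude", "interviews"])
    writer.writeheader()
    writer.writerows(rows)
# Drag interview_locations.csv into kepler.gl to plot the points.
```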

Lastly, Palladio is a tool commonly utilized to visualize data and connect two or more parameters to each other. Palladio does have mapping capabilities like kepler.gl's, which is quite useful (although quite limited in comparison)! Its main appeal, however, lies in its variety of visualization techniques. I utilized "Network Maps" and "Network Graphs," so I will speak on the functionality of both to represent the tool as a whole. After uploading files (in this case a few .csv files), I was able to create visualizations from two parameters that I set. I did this for several pairings involving interviews with formerly enslaved people in Alabama. With the information contained in the .csv files, I was able to identify connections between the type of work that enslaved people were forced into and the topics they discussed in their interviews, among other trends in the data present in the files. Although the visualizations can get a bit messy when analyzing many different data points and themes, there are still interesting conclusions one can draw, or at the very least explore, with visual aids.
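For a rough sense of what a Palladio-style network graph encodes, here is a small Python sketch using the networkx library to connect values from one column (type of forced labor) to values from another (interview topics). The pairs below are invented for illustration and are not drawn from the actual interview files.

```python
# Rough sketch of the idea behind a Palladio-style network graph:
# link values from one CSV column (type of forced labor) to values
# from another (topics discussed). The pairs below are illustrative only.
import networkx as nx

edges = [
    ("field labor", "family separation"),
    ("field labor", "work conditions"),
    ("domestic labor", "household life"),
    ("domestic labor", "family separation"),
]

graph = nx.Graph()
graph.add_edges_from(edges)

# Degree counts hint at which topics connect the most categories,
# similar to the more connected nodes in a Palladio network graph.
for node, degree in sorted(graph.degree, key=lambda x: -x[1]):
    print(node, degree)
```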

I believe that all of these tools can pair well together in some form. Voyant is great for text-based analysis and as a starting point for finding broader trends in large corpora. Voyant and Palladio, for instance, have the potential to pair well together! Although situational, running source material through Voyant first could pre-emptively identify themes and make parsing the data quite a bit easier once it is converted and transferred to Palladio!

I believe that the pairing of kepler.gl and Palladio has the most potential of these, however. While kepler.gl is best used for mapping, Palladio better identifies patterns within the data itself. If nothing else, kepler.gl can map the .csv files while Palladio, with its more basic mapping functionality, creates the visual connections that Voyant would not be able to handle effectively in this format. Voyant could still analyze the source material itself, however, and that is where I believe its strength lies when comparing these three tools.

Overall, I believe that all three of these tools have great potential and purposes, and should be used together! Although different projects and research will call for different needs, I believe that, given the time and knowledge, these tools will be essential to a growing digital catalogue of source material and means of digitization. I assume and hope that these methods will only improve in accessibility and availability over time, so it would not hurt to familiarize oneself with the world of digital tools!

My experiences with these tools make me wonder what else is out there. My professors at George Mason University have encouraged me to familiarize myself with the Digital Humanities, so I decided to take two courses related to the topic this semester. Through these courses, I am utilizing a variety of different tools and am in the process of completing projects using online exhibition tools and other mapping programs. Both traditional means of research and ideas that are newer (at least to me), revolving around digital representations for the public, have been at the forefront of my mind all year. I have found ways to uncover new angles, narrow my research parameters, and create accessible projects for topics that I care deeply about. Digital tools, while intimidating at first, have allowed me to see the work that goes into research that targets the public, something I am greatly interested in pursuing in some form. The field today is not just museums and websites, but so much more.

George Mason University Database Review (African American Periodicals: Voices of Black society and culture, 1825-1995)

Link to Database: https://infoweb-newsbank-com.mutex.gmu.edu/apps/readex/?p=EAPX

Overview: The African American Periodicals: Voices of Black society and culture, 1825-1995 database features "news, commentary, advertisements, literature, drawings and photographs" from African American society and culture in the United States, as described on its "How to use this database" page. The content comes from the curation of "170 periodicals from 26 states," originating from collections at Harvard and the Wisconsin Historical Society. These are digitized texts that have been transcribed for user analysis.

History: This database is based on the work of award-winning historian James P. Danky (b. 1947) at the University of Wisconsin.

Info from Publisher: https://infoweb-newsbank-com.mutex.gmu.edu/apps/readex/product-help/eapx?p=EAPX

Search: Search options include simple and advanced searches, with filters based on origin of publication, date range, location, "eras in American History," and presidential administrations, as well as a text explorer. The text explorer is designed as a three-step process: the user searches for a topic, selects any relevant documents, and then analyzes those documents by frequency of words, people, phrases, and other things of that nature. Once users have found their documents, they can export them through a built-in email service.

Citations: The site supports the following citation styles: MLA, APA, AGLC, ASA, CMS, Harvard, and Turabian. If the preferred citation style is not available on the site, the user can export the citation information into a different service or tool and edit it to fit their formatting needs.

Reviews: Based on available reviews, the database is held in high regard. The following is one example of a review from a University of Oxford blog: https://blogs.bodleian.ox.ac.uk/history/2020/06/18/new-african-american-periodicals-1825-1995/

Access: The database is accessible through universities that have opted in and purchased access to the archives. It appears to be a paid service that students and researchers reach through their respective universities.