Brandon Walsh

Collaborative Writing to Build Digital Humanities Praxis

[The following is the rough text of my short paper given at the 2017 Digital Humanities conference in Montréal.]

title slide

Thanks very much for having me today! I’m Brandon Walsh, Head of Graduate Programs in the Scholars’ Lab at the University of Virginia Library. I’ll be talking a bit today about “Collaborative Writing to Build Digital Humanities Praxis.” Since the subject here is collaboration I wanted to spend a few minutes here on my collaborators.

thank you slide

This work began during my previous position at Washington and Lee University’s library. My principal collaborator was, and remains, Professor Sarah Horowitz of Washington and Lee University. We conceived the project together, co-taught the associated course, and her writing figures prominently in the project I will describe. The other names here are individuals, institutions, or projects that figure explicitly in the talk, whether they know it or not. You can find a Zotero collection with the resources mentioned during the talk here.

So. To begin. Emergent programs like those associated with the Praxis Network have redefined the possibilities for digital humanities training by offering models for project-based pedagogy. These efforts provide innovative institutional frameworks for building up and sharing digital skills, but they primarily focus on graduate or undergraduate education. They tend to think in terms of students. The long-term commitments that programs like these require can make them difficult to adapt for the professional development of other librarians, staff, and faculty collaborators. While members of these groups might share deep interests in undertaking such programs themselves, their institutional obligations often prevent them from committing the time to such professional development, particularly if the outcomes are not immediately legible within their own structures of reporting. I argue that we can make such praxis programs viable for broader communities by expanding the range of their potential outcomes and forms. In particular, I want to explore the potential for collaborative writing projects to develop individual skillsets and, by extension, the capacity of digital humanities programs.

coursebook site

While the example here focuses on a coursebook written for an undergraduate audience, I believe the model and set of pedagogical issues can be extrapolated to other circumstances. By considering writing projects as potential opportunities for project-based development, I argue that we can produce professionally legible outcomes that both serve institutional priorities and prove useful beyond local contexts.

The particular case study for this talk is an open coursebook written for a course on digital text analysis (Walsh and Horowitz, 2016). In the fall of 2015, Professor Sarah Horowitz, a colleague in the history department at Washington and Lee University, approached the University Library with an interest in digital text analysis and a desire to incorporate these methods in her upcoming class. She had a growing interest in the topic, and she wanted support to help her take these ideas and make them a reality in her research and teaching. As the Mellon Digital Humanities Fellow working in the University Library, I was asked to support Professor Horowitz’s requests because of my own background working with and teaching text analysis. Professor Horowitz and I conceived of writing the coursebook as a means by which the Library could meet her needs while also building the capacity of the University’s digital humanities resources. The idea was that, rather than offer her a handful of workshops, the two of us would co-author materials that could then be used by Professor Horowitz later on. The writing of these materials would be the scene of the teaching and learning. Our model in this regard was an initiative undertaken by the Digital Fellows at the CUNY Graduate Center, whose fellows produce documentation and shared digital resources for the wider community. We aimed to expand upon their example, however, by making collaborative writing a centerpiece of our pedagogical experiment.

tech stack

We included Professor Horowitz directly in the creation of the course materials, a process that required her to engage with a variety of technologies central to a certain kind of web publishing workflow: command line, Markdown, Git, and GitHub. We produced the materials on a platform called GitBook, which provides a handy interface for writing that invokes many elements of this tech stack in a non-obtrusive way. Its editor allows you to write in Markdown and previews the resultant text for you, but it also responds to the standard slew of MS Word keyboard shortcuts that many writers are familiar with. In this way we were able to keep the focus on the writing even as we slowly expanded Professor Horowitz’s ability to work directly with these technologies. From a writing standpoint, the process also required synthesis of both text analysis techniques and disciplinary material relevant to a course in nineteenth-century history. I provided the former; Professor Horowitz would review and critique as she added the latter; then I would review her additions, and so on. The result, I think, is more than either of us could have produced on our own, and we each learned a lot about the other’s subject matter. One outcome of the collaboration is that, after co-writing the materials and teaching the course together, Professor Horowitz is prepared to offer the course herself in the future without the support of the library. We now also possess course materials that, through careful structuring and selection of platforms, could be reusable in other courses at our own institution and beyond. In this case, we tried to take special care to make each lesson stand on its own and to compartmentalize each topic according to the various parts of each class workshop. One section would introduce a topic from a theoretical standpoint, the next would offer a case study using a particular tool, and the last would offer exercises specific to our course. We hoped this structuring would make it easy for the work to be excerpted and built upon by others for their own unique needs.

table of contents

Writing collaborations such as these can fit the professional needs of people in a variety of spaces in the university. Course preparation, for example, often takes place behind the scenes and away from the eyes of students and other scholars. You tend to only see the final result as it is performed with students in a workshop or participants in a class. With a little effort, this hidden teaching labor can be transformed into openly available resources capable of being remixed into other contexts. We are following here on the example of Shawn Graham (2016), who has illustrated through his own resources for a class on Crafting Digital History that course materials can be effectively leveraged to serve a wider good in ways that still parse in a professional context. In our case, the collaboration produced public-facing web writing in the form of an open educational resource. The history department regarded the project as a success for its potential to bring new courses, skills, and students into the major as a result of Professor Horowitz’s training. The University Library valued the collaboration for its production of open access materials, development of faculty skills, and exploration of workflows and platforms for faculty collaboration. We documented and managed the writing process in a GitHub repository.

GitHub repository

This versioned workflow was key to our conception of the project, as we hoped to structure the project in such a way that others could copy down and spin up their own versions of the course materials for their own needs. We were careful to compartmentalize the lessons according to their focus on theory, application, or course exercises, and we provided documentation to walk readers through the technical process of adapting the book to reflect their own disciplinary content. We wrote reasonably detailed directions aimed at two different audiences - those with a tech background and those without. We wanted people to be able to pull down, tear apart, and reuse those pieces that were relevant for them. We hoped to create a mechanism by which readers and teachers could iterate using our materials to create their own versions.

Adapting the Book

Writing projects like this one provide spaces for shared learning experiences that position student and teacher as equals. By writing in public and asking students and faculty collaborators to discuss, produce, and revise open educational resources, we can break down distinctions between writer and audience, teacher and student, programmer and non-programmer. In this spirit, work by Robin DeRosa (2016) with the Open Anthology of Earlier American Literature and Cathy Davidson with HASTAC has shown that students can make productive contributions to digital humanities research at the same time that they learn themselves. These contributions offer a more intimate form of pedagogy – a more caring and inviting form of building that can draw newcomers into the field by way of non-hierarchical peer mentoring. It is no secret that academia contains “severe power imbalances” that adversely affect teaching and the lives of instructors, students, and peers (McGill, 2016). I see collaborative writing as helping to create shared spaces of exploration that work against such structures of power. They can help to generate what Bethany Nowviskie (2016) has recently advocated as a turn towards a “feminist ethics of care” to “illuminate the relationships of small components, one to another, within great systems.” By writing together, teams engage in what Nowviskie (2011) calls the “perpetual peer review” of collaborative work. Through conversations about ethical collaboration and shared credit early in the process, we can privilege the voice of the learner as a valued contributor to a wider community of practitioners even before they might know the technical details of the tools or skills under discussion.

Collaborative writing projects can thus serve as training in digital humanities praxis: they can help introduce the skills, tools, and theories associated with the field, and projects like ours do so in public. Productive failure in this space has long been a hallmark of work in the digital humanities, so much so that “Failure” was listed as a keyword in the new anthology Digital Pedagogy in the Humanities (Croxall and Warnick, 2016). Writing in public carries many of the same rewards – and risks. Many of those new to digital work, in particular, rightfully fear putting their work online before it is published. The clearest way in which we can invite people into the rewards of public digital work is by sharing the burdens and risks of such work. In her recent work on generous thinking, Kathleen Fitzpatrick (2016) has advocated for “thinking with rather than reflexively against both the people and the materials with which we work.” By framing digital humanities praxis first and foremost as an activity whose successes and failures are shared, we can lower the stakes for newcomers. Centering this approach to digital humanities pedagogy in the practice of writing productively displaces the very digital tools and methodologies that it is meant to teach. Even if the ultimate goal is to develop a firm grounding in a particular digital topic, focusing on the writing invites students and collaborators into a space where anyone can contribute. By privileging the writing rather than technical skills as the means of engagement and ultimate outcome, we can shape a more inviting and generous introduction to digital humanities praxis.

References

  • Croxall, B. and Warnick, Q. (2016). “Failure.” In Digital Pedagogy in the Humanities: Concepts, Models, and Experiments. Modern Language Association.
  • DeRosa, R. (2016). “The Open Anthology of Earlier American Literature.” https://openamlit.pressbooks.com/.
  • Fitzpatrick, K. (2016). “Generous Thinking: The University and the Public Good.” Planned Obsolescence. http://www.plannedobsolescence.net/generous-thinking-the-university-and-the-public-good/.
  • Graham, S. (2016). “Crafting Digital History.” http://workbook.craftingdigitalhistory.ca/.
  • McGill, B. (2016). “Serial Bullies: An Academic Failing and the Need for Crowd-Sourced Truthtelling.” Dynamic Ecology. https://dynamicecology.wordpress.com/2016/10/18/serial-bullies-an-academic-failing-and-the-need-for-crowd-sourced-truthtelling/.
  • Nowviskie, B. (2011). “Where Credit Is Due.” http://nowviskie.org/2011/where-credit-is-due/.
  • ——— (2016). “Capacity Through Care.” http://nowviskie.org/2016/capacity-through-care/.
  • Ramsay, S. (2010). “Learning to Program.” http://stephenramsay.us/2012/06/10/learning-to-program/.
  • ——— (2014). “The Hermeneutics of Screwing Around; or What You Do with a Million Books.” In Pastplay: Teaching and Learning History with Technology, edited by Kevin Kee. University of Michigan Press. http://hdl.handle.net/2027/spo.12544152.0001.001.
  • The Praxis Network (2017). University of Virginia Library’s Scholars’ Lab. http://praxis-network.org/.
  • Walsh, B. and Horowitz, S. (2016). “Introduction to Text Analysis: A Coursebook.” http://www.walshbr.com/textanalysiscoursebook.

Remixing the Sound Archive: Cut-up Poetry Recordings

[Recently I spoke at NEMLA 2017 with Ken Sherwood and Chris Mustazza. The panel was on “Pedagogy and Poetry Audio: DH Approaches to Teaching Recorded Poetry/Archives,” and my own contribution extended some past experiments with using deformance as a mode of analysis for audio recordings. The talk was given from notes, but the following is a rough recreation of what took place.]

Robust public sound archives have made a wide variety of material accessible to students and researchers for the first time, and they provide helpful records of the history of poetic performance throughout the past century. But they can also appear overwhelming in their magnitude, particularly for students: where to begin listening? How to begin analyzing any recording, let alone multiple recordings in relation to each other? This talk argues that we can help students start to explore these archives if we think about them as more than just an account of past performances: sound collections can provide the materials for resonant experiments in audio composition. I want to think about new ways to explore these archives through automatic means, through the use of software that algorithmically explores the sound collection as an object of study by tampering with it, dismantling it, and reassembling it. In the process, we might just uncover new interpretive dimensions.

This talk thus models an approach to poetry recordings founded in the deformance theories of Jerome McGann and Lisa Samuels and the cut-up techniques of the Dadaists. I prototype a pair of class assignments that ask students to slice up audio recordings of a particular poet, reassemble them into their own compositions, and reflect on the process. These acts of playful destruction and reconstruction help students think about poems as constructed sound objects and about poets as sound artists. By diving deeply into the extant record for a particular poet, students might produce performative audio essays that enact a reading of that artist’s sonic patterns. By treating sound archives as the raw ingredients for poetic remixes, we can explore and remake sound objects while also gaining new critical insight into performance practices. In the process of remixing the sound archive, we can encourage students to engage more fully with it. And while I frame this in terms of student work and pedagogy given the topic of the panel, it should become clear that I think of this as a useful research practice in its own right.

I will frame the interventions I am making and the theoretical frameworks behind them before proposing two different models for how to approach such an assignment, depending on the instructor’s own technical ability and pedagogical goals: one model that uses Audacity and another that uses Python to cut and reassemble poetry recordings. I will demonstrate example compositions from the latter. It will get weird.

provocations for the talk

There are two provocations at the center of my talk founded in an assumption about the way students hear poetry recordings. In my experience, they often hear recordings not as sound artifacts but as representations of text. They might come to these recordings looking to hear the poet herself speak, or they might be looking to get new perspectives on the poem. But they fundamentally are interested in hearing a new version of a printed text, in hearing these things as analogues to print. This is all well and good - the connection to a text is clearly a part of what makes poetry recordings special, but I think our challenge as teachers and thinkers of poetry is to help students surface the sounded quality of the artifacts, to learn to dig deeper into digital sound in particular.

provocations 2 - work in the medium

My approach to this need - the need to get students to look beyond the text and towards the sound - is to get them working with these materials as heard objects. We are going to engage them in the medium. They are going to get their hands dirty. We are going to take sound - which might seem abstract and amorphous - and make it something they can touch, take apart, and reassemble. They are going to think about sound as something concrete and constructed by engaging in that very act of construction.

audacity icon slide

One approach to this might be to use a tool like Audacity. If you’re not familiar, Audacity is an open source tool that lets you input sound clips and then edit them in a pared down interface. If you have an MP3 on your computer, right click it to open it in Audacity, and you will get something like this:

waveform in audacity

A waveform. Already we are a bit alienated from the text because this visualization does not really allow you to access the text of the poem as such. Interacting with the poem becomes akin to touching a visual representation of sound waves. Now, you can’t do everything in Audacity, and that’s what I like about it. When I was a music student in college I remember getting introduced to some pretty beefy sound software - Pro Tools and Digital Performer. I also remember feeling pretty overwhelmed by what they had to offer. So many options! Hundreds and thousands of things to click on! What I like about Audacity is that it is a bit more stripped down. Instead of giving you all the potential options for working with sound, it does a smaller subset really well. Record, edit, mix, etc. Audacity is also open source, so it is free while the others are quite expensive.

first assignment in audacity

I would suggest having your students engage with recordings using this software. Here is an example assignment that asks them to put together an audio essay. Using Audacity, I would have them assemble their own sound recording that mixes in examples from other poetry recordings under examination. You might frame the exercise by having them work through a tutorial on editing audio with Audacity that I put together for The Programming Historian. The Programming Historian provides tutorials for a variety of digital humanities tools and methods, so this piece on Audacity is meant for absolute beginners. It coaches people through working with the interface, and, over the course of the lesson, readers produce a small podcast.

The lesson asks readers to use a stub Bach recording, but I would adapt it to have students assemble an essay that analyzes a poetic sound recording relevant to the course material. Instead of writing a paper on a recording, the students actually integrate their audible evidence into a new sound object, assembling the podcast by hand. Citation and description can join together in this model, and I can imagine having a student pay close attention to the audible qualities of the sounds they are discussing. The sky is the limit, but, personally, I like to imagine students analyzing T.S. Eliot’s voice by mimicking his style. Or you could imagine an analysis of his accent that tries to position his sense of locality, nation, and the globe by examining clips from a number of his recordings alongside clips of other speakers from around the world.

pros and cons of audacity

I see several benefits to having students work in this way. Students could learn a lot about Audacity by producing these kinds of audio essays, and anyone planning to work with audio in the future should possess some experience with this fundamental tool. The wealth of resources for Audacity means that it is well-suited for beginners. There are far more substantial tutorials for the software besides my own, so students would be well supported in taking on this reasonably intuitive interface.

But there are also limitations here. For one, this type of engagement is a really slow process. Your students, after all, are engaging with a medium that they can only really experience in time. If you are working with four hours of recordings, you really have to have listened to all or most of that sound to work with it in a meaningful way. And to make anything useful, they will probably want to have listened to it multiple times and have made some notes. That is an extraordinary amount of time and energy, and we might be able to do better.

In addition, the assembling process is deliberate. You are asking students to put together clips bit by bit in accordance with a particular reading. And this is the real problem that I want to address - the medium here is unlikely to show you anything new. It is meant to illustrate a reading you already have. You want to produce an interpretation, so you illustrate it with sound. The theory comes first - the praxis second.

So I want to ask: what are some other ways we can work with audio that might show us truly new things? And how can we get around the need to listen slowly? The work by Tanya Clement and HiPSTAS offers compelling examples of distant listening and audio machine learning as answers to these questions. I want to offer an approach based on creativity and play. By embracing chance-based composition techniques at scale, we can start to develop more useful classroom assignments for audio.

deformance quotation

In shifting the interpretative dimensions in this way, I am drawing on an idea that comes from Jerome McGann and Lisa Samuels: deformance. At the heart of their essay on “Deformance and Interpretation” is a quote from Emily Dickinson:

“Did you ever read one of her Poems backward, because the plunge from the front overturned you? I sometimes (often have, many times) have — a Something overtakes the Mind –”

McGann and Samuels take her very literally, and they proceed to model how reading a poem backwards, line by line, can offer generative readings. This process illustrates two main ideas. The first is that reading destructively and deformatively in this way exposes new parts of the poetry that you might not otherwise notice. You get a renewed sense of the constructedness of a poem, and its materiality rises to the surface. By reshaping, warping, or demolishing a poem, you actually learn something about its material components and, thus, about the original poem itself. The second idea is that all acts of interpretation remake their objects of study in this way. By interpreting, we change both the poem and our sense of it. Destructive reading performs this process in the fullest sense by enacting an interpretation that literally changes the shape or nature of the object. For a fuller history and a more satisfying explanation of the interpretive dimensions of audio deformance for research, check out “Vocal Deformance and Performative Speech, or In Different Voices!” posted by Marit J. MacArthur and Lee M. Miller over at Sounding Out. They also work with T.S. Eliot, though they are working with recordings by him rather than those made by amateur readers.

cut up poetry

In thinking about this performative form of reading, I was struck at how similar it sounded to cut-up poetry, the practice of slicing apart and rearranging the text of a poem so as to create new materials as popularized by the Dadaists and William S. Burroughs. To make the link to the Audacity assignments I was discussing earlier, I became interested in how this kind of performative, random, and destructive form of reading might extend the experiments in listening that I discussed with Audacity. Rather than having students purposefully rearrange a sound recording themselves, perhaps we could release our control over the audio artifact. We would still engage students in the texture of the medium, but we would ask them to let their analysis and their manifestation of that thinking grow a little closer together. We would invite play and the unknown into the process.

provocations

So we will ask them to engage with the material aspects of poetry by interacting with it as a physical, constructed thing. But the engagement will be different. We will have them engage. We will have them warp. We will have them cut up. Rather than using scissors, we will let a computer program do the slicing for us. The algorithm that goes into that program will offer our interpretive intervention, and we will surrender control over it just a bit, with the understanding that doing so will offer up new interpretive dimensions later on.

python as a solution

My approach was to use computer programs written in Python, which allowed me to work around some of the limitations I already noted for Audacity. By working with a computer program I was able to produce something that could read hours of audio far more quickly than I could. Python also allowed me to repurpose extant audio software packages to manipulate the audio according to my own algorithms and interpretations. In this case, I was working with Audiogrep and Pydub. I did not need to reinvent the wheel, as I could let these packages do the heavy lifting for me. In fact, a lot of what I did here was manipulate the extant documentation and code examples for the tools in ways that felt intellectually satisfying. The programming became an interpretive intervention in its own right that, as I will show, brought with it all sorts of serendipitous problems. All the code I used is available as a gist; it took some tinkering to get running, and it will not run for you out of the box without some manipulation. So feel free to get in touch if you want to try these things yourself. I can offer lessons learned!

In working with these tools, it quickly became clear that I needed to spend time exploring their possibilities, playing with them to see what I could do. In that spirit, I will do something a bit different for the remainder of this piece. Rather than give an assignment example up front, I will share some of the things you can do to audio with Python and why they might be meaningful from an interpretive standpoint. Then I will offer reflections at the end. My workflow was as follows:

  1. I assembled a small corpus of sound artifacts – in this case, all the recordings of The Waste Land recorded by amateur readers on LibriVox.
  2. I installed and configured the packages to get my Python scripts running.
  3. Then I started playing, exploring all the options these Python packages had; a first experiment along those lines is sketched below.
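
To give a sense of what that playing looked like, here is a minimal sketch of a first experiment. It is not the talk’s actual code: it assumes the pydub package is installed, that ffmpeg is available on the system, and that a hypothetical LibriVox file named waste_land_01.mp3 sits in the working directory. It simply loads a recording and, in a literal-minded nod to Dickinson’s backwards reading, plays a slice of it in reverse.

# A minimal first experiment with pydub (a sketch, not the talk's actual code).
# Assumes pydub is installed, ffmpeg is on the system, and "waste_land_01.mp3"
# is a hypothetical LibriVox recording in the working directory.
from pydub import AudioSegment

recording = AudioSegment.from_mp3("waste_land_01.mp3")

# pydub slices in milliseconds, so this grabs the first thirty seconds.
opening = recording[:30 * 1000]

# reverse() plays the clip backwards - reading the poem backward, as Dickinson suggests.
backwards = opening.reverse()
backwards.export("waste_land_backwards.mp3", format="mp3")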

The first step of this process involves having the script read in all the recordings so that they can be transcribed. To do so, Audiogrep calls another piece of software called Pocketsphinx behind the scenes. The resulting transcriptions look like this:

if it is a litter box recording
<s> 4.570 4.590 0.999800
if 4.600 4.810 0.489886
it 4.820 4.870 0.127322
is 4.880 4.960 0.662690
a 4.970 5.000 0.372315
litter 5.010 5.340 0.939406
box 5.350 5.740 0.992825
recording(2) 5.750 6.360 0.551414

The results show us that audio transcription, obviously, is a vexed process, just as OCR is a troubled way of interacting with print text. What you see here is a segment from the transcription along with a series of words that the program thinks it heard. In this case, the actual audio “This is a LibriVox recording” is heard by the computer as “If it is a litter box recording.” Although my cat might be proud, this shows pretty clearly that the process of working algorithmically is inaccurate. Here, listening with Python exposes what Ryan Cordell or Matthew Kirschenbaum might describe as the traces that digital methods leave on the artifacts as we work with them. Here is a longer excerpt of the Audiogrep transcription for this particular recording of The Waste Land:

the wasteland
i t. s. eliot
if it is a litter box recording
oliver box recordings are in the public domain
for more information or to volunteer
these visits litter box dot org
according my elizabeth client
the wasteland
i t. s. eliot
section one
ariel of the dead
april is the cruelest month
reading lie lacks out of the dead land mixing memory and designer
staring down roots with spring rain
winter kept us warm
having earth and forgetful snow
feeding a little life with tribes two birds

Lots of problems here. We could say that to listen algorithmically is to entwine signal with noise, and, personally, I think this is great! From an interpretive standpoint, this exposes artifacts from the remaking process and shows how each intervention in the text remakes it. In a deformance theory of interpretation, you cannot work with a text without changing it, and the same is true of audio. In this case, the object literally transforms. Also note that multiple recordings will be transcribed differently. Every attempt to read the text through Python produces a new text, right in line with the performative interpretations that McGann and Samuels describe. Regional accents would produce new and different texts depending on the program’s ability to map them onto recognized words.

But you can do much more than just transcribe things with Python. When this package transcribes words, it tags each word with a timestamp. So you can disassemble and reassemble the text at will, using these timestamps as hooks for guiding the program. Rather than painstakingly assembling readings by hand, you could search across the recording in the same way that you might a text file. Here is an example of what you can do with one of Audiogrep’s baked-in functions - you can create supercuts of a single word or cluster of related words:

Sam Lavigne has other examples of similar audio mashups on his site describing what you can do with Audiogrep. In this case, I’ve searched across all the recordings for instances of “sound” and “voice” and mashed up all those instances. You can also use regular expressions to search, allowing for pretty complicated ways of navigating a recording. Keep in mind that this is only searching across the transcriptions, which we already noted were inaccurate. So it is proper to say that this method is not telling you something about the text so much as about the recordings themselves. The program is producing a performative reading of what it understands the texts behind the audio to be. The script allows you to compare multiple recordings in a particular way that would be pretty painstaking to do by hand, but the process is imperfect and prone to error. Still, I find this to be a useful tool for collating the intonations and cadences of different readers. I am particularly interested in how amateur readers perform and re-perform the text in their own unique ways. This method allows me to ask, for example, whether all readers sonically interpret a particular line in the same or different ways.
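
To make the mechanics a bit more concrete, here is a rough sketch of the supercut idea written with pydub rather than with Audiogrep’s own internals. It assumes a transcript in the word/start/end/confidence format shown above (with times in seconds) and a matching MP3; the file names are hypothetical placeholders, and a fuller version would search with regular expressions, as Audiogrep does.

# A sketch of the supercut idea (not Audiogrep's own implementation).
# Assumes an Audiogrep-style transcript and its matching MP3; file names are placeholders.
from pydub import AudioSegment

def parse_transcript(path):
    # Yield (word, start, end) tuples from an Audiogrep-style transcript file.
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue
            word, start, end, confidence = parts
            try:
                start_s, end_s = float(start), float(end)
            except ValueError:
                continue
            yield word, start_s, end_s

def supercut(audio_path, transcript_path, target):
    # Stitch together every clip whose transcribed word begins with the target string.
    recording = AudioSegment.from_mp3(audio_path)
    clips = AudioSegment.empty()
    for word, start, end in parse_transcript(transcript_path):
        if word.lower().startswith(target):
            clips += recording[int(start * 1000):int(end * 1000)]  # pydub slices in milliseconds
    return clips

cut = supercut("waste_land_01.mp3", "waste_land_01.transcription.txt", "sound")
cut.export("sound_supercut.mp3", format="mp3")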

You can also have the program create new sound artifacts of your own, drawing upon the elements of the originals. Since we have all the transcriptions, we can also create performative readings of our own. Rather than getting all instances of one word, we can put together a new text and have it spoken through the individual sound clips drawn from our input. Before playing the result, read through what I wrote.

Approaching sound in this way allows our students to reconstitute their own ideas through the very sound artifacts that they are studying. In so doing, they learn to consider them as sound, as material objects that can be turned over, re-examined, disrupted, and reassembled. But look at how much is gone. How much gets lost. The recording is notable for its absences, its gaps.

That should give you a hint about what kind of recording is about to come out. What follows is the program’s best attempt to recreate my passage using only words spoken by LibriVox readers as they perform Eliot’s text.

The recording itself performs the idea, which is that working in Python in this way produces a reading that is somewhat out of the user’s control. You cannot really account ahead of time for what will be warped and misshapen, but some distortion is inevitable. When we pass the program the passage, it searches through for instances of each word and tries to reassemble them into a whole. The reading becomes the recording. But we are asking the computer to do something when it does not have all the elements it needs to complete the task - we have asked it to build a house, but we have given it no material for doors or windows. The result is a garbled mess, but you can still make out a few words here and there that are familiar. We hear a few things that are recognizable, but we also get a lot of silences and noise, what we might think of as the frictions produced by the gaps between what the script recognizes correctly and what it does not. The result is sound art as much as sound interpretation.
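
A simplified sketch of that reassembly, again in pydub and again with hypothetical file names, might look like the following. It is not the exact script behind the recording above: it builds a lookup of transcribed words, splices in a clip for each word of the new passage it can find, and falls back on a short silence wherever the archive offers no material for doors or windows.

# A simplified sketch of the reassembly idea; file names are hypothetical placeholders.
from pydub import AudioSegment

def build_index(transcript_path):
    # Map each transcribed word to the (start, end) of its first occurrence, in seconds.
    index = {}
    with open(transcript_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue
            word, start, end, confidence = parts
            try:
                index.setdefault(word.lower(), (float(start), float(end)))
            except ValueError:
                continue
    return index

def speak(passage, audio_path, transcript_path):
    # "Read" a new passage through the recording, one word-clip at a time.
    recording = AudioSegment.from_mp3(audio_path)
    index = build_index(transcript_path)
    result = AudioSegment.empty()
    for word in passage.lower().split():
        if word in index:
            start, end = index[word]
            result += recording[int(start * 1000):int(end * 1000)]
        else:
            result += AudioSegment.silent(duration=250)  # a gap where no material exists
    return result

reading = speak("april is the cruelest month", "waste_land_01.mp3", "waste_land_01.transcription.txt")
reading.export("reassembled.mp3", format="mp3")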

One last one. This one is a bit frightening.

While trying to mash up all the silences in the recording to get a supercut of people breathing, I made a conversion error. Because of my mistake, I accidentally dropped a millisecond every five or six milliseconds in the recording rather than dropping only the pieces of spoken word. From this I learned how to make any recording sound like a demon. I think moments of serendipity like this are crucial, because they expose the recording as a recording. This effect almost sounds analogous to the sort of artifacts that might get accidentally created in the recording process. The process of approaching the recording leaves nothing unchanged, but, if we are mindful of these transformations, we can use them in the service of discovery.
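
For the curious, a rough reconstruction of that happy accident might look something like the sketch below, which deliberately repeats the mistake: instead of removing only the silences, it drops one millisecond out of every six across a stretch of the recording. The file name is again a hypothetical placeholder, and the chunk-by-chunk concatenation is slow, so the sketch only warps the first thirty seconds.

# A rough reconstruction of the accidental "demon" effect, assuming pydub and a placeholder file.
from pydub import AudioSegment

recording = AudioSegment.from_mp3("waste_land_01.mp3")
opening = recording[:30 * 1000]  # keep the experiment short; this loop is slow on long files

warped = AudioSegment.empty()
for i in range(0, len(opening), 6):
    # keep the first five milliseconds of each six-millisecond chunk and drop the last
    warped += opening[i:i + 5]

warped.export("waste_land_demon.mp3", format="mp3")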

reviewing what you can do with Python

So, to review: you can use Python to create supercuts of particular words, to perform readings of a text, to expose artifacts from the recording and transcription process, or to create demons. My assignment for Python, then, might go something like this.

sample (joke) assignment with Python audio

Take some recordings and play around. I think you get the most out of a research method like this by letting the praxis generate the theory. Then let the outcomes reflect and revise the theory. Your students can serendipitously learn new things from these sorts of experiments, even if they might seem silly. Instead of shying away from the failures involved in transforming sound recordings into transcriptions and back again, I propose that we take the Joycean view that “errors are volitional and are the portals of discovery.” The exercise could ask you to consider the traces of digital remediation that are present in the artifacts themselves. Or, it could generate a discussion of the regionalism and accents of the LibriVox participants that threw the transcriber off. To go further (on the excellent question/suggestion of a NEMLA audience member), this process could expose the fact that, in transcribing audio, the program favors particular pronunciations and silences those voices that do not accord with its linguistic sense of “proper” English. You might get a new sense of the particular vocabulary of recorded words in a text, and what it leaves out. Or you might get a renewed sense of how interpretation is a two-way street that changes our texts as we take them in.

pros and cons of using python for this

So Python offers some robust ways of working with audio through some packages that are ready to go. This lets you scale up quickly and try new things, but it is worth noting that these methods require far more technical overhead than using Audacity. For this reason, some training wheels might be in order. Depending on your course, it might be too much to ask students to program in Python from scratch for an assignment like this. So you might offer them starter functions or detailed guides so they do not need to implement the whole thing themselves. The hands-off approach here might be more than some instructors or researchers are willing to allow. Furthermore, while I do think these methods are appropriate for scaling up an examination of audio recordings to compare many different audio artifacts, there are important limitations in the Python audio packages that, without significant tinkering, constrain the size of the audio corpus you can work with.

For me, though, the possibilities of these approaches are generative enough to be worth working around these limitations. Methods like these are useful for exposing students to sound archives as more than just pieces of cultural history: they are also materials to be used, re-used, and remixed into their own work. I have deliberately chosen as my examples here only recordings of texts as given by amateur readers to suggest that these materials have always been performed and re-performed. The assignments above ask students to place themselves in this tradition of recreation, and the approach invites them to view these interventions with a sense of exploration and humor.

What Should You Do in a Week?

[Crossposted to the Scholars’ Lab blog.]

For the past several years, I’ve taught a Humanities Programming course at HILT. The course was piloted by Wayne Graham and Jeremy Boggs, but, these days, I co-teach the course with Ethan Reed, one of our DH fellows in the Scholars’ Lab. The course is a soup-to-nuts introduction to the kinds of methods and technologies that are useful for humanities programming. We’re changing the course a fair amount this year, so I thought I’d offer a few notes on what we’re doing and the pedagogical motivations for doing so. You can find our syllabus, slides, resources, and more on the site.

We broke the course down into two halves:

  • Basics: command line, Git, GitHub, HTML/CSS
    • Project: personal website
  • Programming concepts: Ruby
    • Project: Rails application deployed through Heroku and up on GitHub

In the first half, people learned the basic stack necessary to work towards a personal website, then deployed that site through GitHub Pages. In the second half, students took in a series of lessons about Ruby syntax, but the underlying goal was to teach them the programming concepts common to a number of programming languages. Then, we shifted gears and had them work through a series of Rails tutorials that pushed them towards a real-life situation where they were working through and on a thing (in this case, a sort of platform for crowdsourcing transcriptions of images).

I really enjoyed teaching the Rails course, and I think there was a lot of good in it. But over the past few years it has raised a number of pedagogical questions for me:

  • What can you reasonably hope to teach in a week-long workshop?
  • Is it better to do more with less or less with more?
  • What is the upper-limit on the amount of new information students can take in during the week?
  • What will students actually use/remember from the course once the week is over?

To be fair, week-long workshops like this one often raise similar concerns for me. I had two main concerns about our course in particular.

The first was a question of audience. We got people of all different skill levels in the course. Some people were there to get going with programming for the first time. These newcomers often seemed really comfortable with the course during the first half, while the second half of the course could result in a lot of frustration when the difficulty of the material suddenly seemed to skyrocket. Other students were experienced developers with several languages under their belt who were there specifically to learn Rails. The first half of the course seemed to be largely review for this experienced group, while the second half was really what they were there to take on.  It’s great that we were able to pull in students with such diverse experiences, but I was especially concerned for the people new to programming who felt lost during the second half of the course. Those experienced folks looking to learn Rails? I think they can probably find their way into the framework some other way. But I didn’t want our course to turn people off from programming because the presentation of the material felt frustrating. We can fix that. I always feel as though we should be able to explain these methods to anyone, and I wanted our alumni to feel that they were empowered by their new experiences, not frustrated. I wanted our course to reflect that principle by focusing on this audience of people looking for an introduction, not an advanced tutorial.

I also wondered a lot about the outcomes of the course. I wondered how many of the students really did anything with web applications after the course was over. Those advanced students there specifically for Rails probably did, and I’m glad that they had tangible skills to walk away with. But, for the average person just getting into digital humanities programming, I imagine that Rails wasn’t something they were going to use right away. After all, you use what you need to do what you need. And, while Rails gives you a lot of options, it’s not necessarily the tool you need for the thing in front of you - especially when you’re starting out.

So we set about redesigning the course with some of these thoughts in mind and with a few principles:

  • Less is more.
  • A single audience is better than many.
  • If you won’t use it, you’ll lose it.

I wondered how we might redesign the course to better reflect the kinds of work that are most common to humanists using programming for their work. I sat down and thought about common tasks that I use programming for beyond building apps/web services. I made a list of some common tasks that, when they confront me, I go, “I can write a script for that!” The resulting syllabus is on the site, but I’ll reiterate it here. The main changes took place in the second half of the course:

  • Basics: command line, git, GitHub, HTML/CSS
    • Project: personal website
  • Programming concepts: Python
    • Project(s): Applied Python for acquiring, processing, and analyzing humanities data

The switch from Ruby to Python reflects, in part, my own changing practices, but I also find that the Pythonic syntax enforces good stylistic habits in learners. In place of working on a large Rails app, we keep the second half of the course focused on daily tasks that programming is good for. After learning the basic concepts from Python, we introduce a few case studies for applied Python. Like all our materials, these are available on our site, but I’d encourage interested folks to check out the Jupyter notebooks for these units in particular. These are the new units on applications of Python to typical situations:

In the process of working through these materials, the students work with real, live humanities data drawn from Project Gutenberg, the DPLA, and the Jack the Ripper Casebook. We walk the students through a few different options for building a corpus of data and working with it. After gathering data, we talk about problems with it and how to use it. Of course, you could run an entire course on such things. Our goal here is not to cover everything. In fact, I erred on the side of keeping the lessons relatively lightweight, with the assumption that the jump in difficulty level would require us to move pretty slowly. The main goal is to show how situations that appear to be much more complicated still boil down to the same basic concepts the students have just learned. We want to shrink the perceived gap between those beginning exercises and the kinds of scripts that are actually useful for your own day-to-day work. We introduce some slightly more advanced concepts along the way, but hopefully enough of the material will remain familiar that the students can excel. Ideally, the concepts we work through in these case studies will be more immediately useful to someone trying to introduce programming into their workflow for the first time. And, in being more immediately useful, the exercises might be more likely to give a lasting foundation for them to keep building on into the future.
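
To give a flavor of what those case studies ask students to do, here is a small sketch of a Project Gutenberg exercise. It is not one of the course’s own notebooks: the URL is a placeholder for any Gutenberg plain-text edition, and it assumes the requests library is installed.

# A sketch of an applied exercise: download a plain-text book and count its most frequent words.
# The URL is a placeholder; any Project Gutenberg plain-text edition would do.
import requests
from collections import Counter

url = "https://www.gutenberg.org/files/98/98-0.txt"  # hypothetical plain-text edition
text = requests.get(url).text

# Lowercase, strip punctuation crudely, and split into words. A fuller lesson would
# also trim the Project Gutenberg header and footer before counting.
words = [word.strip('.,;:"!?()_-').lower() for word in text.split()]
words = [word for word in words if word]

counts = Counter(words)
print(counts.most_common(20))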

We’ve also rebranded the course slightly. The course description has changed, as we’ve attempted to soften jargon and make it clear that students are meant to come to the course not knowing the terms or technologies in the description (they’re going to learn them with us!). The course name has changed as well, first as a joke but then in a serious way. Instead of simply being called “Humanities Programming,” the course is now “Help! I’m a Humanist! - Programming for Humanists with Python.” The goal there is to expose the human aspect of the course - no one is born knowing this stuff, and learning it means dealing with a load of tough feelings: anxiety, frustration, imposter syndrome, etc. I wanted to foreground all of this right away by making my own internal monologue part of the course title. The course can’t alleviate all those feelings, but I hoped to make it clear that we’re taking them into account and thinking about the human side of what it means to teach and learn this material. We’re in it together.

So. What can you do in a week? Quite a lot. What should you do - that’s a much tougher question. I’ve timed this post to go out right around when HILT starts. If I figure it out in the next week I’ll let you know.

On Co-Teaching and Digital Humanities

[Crossposted on the WLUDH blog.]

For me, co-teaching is the ultimate teaching experience. I’ve been fortunate to find several opportunities for it over the years. During graduate school, I co-taught a number of short courses, several DH classes, and a couple workshops. Here at W&L I’ve been able to teach alongside faculty from the history department and the Library. Each experience has been deeply rewarding. These days I’m spending more time thinking about digital humanities from a curricular and pedagogical standpoint, so I wanted to offer a few quick notes on how co-teaching might play a role in those discussions.

I’m sympathetic to arguments against putting two or more people at the front of the classroom. It’s expensive to use two faculty members to teach a single course when one might do, so I can understand how, in a certain logic, the format seems profoundly inefficient. You have a set number of courses that need to be taught, and you need people to teach them. And I have also heard people say that co-teaching is a lot more work than teaching a course solo. I understand these objections. But I wanted to offer just a few notes on the benefits of co-teaching - why you might want to consider it as a path for growing your digital humanities program even in the face of such hesitations. I’ve found that the co-teaching experience fully complements the work that we do as digital humanists for a number of reasons. I think of co-teaching as a way to make the teaching of digital humanities more fully reflect the ways we tend to practice it.

Co-teaching allows for more interdisciplinary courses.

Interdisciplinarity is hard. By its very nature, it assumes research, thinking, and teaching that lie at the intersections of at least two fields, usually more. In the case of digital humanities, this is exacerbated because the methodologies of the combined fields often seem to be so distinct from one another. Literary criticism and statistical methods, archival research and computer science, literary theory and web design. These binaries are flawed, of course, and these fields have a lot to say to and about each other. But, in the context of teaching digital humanities, sometimes bringing these fields together requires expertise that one teacher alone might not possess. A second instructor makes it easier to bridge perceived gaps in skills or training. And those skills, if they are meant to be taught, require time and energy from the instructors. On a more practical level, it can be profoundly helpful to have one instructor float in the classroom to offer technical assistance while the other leads discussion so as to prevent troubleshooting from breaking up the class. It is not enough to say that interdisciplinary courses need a second instructor. They often require additional hands on deck.

Co-teaching models collaboration for students.

Digital humanities work often requires multiple people to work together, but I’d wager that students often expect there to be a single person in charge of a class. Students might come into the class expecting a lecture model. Or, at the very least, they might expect the teacher to be an expert on the material. Or, they might expect the instructor to lead discussion. These formats are all well and good, and many instructors thrive on these models. I prefer to position my students as equal collaborators with me in the material of the course. We explore the material together, and, even if I might serve as a guiding hand, their observations are just as important as my own. I try to give my students space to assert themselves as experts, as real collaborators in the course. Co-teaching helps to set the stage for this kind of approach, because the baseline assumption is that no one person knows everything. If that were the case, you would not need a second instructor. There is always a second voice in the room. By unsettling the top-down hierarchy of the classroom, co-teaching helps to disperse authority out into other parts of the group. The co-teacher not in charge on a particular day might even be seated alongside the students, learning with them. This approach to teaching works especially well as a vehicle for digital humanities. After all, most digital humanities projects have many collaborators, each of whom brings a different set of skills to the table. No person operates as an expert in all parts of a collaborative project - not even the project manager. Digital humanities work is, by its nature, collaborative. Students should know this, see this, and feel this, and it can start at the front of the classroom.

Co-teaching transfers skills from one instructor to another.

Digital humanities faculty and staff are often brought in to support courses and projects by teaching particular methods or tools. This kind of training can sometimes happen in one-off workshops or in external labs, but the co-teaching model can offer a deeper, more immersive mentoring experience. Co-teaching can be as much for the instruction of the students as it is for the professional development of the teachers. For the willing faculty member, a semester-long engagement with material that stretches their own technical abilities can set them up to teach the material by themselves in the future. They can learn alongside the students and expand their portfolio of skills. At W&L we have had successes in a number of disciplines with this approach - faculty in history, journalism, and French have expanded their skills with text analysis, multimedia design and storytelling, and textual encoding all while developing and teaching new courses. We’ve even managed, at times, to document this process so that we have demonstrable, professionally legible evidence of the kinds of work possible when two people work together. When both instructors share course time for the entire semester it can help to expand the capacity of a digital humanities program by spreading expertise among many collaborators.

Of course, all of this requires a lot of buy-in, both from the faculty teaching together and from the administration overseeing the development of such courses. You need a lot of people ready to see the value in this process. The particulars of your campus might provide their own limitations or opportunities. Putting together collaborations like these takes time and energy, but it’s worth it. I think of co-teaching as an investment - in the future of the program, the students, and the instructors. What requires two instructors today might, with the right preparation and participation, only require one tomorrow.

In case you want to read more, here are some other pieces on co-teaching from myself and past collaborators (happy to be pointed to others!):

In, Out, Across, With: Collaborative Education and Digital Humanities (Job Talk for Scholars’ Lab)

[Crossposted on the WLUDH blog and the Scholars’ Lab blog]

I’ve accepted a new position as the Head of Graduate Programs in the Scholars’ Lab, and I’ll be transitioning into that role over the next few weeks! As a part of the interview process, we had to give a job talk. While putting together this presentation, I was lucky enough to have past examples to work from (as you’ll be able to tell, if you check out this past job talk by Amanda Visconti). Since my new position will involve helping graduate students through the process of applying for positions like these, it only feels right that I should post my own job talk as well as a few words on the thinking that went into it. Blemishes, jokes, and all, hopefully these materials will help someone in the future find a way in, just as the example of others did for me. And if you’re looking for more, Visconti has a great list of other examples linked from her more recent job talk for the Scholars’ Lab.

For the presentation, I was asked to respond to this prompt:

What does a student (from undergraduate to doctoral levels) need to learn or experience in order to add “DH” to his or her skill set? Is that an end or a means of graduate education? Can short-term digital assignments in discipline-specific courses go beyond “teaching with technology”? Why not refer everyone to online tutorials? Are there risks for doctoral students or the untenured in undertaking digital projects? Drawing on your own experience, and offering examples or demonstrations of digital research projects, pedagogical approaches, or initiatives or organizations that you admire, make a case for a vision of collaborative education in advanced digital scholarship in the arts and humanities.

I felt that each question could be a presentation all its own, and I had strong opinions about each one. Dealing with all of them seemed like a tall order. I decided to spend the presentation close reading and deconstructing that first sentence, taking apart the idea that education and/or digital humanities could be thought of in terms of lists of skills at all. Along the way, my plan was to dip into the other questions as able, but I also assumed that I would have plenty of time during the interview day to give my thoughts on them. I also wanted to try to give as honest a sense as possible of the way I approach teaching and mentoring. For me, it’s all about people and giving them the care that they need. In conveying that, I hoped, I would give the sort of vision the prompt was asking for. I also tried to sprinkle references to the past and present of the Scholars’ Lab programs to ground the content of the talk. When I mention potential career options in the body of the talk, I am talking about specific alumni who came through the fellowship programs. And when I mention graduate fellows potentially publishing on their work with the Twitter API, well, that’s not hypothetical either.

So below find the lightly edited text of the talk I gave at the Scholars’ Lab - “In, Out, Across, With: Collaborative Education and Digital Humanities.” I’ve only substantively modified one piece - swapping out one example for another.

And a final note on delivery: I have heard plenty of people argue over whether it is better to read a written talk or deliver one from notes. My own sense is that the latter is far more common for digital humanities talks. I have seen both fantastic read talks and amazing extemporaneous performances, just as I have seen terrible versions of each. My own approach is, increasingly, to write a talk but deliver that talk more or less from memory. In this case, I had a pretty long commute to work, so I recorded myself reading the talk and listened to it a lot to get the ideas in my head. When I gave the presentation, I had the written version in front of me for reference, but I was mostly moving through my own sense of how it all fit together in real time (and trying to avoid looking at the paper). My hope is that this gave me the best of both worlds and resulted in a structured but engaging performance. Your mileage may vary!

In, Out, Across, With: Collaborative Education and Digital Humanities

Title slide It’s always a treat to be able to talk with the members of the UVA Library community, and I am very grateful to be here. For those of you who don’t know me, I am Brandon Walsh, Mellon Digital Humanities Fellow and Visiting Assistant Professor of English at Washington and Lee University. The last time I was here, I gave a talk that had almost exclusively animal memes for slides. I can’t promise the same robust Internet culture in this talk, but talk to me after and I can hook you up. I swear I’ve still got it.

Zotero slide In the spirit of Amanda Visconti, the resources that went into this talk (and a number of foundational materials on the subject) can all be found in a Zotero collection at the above link. I’ll name check any that are especially relevant, but hopefully this set of materials will allow the thoughts in the talk to flower outwards for any who are interested in seeing its origins and echoes in the work of others.

Thank you slide And a final prefatory note: no person works, thinks, or learns alone, so here are the names of the people in my talk whose thinking I touch upon as well as just some – but not all – of my colleagues at W&L who collaborate on the projects I mention. The top tier consists of people I cite or mention, the second tier is for institutions or publications important to the discussion, and the final tier is for direct collaborators on this work.

Today I want to talk to you about how best to champion the people involved in collaborative education in digital research. I especially want to talk about students. And when I mention “students” throughout this talk, I will mostly be speaking in the context of graduate students, though most of what I discuss will be broadly applicable to all newcomers to digital research. My talk is an exhortation to elevate the voices of people in positions like these: to make them contributors to professional and institutional conversations from day one, and to empower them to define the methods and the outcomes of the digital humanities that we teach. This means taking seriously the messy, fraught, and emotional process of guiding students through digital humanities methods, research, and careers. It means advocating for the legibility of this digital work as a key component of their professional development. And it means enmeshing these voices in the broader network around them, the local context that they draw upon for support and that they can enrich in turn. I believe it is the mission of the Head of Graduate Programs to build up this community and facilitate these networks, to incorporate those who might feel like outsiders to the work that we do. Doing so enriches and enlivens our communities and builds a better and more diverse research and teaching agenda.

Title Slide 2 This talk is titled “In, Out, Across, With: Collaborative Education and Digital Humanities,” and I’ll really be focusing on the prepositions of my title as a metaphor for the nature of this sort of position. I see this role as one of connection and relation. The talk runs about 24 minutes, so we should have plenty of time to talk.

When discussing digital humanities education, it is tempting to first and foremost discuss what, exactly, it is that you will be teaching. What should the students walk away knowing? To some extent, just as there is more than one way to make breakfast, you could devise numerous baseline curricula. W&L Skills

This is what we came up with at Washington and Lee for students in our undergraduate digital humanities fellowship program. We tried to hit a number of kinds of skills that a practicing digital humanist might need. It’s by no means exhaustive, but the list is a way to start. We don’t expect one person to come away knowing everything, so instead we aim for students to have an introduction to a wide variety of technologies by the end of a semester or year. They’ll encounter some technologies applicable to project management, some to front-end design, and a range of programming concepts that carry over to many situations. Lists like this give us some targets to hit. But still, even as someone who helped put this list together, it makes me worry a bit. I can imagine younger me being afraid of it! It’s easy for us to forget what it was like to be new, to be a beginner, to be learning for the first time, but I’d like to return us to that frame of thinking. I think we should approach lists like these with care, because they can be intimidating for the newcomer. So in my talk today I want to argue against lists of skills as ways of thinking.

I don’t mean to suggest that programs need no curriculum, nor do I mean to suggest that no skills are necessary to be a digital humanist. But I would caution against focusing too much on the skills that one should have at the end of a program, particularly when talking about people who haven’t yet begun to learn. I would wager that many people on the outside looking in think of DH in the same way: it’s a big list of unknowns. I’d like to get away from that.

Templates like this are important for developing courses, fellowships, and degree-granting programs, but I worry that the goodwill in them might all too easily seem like a form of gatekeeping to a new student. It is easy to imagine telling a student that “you have to learn GitHub before you can work on this project.” It’s just a short jump from this to a likely student response - “ah sorry - I don’t know that yet.” And from there I can all too easily imagine the common refrain that you hear from students of all levels - “If I can’t get that, then it’s because I’m not a technology person.” From there - “Digital humanities must not be for me.”

Instead of building our curricula out of as-yet-unknown tool chains, I want to float, today, a vision of DH education as an introduction to a series of professional practices. Lists of skills might be ends, but I fear they foreclose beginnings. SCI Slide Instead, I will offer something more in line with the approach of the Scholarly Communication Institute (held here at UVA for a time), which outlined what they saw as the needs of graduate and professional students in the digital age. I’ll particularly draw upon their first point here (last of my slides with tons of text, I swear): graduate students need training in “collaborative modes of knowledge production and sharing.”

I want to think about teaching DH as introducing a process of discovery that collapses hierarchies between expert and newcomer: that’s a way to start. This sort of framing offers digital humanities not as a series of methods one does or does not know, but, rather, as a process that a group can engage in together. Do they learn methods and skills in the process? Of course! Anyone who has taken part in the sort of collaborative group projects undertaken by the Scholars’ Lab comes away knowing more than they came in with. But I want to continue thinking about process and, in particular, how that process can be more inclusive and more engaging. By empowering students to choose what they want to learn and how they want to learn it, we can help to expand the reach of our work and better serve our students as mentors and collaborators. There are a few different ways in which I see this taking place, and they’ll form the roadmap for the rest of the talk. Roadmap Slide Apologies - this looks like the sort of slide you would get at a business retreat. All the same - we need to adapt and develop new professional opportunities for our students at the same time that we plan flexible outcomes for our educational programs. These approaches are meant to serve increasingly diverse professional needs in a changing job market, and they need to be matched by deepening support at the institutional level.

So to begin. One of our jobs as mentors is to encourage students to seek out professionally legible opportunities early on in their careers, and as shapers of educational programs we can go further and create new possibilities for them. At W&L, we have been collaborating with the Scholars’ Lab to bring UVA graduate students to teach short-form workshops on digital research in W&L classrooms. Funded opportunities like this one can help students professionalize in new ways and in new contexts while paying it forward to the nearby community. A similar initiative at W&L that I’ve been working on has our own library faculty and undergraduate fellows visiting local high schools to speak with advanced AP computer science students about how their own programming work can apply to humanities disciplines. I’m happy to talk more about these in Q&A.

Student slide We also have our student collaborators present at conferences, both on their own work and on work they have done with faculty members, both independently and as co-presenters. Here is Abdur, one of our undergraduate Mellon DH fellows, presenting at the Bucknell Digital Scholarship Conference last fall about the writing he does for his thesis and how it is enriched by, and differs from, the writing he does in digital humanities contexts. While this sort of thing is standard for graduate students, it’s pretty powerful for an undergraduate to present on research in this way. Learning that it’s OK to fail in public can be deeply empowering, and opportunities like these encourage our students to think about themselves as valuable contributors to ongoing conversations long before they might otherwise feel comfortable doing so.

But teaching opportunities and conferences are not the only ways to get student voices out there. I think there are ways of engaging student voices earlier, at home, in ways that can fit more situations. We can encourage students to engage in professional conversations by developing flexible outcomes in which we are equal participants. One approach to this with which I have been experimenting is group writing, which I think is undervalued as a taught skill and possible approach to DH pedagogy. An example: when a history faculty member at W&L approached the library (and by extension, me) for support in supplementing an extant history course with a component about digital text analysis, we could have agreed to offer a series of one-off workshops and be done with it. Gitbook slide Instead, this faculty member – Professor Sarah Horowitz – and I decided to collaborate on a more extensive project together, producing Introduction to Text Analysis: A Coursebook. The idea was to put the materials for the workshops together ahead of time, in collaboration, and to narrativize them into a set of lessons that would persist beyond a single semester as a kind of publication. The pedagogical labor that we put into reshaping her course could become, in some sense, professionally legible as a series of course modules that others could use beyond the term. So for the book, we co-authored a series of units on text analysis and gave feedback on each other’s work, editing and reviewing as well as reconfiguring them for the context of the course. Professor Horowitz provided more of the discipline-specific material that I could not, and I provided the materials more specific to the theories and methods of text analysis. Neither one of us could have written the book without the other.

Professor Horowitz was, in effect, a student in this moment. She was also a teacher and researcher. She was learning at the same time that she produced original scholarly contributions. Even as we worked together, for me this collaborative writing project was also a pedagogical experiment that drew upon the examples of Robin DeRosa, Shawn Graham, and Cathy Davidson, in particular. Davidson Slide Davidson taught a graduate course on “21st Century Literacies” where each of her students wrote a chapter that was then collected and published as an open-access book. For us as for Davidson, the process of knowing, the process of uncovering, is something that happens together. In public. And it’s documented so that others can benefit. Our teaching labor could become visible and professionally legible, as could the labor that Professor Horowitz put into learning new research skills. As she adapted and tried out ideas, and as we drew them together into a whole, the written product became both the means and the end of an introduction to digital humanities.

Professor Horowitz also wanted to learn technical skills herself, and she learned quite a lot through the writing process. Rather than have her sit through lectures or point her to online tutorials, I thought she would learn better by engaging with and shaping the material directly. Her course and my materials would be better for it, as she would be helping to bind my lectures and workshops to her course material. The process would also require her to engage with a list of technologies for digital publishing. Gitbook Toolchain Slide Beyond the text analysis materials and concepts, the process exposed her to a lot of technologies: the command line, Markdown, Git for version control, GitHub for project management. In the process of writing this document, in fact, she covered most of the same curriculum as our undergraduate DH fellows. Fellows Skills Slide She learned these things as we worked together to produce course materials, but, importantly, the technical skills were never the focus of the work. It’s a writing project! Rather than being presented as ends in themselves, the skills were the means by which we were publishing a thing. They were immediately useful. And I think displacing the technology is helpful: it means that the outcomes and parameters for success are not based in the technology itself but, rather, in the thinking about and use of those methods. We also used a particular platform that allowed Professor Horowitz to engage with these technologies in a light way so that they would not overwhelm our work – I’m happy to discuss this more in the time after if you’re interested.
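[A quick aside for readers of this written version: to give a flavor of how lightweight that publishing toolchain is, here is a minimal sketch of the kind of Markdown-to-HTML conversion that platforms like GitBook handle for you. It is illustrative only, not the coursebook’s actual build process; the python-markdown package and the file names are my own assumptions.]

```python
# Illustrative sketch only: the coursebook was built with GitBook, which
# performs this conversion itself. This just shows the kind of step that
# Markdown authoring makes possible. Assumes the third-party "markdown"
# package (pip install markdown); the lesson filename is hypothetical.
from pathlib import Path

import markdown

lesson = Path("lessons/close-reading.md")  # hypothetical lesson file
out_dir = Path("site")
out_dir.mkdir(exist_ok=True)

# Convert the Markdown source to HTML and write it out for the web.
html = markdown.markdown(lesson.read_text(encoding="utf-8"))
(out_dir / lesson.with_suffix(".html").name).write_text(html, encoding="utf-8")
print(f"Rendered {lesson.name}")
```

[Because each lesson lives as a plain-text Markdown file, it can also be versioned with Git and managed on GitHub, the workflow mentioned above.]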

This is all to say: the outcomes of this sort of collaborative education can be shaped to fit a variety of settings and types of students. Take another model, CUNY’s Graduate Center Digital Fellows program, whose students develop open tutorials on digital tools. Learning from this example, rather than simply directing students or colleagues towards online tutorials like these, why not have them write their own documents, legible for their own positions, that synthesize and remix the materials that they have already found? Programming Historian Slide The learning process becomes something productive in this framing. I can imagine, for example, directing collaboratively authored materials by students like these towards something like The Programming Historian. If you’re not familiar, The Programming Historian offers a variety of lessons on digital humanities methods, and they only require an outline as a pitch to their editorial team, not a whole written publication ready to go. Your graduate students could, say, work with the Twitter API over the course of a semester, blog about the research outcomes, and then pitch a tutorial to The Programming Historian on the API as a result of their work. It’s much easier to motivate yourself to write something if you know that the pitch has already been accepted. Obviously such acceptance is not a given, but working towards a goal like this can offer student researchers something to aim for. Their instructors could even co-author these materials, so that everyone has skin in the game.
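[Another aside for the written version: to make that Twitter example a little more concrete, here is a hedged sketch of the kind of first experiment a graduate fellow might blog about before pitching a fuller tutorial. The tweepy library, the query, and the word-count exercise are all my own assumptions, not part of the talk, and you would need your own API credentials.]

```python
# Hypothetical starting point for a semester of work with the Twitter API,
# the sort of small experiment a fellow might blog about and later expand
# into a Programming Historian pitch. Uses the third-party tweepy library
# (pip install tweepy); the bearer token below is a placeholder.
import collections

import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

# Grab a small sample of recent tweets on a hypothetical research topic.
response = client.search_recent_tweets(query="digital humanities", max_results=50)

# As a first, rough research question: which words come up most often?
counts = collections.Counter()
for tweet in response.data or []:
    counts.update(word.lower() for word in tweet.text.split())

print(counts.most_common(10))
```

[The point is less the particular result than that even a toy script like this gives a student something to narrate in a blog post and then refine, with a mentor as co-author, into a proper lesson.]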

This model changes the shape of what collaborative education can look like: its duration and its results. You don’t need a whole fellowship year. You could, in a reasonably short amount of time, tinker and play, and produce a substantial blog post, an article pitch, or a Library Research Guide (more on that in a moment).

Jarvis Quote Slide As Jeff Jarvis has said, “we need to move students up the education chain.” And trust me - the irony of quoting a piece titled “Lectures are Bullshit” during a lecture to you is not lost on me. But stay with me.

Collaborative writing projects on DH topics are flexible enough to fit the many contexts for the kind of educational work that we do. After all, no two students need or value the same outcomes, and these shared and individual goals need to be worked out in conversation with the students themselves early on. Articulating these desires in a frank, written, and collaborative mode at the outset (in the genre of the project charter) can help program directors better shape the work to fit the needs of the students. But I also want to suggest that collaborative writing projects can be useful end products as well as launching pads, as they can fit the shape of many careers. After all, students come to digital humanities for a variety of different reasons. Some might be aiming to bolster a research portfolio on the path to a traditional academic career. Others might be deeply concerned about the likelihood of attaining such a position and be looking for other career options. Others still might instead be colleagues interested in expanding their research portfolio or skillset but unable to commit to a whole year of work on top of their current obligations. Writing projects could speak to all these situations.

I see someone in charge of shaping graduate programs as needing to speak to these diverse needs. This person is a steward both of where students currently are – the goals and objectives they hold right now – and of where they might go – the potential lives they might (or might not!) lead. After all, graduate school, like undergraduate education, is an enormously stressful time of personal and professional exploration. If we think about a student’s professional development simply as a process of finding a job, we overlook the places where help might be most desired. Frequently, the help most needed concerns the anxieties, stresses, and pressures of refashioning yourself as a professional. We should not be in the business of creating CV lines or providing lists of qualifications alone. We should focus on creating strong, well-adjusted professionals by developing ethical programs that guide them into the professional world by caring for them as people.

In the graduate context, this involves helping students deal with the academic job market in particular. Rogers Slide To me, in its best form, this means helping students to look at their academic futures and see proliferating possibilities instead of a narrow and uncertain route to a single job, to paraphrase the work of Katina Rogers. A sprinkler rather than a pipeline, in her metaphor. As Rogers’s work, in particular, has shown, recent graduate students increasingly feel that, while they experienced strong expectations that they would continue in the professoriate, they received inadequate preparation for the many different careers they might actually go on to have. The Praxis Program and the Praxis Network are good examples of how to position digital humanities education as an answer to these issues. Fellowship opportunities like these must be robust enough to offer experiences and outcomes beyond the purely technical, so that the project manager from one fellowship year who graduates with an MA and goes into industry in a similar role is just as well prepared as the PhD student who trained as a developer and goes on to something entirely different. And the people running these programs must be prepared for the messy labor of helping students realize that these are satisfactory, laudable professional goals.

It should be clear that this sort of personal and professional support is the work of more than just one person. One of the strengths of a digital humanities center embedded in a library like this one at UVA is that fellows have the readymade potential to brush up against a variety of career options that reveal themselves when peeking outside of their disciplinary silos: digital humanities developer and project manager positions, sure, but also metadata specialists, archivists, and more. I think this kind of cross-pollination should be encouraged: library faculty and staff have a lot to offer student fellows and vice versa. Developing these relationships brings the fellows further into the kinds of work done in the library and introduces them to careers that, while they might require further study to obtain, could be real options.

To my mind the best fellowship programs are those fully aware of their institutional context and those that both leverage and augment the resources around them as they are able. We have been working hard on this at W&L. We are starting to institute a series of workshops led by the undergraduate fellows in consultation with the administrators of the fellowship program. The idea is that past fellows lead workshops for later cohorts on the technology they have learned, some of which we selectively open to the broader library faculty and staff. The process helps to solidify each student’s training – no better way to learn than to teach – but it also helps to expand the student community by retaining fellows as committed members. It also helps to fill out a student’s portfolio with a CV-ready line of teaching experience. And it aims to build our own capacity within the library by distributing skills among a wider array of students, faculty, and staff. After all, student fellows and librarians have much they could learn from one another. I see the Head of Graduate Programs as facilitating such collaborations, as connecting the interested student with the engaged faculty/staff/librarian collaborator, inside their institution or beyond.

But we must not forget that we are asking students and junior faculty to do risky things by developing these new interests, by spending time and energy on digital projects, let alone by presenting and writing on them in professional contexts. The biggest risk is that we ask them to do so without supporting them adequately. All the technical training in the world means little if that work is illegible and irrelevant to your colleagues or committee. Fitzpatrick Slide In the words of Kathleen Fitzpatrick, we ask these students to “do the risky thing,” but we must “make sure that someone’s got their back.” I see the Head of Graduate Programs as key to coordinating, fostering, and providing such care.

Students and junior faculty need support – for technical implementation, sure – but they also need advocates – people who can vouch for the quality of their work and campaign on their behalf before committees and faculty who might otherwise be unable to see its value. Some of this can come from the library, from people able to put this work in the context of guidelines for the evaluation of digital scholarship. But some of this support and advocacy has to come from within their home departments. The question is really how to build up that support from the outside in. And that’s a long, slow process that occurs by making meaningful connections and through outreach programs. At W&L, we have developed an incentive grant program that encourages faculty members who might be new to digital humanities, or otherwise skeptical, to experiment with incorporating a digital project into a course. The result is a slow burn – we get maybe one or two new faculty each term trying something out. That might seem small, but it’s something, particularly at a small liberal arts college. This kind of slow evangelizing is key to making the work done by digital humanists legible to everyone. Students and junior faculty need advocates for their work in and out of the library and their home departments, and the person in this position is tasked with overseeing such outreach.

So, to return to the opening motif, lists of skillsets certainly have their place as we bring new people into the ever-expanding field: they’re necessary. They reflect a philosophy and a vision, and they’re the basis of growing real initiatives. But it’s the job of the Head of Graduate Programs to make sure that we never lose sight of the people and relationships behind them.

Foremost, then, I see the Head of Graduate Programs as someone who takes the lists, documents, and curricula that I have discussed and connects them to the people that serve them and that they are meant to speak to. This person is one who builds relationships, who navigates the prepositions of my title. Title Slide Last It’s the job of such a person to blast the boundary between “you’re in” and “you’re out” so that the tech-averse or shy student can find a seat at the table. This is someone who makes sure that the work of the fellows is represented across institutions and in their own departments. This person makes sure the fellows are well positioned professionally. This person builds people up and embeds them in networks where they can flourish. Their job is never to forget what it’s like to be the person trying to learn. Their job is to hear “I’m not a tech person” and answer: “Not yet, but you could be! And I know just the people to help. Let’s learn together.”
