I won’t be there until closer to 10, so I won’t be able to participate in scheduling, unfortunately. I would be interested in a conversation about teaching Intro to DH and courses with DH components.
If there is interest in some hands-on workshops, I am happy to help lead a workshop on a topic such as “normalizing” data (tidy data), text analysis (Voyant/MALLET), or networks (Gephi).
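For anyone curious what the “normalizing” workshop would cover, here is a minimal sketch of the tidy-data idea using pandas. The table and numbers are just illustrative census-style sample data, not part of any workshop dataset:

```python
import pandas as pd

# A "wide" table: one row per city, one column per year (untidy).
wide = pd.DataFrame({
    "city": ["Boston", "Chicago"],
    "1850": [136881, 29963],
    "1860": [177840, 112172],
})

# Tidy form: one observation per row (city, year, population),
# which is the shape most analysis tools expect.
tidy = wide.melt(id_vars="city", var_name="year", value_name="population")
print(tidy)
```

The payoff is that once every observation is its own row, grouping, filtering, and charting by year or city become one-liners.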
The first THATCamp AHA was in Chicago in 2012, and it has been a valuable addition to the conference ever since. But is it something we should continue doing, or are there better ways to spend our time and energy? There have been numerous sessions at THATCamps over the years asking similar questions, without, as far as I am aware, coming to any particularly useful conclusions. Even so, I think it would be helpful to explore whether this is still a valuable and useful format, and what (if anything) should come next.
The American Yawp, a collaboratively edited textbook, has been around for a few years now. The University of Chicago Press has recently announced the publication of a free history textbook (although it’s not, strictly speaking, “open”), OpenStax has a US History textbook, and I’m sure there’s a range of other material available as open access resources. But my impression is that uptake has been slow and quality varies greatly. I would be interested in a talk session where we discuss what kinds of resources are available, how well used they are, and what could be done to facilitate the creation, discovery, and use of these kinds of materials.
The work of documentary editors and the capabilities of digital platforms are highlighted at this year’s AHA meeting in several sessions, including (but not limited to) an entire track on Primary Sources and the History Profession in the Age of Text Search.
Modern digital editions are no longer reproductions of the book form: they incorporate high-resolution facsimiles (expensive in letterpress) and interactive features that let researchers perform analysis within the edition itself (impossible in letterpress). At the same time, the challenge of digital preservation, the rise of mobile-first researchers, and the rapidity of platform obsolescence have demonstrated the apparent superiority of the letterpress edition in terms of “shelf life.”
I propose a session to discuss the challenges and opportunities facing digital editions and the people who work on them. Depending on the interest, this may be a meet-and-greet to connect digital edition personnel, a show-and-tell about neat visualizations, a gripe session about sustainability, a high-level discussion of theory, a combination of the four, or something totally unforeseen.
I wonder if it wouldn’t be interesting to just pick a data set and try, collaboratively, to learn something from it? It would be interesting to see how others approach a problem from a process perspective: which questions they ask, which tools they use, etc. I don’t have a particular data set in mind, except that the APS just released the “Benjamin Franklin Post Office Records,” which look intriguing. See https://diglib.amphilsoc.org/islandora/compound/franklin-post-office-book-1748-1752#page/1/mode/1up and https://github.com/AmericanPhilosophicalSociety/Historic-Postal-Data.
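As one example of the kind of first-pass exploration such a session might do, here is a small Python sketch. The column names and rows below are entirely hypothetical; the real repository’s files may be laid out quite differently:

```python
import csv
import io
from collections import Counter

# Hypothetical sample in the spirit of the postal records; the actual
# dataset's columns and values will differ.
sample = """date,recipient,postage
1748-06-02,John Smith,2
1748-06-02,Mary Jones,4
1749-01-15,John Smith,2
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# One natural opening question: who received mail most often?
by_recipient = Counter(r["recipient"] for r in rows)
print(by_recipient.most_common(1))
```

The point isn’t the answer but the process: a few lines like these surface what questions the data can and cannot support before anyone reaches for a heavier tool.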
I realize this is a talk session, but I want to discuss ways to “operationalize” (the latest buzzword in my library) this document, so there will be some planned doing as well. White papers like this come out regularly, but without plans to implement their suggestions and to build around and go beyond reading them, they are doomed to archival obscurity. I would like to discuss ways to make sure that does not happen with this one.
Arguing with Digital History working group, “Digital History and Argument,” white paper, Roy Rosenzweig Center for History and New Media (November 13, 2017): rrchnm.org/argument-white-paper/.
This used to come up a lot, but I haven’t seen it lately. Anyone want an intro to git and/or GitHub, or want to share tricks and tips from git/hub experience?
Yeah, that brief description is all about the distinction between git and GitHub, so that might make a good starting point!
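One way to make that distinction concrete in a workshop: everything below happens in git alone, with no GitHub account anywhere in sight. (The name and email are placeholders for the demo.)

```shell
# git works entirely locally -- no GitHub needed for any of this.
git init demo && cd demo
git config user.name "THATCamper"           # identity for this repo only
git config user.email "camper@example.com"

echo "# Session notes" > notes.md
git add notes.md
git commit -m "Start session notes"
git log --oneline                           # one commit, made offline

# GitHub only enters the picture when you add it as a remote, e.g.:
#   git remote add origin https://github.com/USER/demo.git
#   git push -u origin main
```

In other words, git is the version-control tool; GitHub is one (popular) place to host and share git repositories, alongside GitLab, Bitbucket, or a plain server.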
Just some reminders about THATCamp AHA tomorrow:
We’ll be starting with coffee and mingling at 8:15am on Wednesday, January 3rd, just before AHA itself, at Funger Hall, 2201 G Street NW, Washington, DC. Travel information is on the site at aha2018.thatcamp.org/travel/, as is the (still blank, of course) schedule information at aha2018.thatcamp.org/schedule/.
And speaking of blank schedules, why not propose something to fill it in? If you haven’t been to an unconference before, you can read about proposing a session at aha2018.thatcamp.org/propose/. If you don’t get around to proposing something before January 3rd, or if you’re unsure about the process, no worries: you can always suggest an idea on the morning of the unconference or even after it’s underway.
Most THATCamp sessions in my experience tend to be discussions (Talk sessions), which are plenty valuable in themselves, but we *strongly* encourage you to propose hands-on collaborative writing or coding sessions (Make sessions) or digital skills workshops (Teach sessions) that will let everyone learn and work together productively. You’d be surprised at how useful even a spontaneously organized workshop can be: I’ve been at THATCamps where someone mentions a tool in one session and then by popular demand agrees to teach it in the next: unconferences are great at determining what the people in the room really want to learn and do.
To propose a session, click the “Log in” link on the site’s home page, then choose Posts –> Add New, write notes in the text box, and hit “Publish” when you’re done. See codex.wordpress.org/Writing_Posts for more help, or just play around in the THATCamp AHA website itself.
Finally, a word about food. We’ll provide coffee, but you’re on your own for breakfast and lunch. There are plenty of great places to eat around Funger Hall. See you tomorrow!
Omeka S is a complete rewrite of Omeka. If anyone wants to talk about it or see it in action, here’s a good chance.
I could do a brief overview of the linked open data (LOD) and multisite principles built into it, and will ask for feedback and conversation on our first official release.
There’s a sandbox site that we can play in at omeka.org/s/download/#sandbox
In this session, I will discuss options for acquiring social media data, including collecting it yourself, locating and re-using existing datasets, purchasing data, and using a social media service provider.
For collecting and re-using datasets, I will demonstrate Social Feed Manager and TweetSets. Social Feed Manager is software that harvests social media data and web resources from Twitter, Tumblr, Flickr, and Sina Weibo. TweetSets supports creating custom Twitter datasets from existing datasets. Both are open-source software developed at George Washington University Libraries.
Depending on participant interest, additional topics for discussion might include research ethics and privacy considerations, collecting Facebook data, an introduction to APIs, comparing web archiving and API-based social media archiving, or anything else social media-related that participants would like to discuss.
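For participants wondering what harvested social media data actually looks like once collected, here is a minimal Python sketch. Harvesting tools commonly store tweets as line-delimited JSON; the field names below echo the classic Twitter v1.1 tweet object, and the two sample tweets are entirely invented:

```python
import json

# Two invented tweets in line-delimited JSON, roughly in the shape of
# the classic Twitter v1.1 tweet object.
raw = """\
{"id_str": "1", "created_at": "Wed Jan 03 14:00:00 +0000 2018", "text": "At #aha18!", "user": {"screen_name": "historian_a"}}
{"id_str": "2", "created_at": "Wed Jan 03 15:30:00 +0000 2018", "text": "THATCamp session on social media data", "user": {"screen_name": "historian_b"}}
"""

# One JSON object per line: parse each line independently.
tweets = [json.loads(line) for line in raw.splitlines()]
for t in tweets:
    print(t["user"]["screen_name"], "-", t["text"])
```

Seeing the raw structure early helps with the ethics and privacy discussion too: the records carry far more than the visible text, so decisions about what to keep and share start at this parsing step.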