andromeda yelton

I haven’t failed, I’ve just tried a lot of ML approaches that don’t work
“Let’s blog every Friday,” I thought. “It’ll be great. People can see what I’m doing with ML, and it will be a useful practice for me!” And then I went through weeks on end of feeling like I had nothing to report, because I was trying approach after approach to this one problem that simply …

this time: speaking about machine learning
No tech blogging this week because most of my time was taken up with telling people about ML instead! One talk for an internal Harvard audience, “Alice in Dataland”, where I explained some of the basics of neural nets and walked people through the stories I found through visualizing HAMLET data. One talk for the …

archival face recognition for fun and nonprofit
In 2019, Dominique Luster gave a super good Code4Lib talk about applying AI to metadata for the Charles “Teenie” Harris collection at the Carnegie Museum of Art — more than 70,000 photographs of Black life in Pittsburgh. They experimented with solutions to various metadata problems, but the one that’s stuck in my head since 2019 …

sequence models of language: slightly irksome
Not much AI blogging this week because I have been buried in adulting all week, which hasn’t left much time for machine learning. Sadface. However, I’m in the last week of the last deeplearning.ai course! (Well. Of the deeplearning.ai sequence that existed when I started, anyway.
They’ve since added an NLP course and a GANs …

Adapting Coursera’s neural style transfer code to localhost
Last time, when making cats from the void, I promised that I’d discuss how I adapted the neural style transfer code from Coursera’s Convolutional Neural Networks course to run on localhost. Here you go! Step 1: First, of course, download (as Python) the script. You’ll also need the nst_utils.py file, which you can access via …

Dear Internet, merry Christmas; my robot made you cats from the void
Recently I learned how neural style transfer works. I wanted to be able to play with it more and gain some insights, so I adapted the Coursera notebook code to something that works on localhost (more on that in a later post), found myself a nice historical cat image via DPLA, and started mashing it …

this week in my AI
After visualizing a whole bunch of theses, learning about neural style transfer, and flinging myself at t-SNE, I feel like I should have something meaty this week, but they can’t all be those weeks, I guess. Still, I’m trying to hold myself to Friday AI blogging, so here are some work notes: Finished course …

Though these be matrices, yet there is method in them.
When I first trained a neural net on 43,331 theses to make HAMLET, one of the things I most wanted to do was to be able to visualize them. If word2vec places documents ‘near’ each other in some kind of inferred conceptual space, we should be able to see some kind of map of them, yes? …

Of such stuff are (deep)dreams made: convolutional networks and neural style transfer
Skipped FridAI blogging last week because of Thanksgiving, but let’s get back on it!
Top-of-mind today are the firing of AI queen Timnit Gebru (letter of support here) and a couple of grant applications that I’m actually eligible for (this is rare for me! I typically need things for which I can apply in my …

Let’s visualize some HAMLET data! Or, d3 and t-SNE for the lols.
In 2017, I trained a neural net on ~44K graduate theses using the Doc2Vec algorithm, in hopes that doing so would provide a backend that could support novel and delightful discovery mechanisms for unique library content. The result, HAMLET, worked better than I hoped; it not only pulls together related works from different departments (thus …
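The intuition running through the HAMLET posts above is that Doc2Vec places related documents ‘near’ each other in vector space, where ‘near’ usually means high cosine similarity. Here is a minimal sketch of that check on toy stand-in vectors (these are random placeholders, not HAMLET’s actual data); a 2-D map like the d3/t-SNE one described above would then project vectors like these with something such as scikit-learn’s TSNE before plotting.

```python
# Minimal sketch: cosine similarity between toy "document vectors",
# standing in for Doc2Vec output. Related documents should score near 1;
# unrelated ones should score near 0.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two document vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for learned 300-dimensional thesis vectors.
rng = np.random.default_rng(0)
thesis_a = rng.normal(size=300)
thesis_b = thesis_a + 0.1 * rng.normal(size=300)  # a slightly perturbed "nearby" thesis
thesis_c = rng.normal(size=300)                   # an unrelated thesis

print(cosine_similarity(thesis_a, thesis_b))  # close to 1
print(cosine_similarity(thesis_a, thesis_c))  # close to 0
```

In high-dimensional spaces, two independent random vectors are almost orthogonal, which is why the unrelated pair lands near zero; that contrast is what makes a t-SNE map of such vectors show meaningful clusters.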