Life in the Devonian Era
I am a data hoarder. I have personal emails going back to 2004. Some papers from undergrad, lots of stuff from my time in Austria, and more stuff besides. This is all to say nothing of my grad school work, thesis, etc. etc. etc. There’s a reason my 8TB NAS only has 1.5TB still free. I also have a lot of current stuff to manage and in a growing number of text and non-text formats.
Until recently, I relied pretty heavily on a 200 GB Google Drive for storage of “current” materials. For notes I used Joplin or Markor as appropriate. But my increasing determination to liberate myself from the FAANGs meant that it was time to shake things up.1
DevonThink has some powerful tools baked in, e.g., ABBYY FineReader for OCR. It does a really good job indexing and identifying connections between documents and emails, between tags and keywords. It can capture local copies of web pages, RSS feeds, etc. Neat, right?
But there’s something missing: web history.
Send in the [History] Hounds
HistoryHound creates a searchable index of your web history across multiple browsers. If you sync your history, e.g., via Firefox Sync, it picks that up too.2 It also integrates with NetNewsWire to index your RSS feeds! This is a brilliant idea.
But if you’ve got DevonThink on one hand and HistoryHound on the other, how do you make them talk to one another? Well, a quick email to HistoryHound’s developer revealed there is a way to get it to dump out your web history index. Hooray! But the next issue: HistoryHound’s output is UTF-16LE, and the resulting file doesn’t render well. My first export, after a month of use, was ~50 MB of text. Three or so months in, the export was 120 MB. Yikes!
So I noticed two things that should have been obvious from the start: 1. HH exports to HTML, and 2. the file encoding is wrong, or at least it’s not UTF-8. But there’s a solution.
Bless This Mess
Here’s how to export and then clean up your HistoryHound index for archiving:
- Open HistoryHound’s Index Status window and export the index
- Save the file as `foo.html`
- Running `file foo.html` should produce something like: `HTML document text, Little-endian UTF-16 Unicode text, with very long lines`
- Convert the encoding: `iconv -f UTF-16LE -t UTF-8 foo.html > foo_utf8.html`
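The steps above can be sketched as a short shell script. This is purely illustrative: `foo.html` is just the example filename, and the tiny generated file stands in for a real HistoryHound export:

```shell
#!/bin/sh
# Stand-in for a real HistoryHound export: a small UTF-16LE HTML file.
printf '<html>history index</html>' | iconv -f UTF-8 -t UTF-16LE > foo.html

# Confirm the encoding -- the output should mention "Little-endian UTF-16".
file foo.html

# Re-encode to UTF-8 so the file renders properly and can go into DevonThink.
iconv -f UTF-16LE -t UTF-8 foo.html > foo_utf8.html
```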
Et voilà! Dump it into DevonThink. The resulting file should be ~50% smaller, and opening it in DevonThink (or a plain old browser) gives you a delightful, searchable plain-text index you can file away.
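The ~50% figure falls out of the encodings themselves: UTF-16 spends two bytes on every ASCII character where UTF-8 spends one, and an HTML index is mostly ASCII. A quick sanity check:

```shell
# UTF-16LE: 2 bytes per ASCII character; UTF-8: 1 byte.
printf 'plain ASCII text' | iconv -f UTF-8 -t UTF-16LE | wc -c   # 32
printf 'plain ASCII text' | wc -c                                # 16
```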
2020-01-22 Update: UTF-8!
As @StClairSoft noted after this was posted a few weeks ago, the new version now exports UTF-8, so you can skip the conversion steps above. Give their software a look; it’s good stuff, and indie developers like them need and deserve all the support they can get.