Thinking about this further, this could also be a really useful tie-in for Wikipedia, which increasingly relies on Web citations, many of which eventually fail.
If cited material were automatically archived and submitted to TIA at citation time, that would be even more useful. It is also worth noting that this amounts to an inherent archive of information that has already been deemed relevant and citable.
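As a rough illustration of the "automatically archived at citation time" idea, here is a minimal sketch of a submission to the Wayback Machine's public "Save Page Now" endpoint (https://web.archive.org/save/<url>). This is not any project's actual pipeline; in particular, reading the snapshot location from the Content-Location header is an assumption about the endpoint's response, not guaranteed behavior.

```python
import requests

def archive_url(url: str) -> str | None:
    """Ask the Wayback Machine to capture a page via its public
    'Save Page Now' endpoint. Returns the snapshot URL, or None on failure."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
    if resp.ok:
        # Assumption: the snapshot path is reported in Content-Location,
        # e.g. /web/20240101000000/https://example.com/
        snapshot = resp.headers.get("Content-Location")
        if snapshot:
            return f"https://web.archive.org{snapshot}"
    return None

if __name__ == "__main__":
    print(archive_url("https://example.com/"))
```

A citation workflow could call something like this whenever a new external reference is saved, so the archived copy exists before the original ever goes dark.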
We have started crawling the outlinks for every new article and update as they are made – about 5 million new URLs are archived every day. Now we have to figure out how to get archived pages back into Wikipedia to fix some of those dead links. Kunal Mehta, a Wikipedian from San Jose, recently wrote a prototype bot that can add archived versions to any link in Wikipedia, so that when those links are determined to be dead they can be switched over automatically and continue to work. It will take a while to work this through the process the Wikipedia community of editors uses to approve bots, but that conversation is under way.
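For the switch-over side, a minimal sketch of the core logic might look like the following. This is not Kunal Mehta's bot; it assumes the Wayback Machine's documented availability API (https://archive.org/wayback/available) and uses a deliberately naive liveness check in place of the careful dead-link detection a real bot would need.

```python
import requests

WAYBACK_API = "https://archive.org/wayback/available"

def is_dead(url: str) -> bool:
    """Crude liveness check: treat network errors and 4xx/5xx as dead.
    (A real bot would also handle soft-404s, paywalls, redirects, etc.)"""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        return resp.status_code >= 400
    except requests.RequestException:
        return True

def closest_snapshot(url: str) -> str | None:
    """Ask the Wayback availability API for the nearest archived snapshot."""
    resp = requests.get(WAYBACK_API, params={"url": url}, timeout=10)
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

def repair(url: str) -> str:
    """Return an archived replacement for a dead link, else the original."""
    if is_dead(url):
        archived = closest_snapshot(url)
        if archived:
            return archived
    return url
```

Run over a page's external links, `repair` leaves live citations alone and swaps dead ones for their closest archived snapshot, which is essentially the behavior described above.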