Wikipedia is undeniably the most readily available encyclopedia out there, not to mention free, but "readily available" isn't the same as "always available": no internet access, no Wikipedia. That's why Wikipedia periodically dumps its content, so you can load a local copy onto your laptop. But building that local copy is a time-consuming process that requires setting up a local database and web server, and if you then want to build a search index on that database, it can take several days. Surely there's a better way.

In fact, now there is. Wikipedia fan Thanassis Tsiodras has come up with a much more efficient way of installing and indexing a local Wikipedia dump. As Tsiodras writes:

> Wouldn't it be perfect, if we could use the wikipedia "dump" data JUST as they arrive after the download? Without creating a much larger (space-wize) MySQL database? And also be able to search for parts of title names and get back lists of titles with "similarity percentages"?

Why yes, it would. And fortunately, Tsiodras has already done the heavy lifting. Using Python, Perl, or PHP, along with the Xapian search engine and Tsiodras' package, you can have a local install of Wikipedia (2.9 GB) with a lightweight web interface for searching and reading entries from anywhere. Complete instructions can be found [here][2].

I should note that this does require some command-line tinkering, but the small footprint and speed more than warrant wading through the minimal setup necessary to get it up and running.

Also, if you're a big Wikipedia fan, be sure to check out [our review of WikipediaFS][3] from earlier this year.

[via [Hackzine][1]]

[1]: http://www.hackszine.com/blog/archive/2007/08/wikipedia_offline_reader_put_a.html?CMP=OTC-7G2N43923558
[2]: http://www.softlab.ntua.gr/~ttsiod/buildWikipediaOffline.html
[3]: http://blog.wired.com/monkeybites/2007/05/mount_wikipedia.html
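If you're curious what the Xapian side of this looks like, here's a minimal Python sketch of a title search that returns Xapian's relevance percentages, much like the "similarity percentages" Tsiodras describes. The index path (`wikipedia.xapian`) and the assumption that each indexed document stores an article title as its data are mine, for illustration only; Tsiodras' own scripts build and query their index their own way, so treat this as a rough idea rather than his actual code.

```python
import xapian

# Assumed index path and layout -- Tsiodras' scripts use their own;
# this only illustrates the general Xapian query pattern.
DB_PATH = "wikipedia.xapian"

def search_titles(query_text, limit=10):
    """Return (percent, title) pairs for titles matching query_text."""
    db = xapian.Database(DB_PATH)
    enquire = xapian.Enquire(db)

    parser = xapian.QueryParser()
    parser.set_database(db)
    # FLAG_PARTIAL lets a trailing partial word match, so "pythag"
    # can still find "Pythagoras" -- handy for searching parts of titles.
    flags = xapian.QueryParser.FLAG_DEFAULT | xapian.QueryParser.FLAG_PARTIAL
    enquire.set_query(parser.parse_query(query_text, flags))

    results = []
    for match in enquire.get_mset(0, limit):
        # match.percent is Xapian's relevance percentage; we assume each
        # document stores the article title as its data.
        title = match.document.get_data().decode("utf-8", "replace")
        results.append((match.percent, title))
    return results

if __name__ == "__main__":
    for percent, title in search_titles("pythag"):
        print(f"{percent:3d}%  {title}")
```

The nice part of this approach is that the ranking and partial-title matching come from the search index alone, which is why no MySQL database ever needs to be built from the dump.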