Wikipedia is undeniably the most readily available encyclopedia, not to mention a free one. But readily available isn't the same as always available: no internet access, no Wikipedia. That's why Wikipedia periodically dumps its content, so you can load it onto your laptop and keep a local copy.

But building a local copy is a time-consuming process that means setting up a local database and a server, and if you then want to build a search index on that database, the job can take several days. Surely there's a better way.

In fact, now there is. Wikipedia fan Thanassis Tsiodras has come up with a much more efficient way of installing and indexing a local Wikipedia dump. As Tsiodras writes:

    Wouldn't it be perfect, if we could use the wikipedia "dump" data JUST as they arrive after the download? Without creating a much larger (space-wize) MySQL database? And also be able to search for parts of title names and get back lists of titles with "similarity percentages"?

Why yes, it would. And fortunately Tsiodras has already done the heavy lifting. Using Python, Perl, or PHP, along with the Xapian search engine and Tsiodras' package, you can have a local install of Wikipedia (2.9 GB) with a lightweight web interface for searching and reading entries from anywhere.

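To make the quoted idea concrete, here's a minimal Python sketch (my own illustration, not Tsiodras' actual code) of what using the dump data "just as they arrive" looks like: scan article titles straight out of the compressed XML stream, recording each title's uncompressed byte offset so entries can be pulled back out later, with no MySQL database anywhere. The dump filename is an assumption; substitute whichever pages-articles dump you downloaded.

    import bz2
    import re

    # Assumed filename -- use whatever pages-articles dump you grabbed.
    DUMP = "enwiki-pages-articles.xml.bz2"

    title_re = re.compile(r"<title>(.*?)</title>")

    def iter_titles(dump_path):
        """Yield (title, offset) pairs straight from the .bz2 stream."""
        offset = 0
        with bz2.BZ2File(dump_path) as dump:
            for line in dump:
                match = title_re.search(line.decode("utf-8", errors="replace"))
                if match:
                    yield match.group(1), offset
                offset += len(line)

    if __name__ == "__main__":
        for title, offset in iter_titles(DUMP):
            print("%d\t%s" % (offset, title))

A scan like this is a single pass over the compressed file, which is why indexing this way is so much faster than a full database import.
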
Complete instructions can be found [here][2]. I should note that this does require some command-line tinkering, but the size and speed more than warrant wading through the minimal code needed to get it up and running.

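Once the index is built, a title search is only a few lines of code. Below is a hedged sketch of what querying it from Python might look like, using Xapian's Python bindings; the index path is hypothetical, and this illustrates the general Xapian API rather than the package's own search script. The percent attached to each match is where those "similarity percentages" Tsiodras mentions come from.

    import xapian  # Xapian's Python bindings

    # Hypothetical path -- point this at the Xapian database the
    # indexing step produced.
    INDEX_PATH = "wikipedia.xapian"

    def search_titles(querystring, limit=10):
        """Return (percent, title) pairs for titles matching the query."""
        db = xapian.Database(INDEX_PATH)

        parser = xapian.QueryParser()
        parser.set_stemmer(xapian.Stem("english"))
        parser.set_database(db)
        # FLAG_PARTIAL lets the final word match as a prefix, so
        # searching for parts of title names still finds results.
        query = parser.parse_query(querystring, xapian.QueryParser.FLAG_PARTIAL)

        enquire = xapian.Enquire(db)
        enquire.set_query(query)
        return [(m.percent, m.document.get_data().decode("utf-8"))
                for m in enquire.get_mset(0, limit)]

    if __name__ == "__main__":
        for percent, title in search_titles("pythagor"):
            print("%3d%%  %s" % (percent, title))
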
Also, if you're a big Wikipedia fan, be sure to check out [our review of WikipediaFS][3] from earlier this year.

[via [Hackszine][1]]

[1]: http://www.hackszine.com/blog/archive/2007/08/wikipedia_offline_reader_put_a.html?CMP=OTC-7G2N43923558
[2]: http://www.softlab.ntua.gr/~ttsiod/buildWikipediaOffline.html
[3]: http://blog.wired.com/monkeybites/2007/05/mount_wikipedia.html