author  luxagraf <sng@luxagraf.net>  2020-04-28 10:24:02 -0400
committer  luxagraf <sng@luxagraf.net>  2020-04-28 10:24:02 -0400
commit  f343ef4d92352f9fc442aeb9c8b1abee27d74c62 (patch)
tree  4df5c497e7caeab1f8932df98ad3d00fef228a3e /wired/old/published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt
parent  a222e73b9d352f7dd53027832d04dc531cdf217e (diff)
cleaned up wired import
Diffstat (limited to 'wired/old/published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt')
-rw-r--r--  wired/old/published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt  20
1 files changed, 20 insertions, 0 deletions
diff --git a/wired/old/published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt b/wired/old/published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt
new file mode 100644
index 0000000..705f3a0
--- /dev/null
+++ b/wired/old/published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt
@@ -0,0 +1,20 @@
+
+Wikipedia is undeniably the most readily available encyclopedia, not to mention free, but readily available isn't the same as always available -- no internet access, no Wikipedia. That's why Wikipedia periodically dumps its content so you can load it onto your laptop and keep a local copy.
+
+But building a local copy is a time-consuming process that requires setting up a local database and server. If you then want to build a search index on that database, it can take several days -- surely there's a better way.
+
+In fact, now there is. Wikipedia fan Thanassis Tsiodras has come up with a much more efficient way of installing and indexing a local Wikipedia dump. As Tsiodras writes:
+
+ Wouldn't it be perfect, if we could use the wikipedia "dump" data JUST as they arrive after the download? Without creating a much larger (space-wise) MySQL database? And also be able to search for parts of title names and get back lists of titles with "similarity percentages"?
+
+Why yes, it would. And fortunately Tsiodras has already done the heavy lifting. Using Python, Perl, or PHP, along with the Xapian search engine and Tsiodras' package, you can have a local install of Wikipedia (2.9 GB) with a lightweight web interface for searching and reading entries from anywhere.
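+
+Under the hood, the search side of this boils down to querying a Xapian index. As a rough illustration only -- the index path, the query string, and the assumption that each document's data holds an article title are mine, not Tsiodras' actual code -- a partial-title lookup with similarity percentages might look something like this in Python:
+
+    # Hypothetical sketch: search a Xapian index of Wikipedia titles and
+    # print the best matches with their similarity percentages.
+    import xapian
+
+    db = xapian.Database("wikipedia.xapian")   # assumed path to the index
+
+    parser = xapian.QueryParser()
+    parser.set_database(db)                    # needed for partial-term expansion
+    query = parser.parse_query("encyclo", xapian.QueryParser.FLAG_PARTIAL)
+
+    enquire = xapian.Enquire(db)
+    enquire.set_query(query)
+
+    for match in enquire.get_mset(0, 10):      # top ten matches
+        title = match.document.get_data().decode("utf-8", "replace")
+        print("%3d%%  %s" % (match.percent, title))
+
+Tsiodras' own scripts wrap essentially this kind of lookup in the small web front end mentioned above, so in practice you never have to touch Xapian directly.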
+
+Complete instructions can be found [here][2]. I should note that this does require some command-line tinkering, but the compact size and speed of the result more than warrant wading through the minimal setup necessary to get it up and running.
+
+Also, if you're a big Wikipedia fan, be sure to check out [our review of WikipediaFS][3] from earlier this year.
+
+[via [Hackzine][1]]
+
+[1]: http://www.hackszine.com/blog/archive/2007/08/wikipedia_offline_reader_put_a.html?CMP=OTC-7G2N43923558
+[2]: http://www.softlab.ntua.gr/~ttsiod/buildWikipediaOffline.html
+[3]: http://blog.wired.com/monkeybites/2007/05/mount_wikipedia.html \ No newline at end of file