author luxagraf <sng@luxagraf.net> 2019-05-04 15:48:55 -0500
committer luxagraf <sng@luxagraf.net> 2019-05-04 15:48:55 -0500
commit 79fafe2f44f5e31522dd93013950474342bfdfb0 (patch)
tree bc9ccf5b4eadeebf3a2f86b21f9b382edfa41735 /published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt
parent 62167091560c908db0613bcb35ff9ae8292f5961 (diff)
archived all the stuff from freelancing for wired
Diffstat (limited to 'published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt')
-rw-r--r-- published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt | 20
1 file changed, 0 insertions(+), 20 deletions(-)
diff --git a/published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt b/published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt
deleted file mode 100644
index 705f3a0..0000000
--- a/published/Webmonkey/Monkey_Bites/2007/08.20.07/Mon/wikipedialocal.txt
+++ /dev/null
@@ -1,20 +0,0 @@
-
-Wikipedia is undeniably the most accessible encyclopedia around, not to mention free, but it isn't always available -- no internet access, no Wikipedia. That's why Wikipedia periodically dumps its content so you can load it on your laptop and keep a local copy.
-
-But building a local copy is a time-consuming process that requires setting up a local database and web server. If you then want to build a search index on that database, the job can take several days -- surely there's a better way.
-
-In fact, now there is. Wikipedia fan Thanassis Tsiodras has come up with a much more efficient way of installing and indexing a local Wikipedia dump. As Tsiodras writes:
-
- Wouldn't it be perfect, if we could use the wikipedia "dump" data JUST as they arrive after the download? Without creating a much larger (space-wize) MySQL database? And also be able to search for parts of title names and get back lists of titles with "similarity percentages"?
-
-Why yes, it would. And fortunately Tsiodras has already done the heavy lifting. Using Python, Perl, or PHP, along with the Xapian search engine and Tsiodras' package, you can have a local install of Wikipedia (2.9 GB) with a lightweight web interface for searching and reading entries from anywhere.
-
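-To make that concrete, here's a rough sketch of the indexing half in Python. This isn't Tsiodras' actual code -- just a minimal illustration of streaming article titles straight out of the compressed dump and into a Xapian index, assuming a standard pages-articles dump named enwiki-pages-articles.xml.bz2 and a hypothetical index directory called titles.idx:
-
-    import bz2
-    import xml.etree.ElementTree as etree
-    import xapian
-
-    # Stream titles out of the compressed dump and into a Xapian
-    # index -- no MySQL database required at any point.
-    db = xapian.WritableDatabase("titles.idx", xapian.DB_CREATE_OR_OPEN)
-    termgen = xapian.TermGenerator()
-    termgen.set_stemmer(xapian.Stem("english"))
-    dump = bz2.BZ2File("enwiki-pages-articles.xml.bz2")
-
-    for event, elem in etree.iterparse(dump):
-        if elem.tag.endswith("title"):  # MediaWiki XML namespaces its tags
-            doc = xapian.Document()
-            doc.set_data(elem.text)
-            termgen.set_document(doc)
-            termgen.index_text(elem.text)
-            db.add_document(doc)
-        elem.clear()  # discard parsed content to keep memory use down
-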
-Complete instructions can be found [here][2]. I should note that this does require some command-line tinkering, but the savings in disk space and setup time more than warrant wading through the minimal code necessary to get it up and running.
-
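-To give you a taste of the search side -- again a sketch under the same assumptions, not Tsiodras' own code -- querying that hypothetical titles.idx index for partial title matches, complete with Xapian's similarity percentages, is just as short:
-
-    import xapian
-
-    # Query the title index and print Xapian's match percentages.
-    db = xapian.Database("titles.idx")
-    enquire = xapian.Enquire(db)
-    parser = xapian.QueryParser()
-    parser.set_stemmer(xapian.Stem("english"))
-    parser.set_database(db)
-    enquire.set_query(parser.parse_query("george orwell"))
-
-    for match in enquire.get_mset(0, 10):  # top ten matches
-        print("%3d%% %s" % (match.percent, match.document.get_data().decode("utf-8")))
-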
-Also, if you're a big Wikipedia fan, be sure to check out [our review of WikipediaFS][3] from earlier this year.
-
-[via [Hackzine][1]]
-
-[1]: http://www.hackszine.com/blog/archive/2007/08/wikipedia_offline_reader_put_a.html?CMP=OTC-7G2N43923558
-[2]: http://www.softlab.ntua.gr/~ttsiod/buildWikipediaOffline.html
-[3]: http://blog.wired.com/monkeybites/2007/05/mount_wikipedia.html \ No newline at end of file