WikiExtractor.py is a Python script that extracts and cleans text from a Wikipedia database dump.
The tool requires Python 2.7 or Python 3.3+ and no additional libraries.
For further information, see the project Home Page or the Wiki.
cirrus-extractor.py is a version of the script that performs extraction from a Wikipedia Cirrus dump.
Cirrus dumps contain text with already expanded templates.
Cirrus dumps are available at: cirrussearch.
WikiExtractor performs template expansion by preprocessing the whole dump and extracting template definitions.
In order to speed up processing:
- multiprocessing is used for dealing with articles in parallel
- a cache of parsed templates is kept (only useful for repeated extractions); a minimal sketch of this pattern is shown below.
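The following is a minimal, self-contained sketch of that general pattern, not the script's actual implementation; the TEMPLATES table, the simplified regular expression, and the sample articles are made-up stand-ins for illustration only:

import re
from functools import lru_cache
from multiprocessing import Pool

# Stand-in table of collected template definitions (not WikiExtractor's real data).
TEMPLATES = {'Lang': 'language: {0}'}

@lru_cache(maxsize=None)
def expand_template(name, arg):
    # Repeated expansions of the same template hit the cache instead of being re-parsed.
    return TEMPLATES.get(name, '').format(arg)

def extract(article_text):
    # Replace {{Name|arg}} calls with their expansion (grossly simplified markup handling).
    return re.sub(r'\{\{(\w+)\|([^}]*)\}\}',
                  lambda m: expand_template(m.group(1), m.group(2)),
                  article_text)

if __name__ == '__main__':
    articles = ['Foo is a word ({{Lang|en}}).', 'Bar is another ({{Lang|fr}}).']
    with Pool(processes=3) as pool:   # mirrors the help text's default of 3 processes
        for text in pool.map(extract, articles):
            print(text)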
The script may be invoked directly; however, it can also be installed by running:
(sudo) python setup.py install
The script is invoked with a Wikipedia dump file as an argument. The output is stored in several files of similar size in a given directory. Each file will contain several documents in the format described below.
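For example, assuming a decompressed dump file named dump.xml and an output directory named extracted (both placeholders), a basic invocation might look like:
python WikiExtractor.py -o extracted -b 1M dump.xml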
usage: WikiExtractor.py [-h] [-o OUTPUT] [-b n[KMG]] [-c] [--json] [--html]
[-l] [--all_links] [-s] [--lists] [-ns ns1,ns2]
[-xns ns1,ns2] [--templates TEMPLATES]
[--no-templates] [-r]
[--min_text_length MIN_TEXT_LENGTH]
[--filter_disambig_pages] [-it abbr,b,big]
[-de gallery,timeline,noinclude] [--keep_tables]
[--processes PROCESSES] [--redirect_insert] [-q]
[--debug] [-a] [-v]
input
Wikipedia Extractor:
Extracts and cleans text from a Wikipedia database dump and stores output in a number of files
of similar size in a given directory.
Each file will contain several documents in the format:
<doc id="" revid="" url="" title="">
...
</doc>
If the program is invoked with the --json flag, then each file will contain several documents
formatted as JSON objects, one per line, with the following structure:
{"id": "", "revid": "", "url":"", "title": "", "text": "..."}
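As an illustration, output produced with --json can be read one document per line; the directory layout and file-name pattern in this sketch are assumptions, not guaranteed by the script:

import glob
import json

# Read every extracted file and load one JSON document per line.
# The 'extracted/*/wiki_*' pattern is an assumed output layout.
for path in glob.glob('extracted/*/wiki_*'):
    with open(path, encoding='utf-8') as f:
        for line in f:
            doc = json.loads(line)
            print(doc['id'], doc['title'], len(doc['text']))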
Template expansion requires first preprocessing the whole dump and collecting template definitions.
positional arguments:
input XML wiki dump file
optional arguments:
-h, --help show this help message and exit
--processes PROCESSES
Number of processes to use (default 3)
Output:
-o OUTPUT, --output OUTPUT
directory for extracted files (or '-' for dumping to stdout)
-b n[KMG], --bytes n[KMG]
maximum bytes per output file (default 1M)
-c, --compress compress output files using bzip2
--json write output in json format instead of the default one
Processing:
--html produce HTML output, subsumes --links
-l, --links preserve links
--all_links preserve all links (with links to all namespaces)
-s, --sections preserve sections
--lists preserve lists
-ns ns1,ns2, --namespaces ns1,ns2
accepted namespaces in links
-xns ns1,ns2, --xml_namespaces ns1,ns2
accepted page xml namespaces -- 0 for main/articles
--templates TEMPLATES
use or create file containing templates
--no-templates Do not expand templates
-r, --revision Include the document revision id (default=False)
--min_text_length MIN_TEXT_LENGTH
Minimum expanded text length required to write document (default=0)
--filter_disambig_pages
Remove pages from output that contain disambiguation markup (default=False)
-it abbr,b,big, --ignored_tags abbr,b,big
comma separated list of tags that will be dropped, keeping their content
-de gallery,timeline,noinclude, --discard_elements gallery,timeline,noinclude
comma separated list of elements that will be removed from the article text
--keep_tables Preserve tables in the output article text (default=False)
--redirect_insert Insert redirect pages into the output
Special:
-q, --quiet suppress reporting progress info
--debug print debug info
-a, --article analyze a file containing a single article (debug option)
-v, --version print program version
Saving templates to a file will speed up extraction on subsequent runs, assuming the template definitions have not changed.
Option --no-templates significantly speeds up the extractor, avoiding the cost of expanding MediaWiki templates.
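For example, passing --templates with a file name makes the script save the collected definitions on the first run and reuse them on later runs; the file names here are placeholders:
./WikiExtractor.py --templates enwiki-templates.txt -o extracted dump.xml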
For further information, visit the documentation.
- Make a dump of your MediaWiki:
php5 /var/lib/mediawiki/maintenance/dumpBackup.php --current > /tmp/dump.xml
- Extract data from the dump into one big file (the --html parameter is required):
./WikiExtractor.py -o - -xns 0,2,14,100,102,104,106,108,110,112,114,116 --links --all_links -b 200M --html --redirect_insert --keep_tables -q /tmp/dump.xml > /tmp/wiki.data
- Create individual HTML pages and an index.html:
cat /tmp/wiki.data | ./makehtmlfiles.pl /tmp/offline
- Optionally, copy the images:
cd /var/lib/mediawiki/images
find 0 1 2 3 4 5 6 7 8 9 a b c d e f -type f -exec cp -f {} /tmp/offline/images/ \;
- View the offline content only (without skins, templates, scripts, etc.) of your MediaWiki:
links file:///tmp/offline/index.html