WikiExtractor

WikiExtractor.py is a Python script that extracts and cleans text from a Wikipedia database dump.

The tool is written in Python and requires Python 2.7, but no additional libraries. Warning: problems have been reported on Windows when using multiprocessing.

For further information, see the project Home Page or the Wiki.

Wikipedia Cirrus Extractor

cirrus-extractor.py is a version of the script that performs extraction from a Wikipedia Cirrus dump. Cirrus dumps contain text with already expanded templates.

Cirrus dumps are available at https://dumps.wikimedia.org/other/cirrussearch/.
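As a rough, illustrative sketch (the exact flags are an assumption; check the script's --help), the Cirrus extractor would be run like the main script, with a downloaded Cirrus dump as its argument:

    # illustrative only: the dump filename is a placeholder and -o is assumed
    # to behave as in WikiExtractor.py
    python cirrus-extractor.py -o extracted enwiki-latest-cirrussearch-content.json.gz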

Details

WikiExtractor performs template expansion by preprocessing the whole dump and extracting template definitions.
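To illustrate what expansion means (the expanded text shown here is only indicative; the actual result depends on the template definitions in the dump), a template call in the wikitext such as

    {{convert|100|km|mi}}

is replaced by its expanded form, e.g. "100 kilometres (62 mi)", instead of being copied through verbatim.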

The latest version includes the following performance improvements:

  • multiprocessing is used to process articles in parallel (this requires a Python installation with a proper implementation of the StringIO library)
  • a cache of parsed templates is kept.

Usage

The script is invoked with a Wikipedia dump file as an argument. The output is stored in several files of similar size in a given directory. Each file will contain several documents in the document format illustrated below.
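An illustrative example of the format (the id, url, title and body text are placeholders):

    <doc id="12" url="https://en.wikipedia.org/wiki?curid=12" title="Anarchism">
    Anarchism

    Anarchism is a political philosophy that ...
    </doc>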

usage: WikiExtractor.py [-h] [-o OUTPUT] [-b n[KMG]] [-c] [--html] [-l]
                        [-ns ns1,ns2] [-s] [--templates TEMPLATES]
                        [--no-templates] [--processes PROCESSES] [-q] [--debug]
                        [-a] [-v]
                        input

positional arguments:
  input                 XML wiki dump file

optional arguments:
  -h, --help            show this help message and exit
  --processes PROCESSES number of processes to use (default: number of CPU cores)

Output:
  -o OUTPUT, --output OUTPUT
                        a directory where to store the extracted files (or '-' for
                        dumping to stdout)
  -b n[KMG], --bytes n[KMG]
                        maximum bytes per output file (default 1M)
  -c, --compress        compress output files using bzip

Processing:
  --html                produce HTML output, subsumes --links
  -l, --links           preserve links
  -ns ns1,ns2, --namespaces ns1,ns2
                        accepted namespaces
  --templates TEMPLATES
                        use or create file containing templates
  --no-templates        Do not expand templates
  --escapedoc           use to escape the contents of the output
                        <doc>...</doc>

Special:
  -q, --quiet           suppress reporting progress info
  --debug               print debug info
  -a, --article         analyze a file containing a single article (debug option)
  -v, --version         print program version
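A minimal sketch of a typical invocation (the dump filename is a placeholder; whether a bzip2-compressed dump can be passed directly depends on the version, so decompress it first if in doubt):

    # write ~10 MB files into the directory "extracted"
    python WikiExtractor.py -o extracted -b 10M enwiki-latest-pages-articles.xml.bz2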

Saving templates to a file will speed up extraction the next time, assuming template definitions have not changed.
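For example (a sketch; templates.txt is an arbitrary file name), the first run below collects template definitions into templates.txt, and later identical runs reuse that file instead of preprocessing the whole dump again:

    # first run: creates templates.txt; subsequent runs: reuse it
    python WikiExtractor.py --templates templates.txt -o extracted enwiki-latest-pages-articles.xml.bz2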

Option --no-templates significantly speeds up the extractor, avoiding the cost of expanding MediaWiki templates.
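For example, trading template expansion for speed (again with a placeholder dump filename):

    # skip template expansion entirely
    python WikiExtractor.py --no-templates -o extracted enwiki-latest-pages-articles.xml.bz2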

For further information, visit the documentation.
