
DSpace CSV Metadata Quality Checker

A simple but opinionated metadata quality checker and fixer designed to work with CSVs in the DSpace ecosystem (though it could theoretically work on any CSV that uses Dublin Core fields as columns). The implementation is essentially a pipeline of checks and fixes that begins by splitting multi-value fields on the standard DSpace "||" separator and trimming leading/trailing whitespace, then proceeds to more specialized cases like ISSNs, ISBNs, languages, unnecessary Unicode, AGROVOC terms, etc.
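
As a rough sketch of those first two pipeline stages (splitting on "||" and trimming whitespace) using plain Pandas; this is only an illustration, not the tool's actual code, and the column names are made up:

import pandas as pd

# Hypothetical CSV with Dublin Core fields as columns
df = pd.DataFrame({
    "dc.title": ["  A title with stray whitespace  "],
    "dcterms.subject": ["maize|| livestock"],
})

def trim_multivalues(value):
    # Trim whitespace around the value and around each "||"-separated component
    return "||".join(part.strip() for part in str(value).split("||"))

df = df.applymap(trim_multivalues)
print(df.iloc[0].to_dict())
# {'dc.title': 'A title with stray whitespace', 'dcterms.subject': 'maize||livestock'}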

Requires Python 3.9 or greater. CSV support comes from the Pandas library.

If you use the DSpace CSV metadata quality checker please cite:

Orth, A. 2019. DSpace CSV metadata quality checker. Nairobi, Kenya: ILRI. https://hdl.handle.net/10568/110997.

Functionality

  • Validate dates, ISSNs, ISBNs, and multi-value separators ("||")
  • Validate languages against ISO 639-1 (alpha2) and ISO 639-3 (alpha3)
  • Experimental validation of titles and abstracts against the item's Dublin Core language field
  • Validate subjects against the AGROVOC REST API (see the --agrovoc-fields option)
  • Validate licenses against the list of SPDX license identifiers
  • Fix leading, trailing, and excessive (i.e., more than one) whitespace
  • Fix invalid and unnecessary multi-value separators (|)
  • Fix problematic newlines (line feeds) using --unsafe-fixes
  • Perform Unicode normalization on strings using --unsafe-fixes
  • Remove unnecessary Unicode like non-breaking spaces, replacement characters, etc.
  • Check for "suspicious" characters that indicate encoding or copy/paste issues, for example "foreˆt" should be "forêt"
  • Check for "mojibake" characters (and attempt to fix with --unsafe-fixes)
  • Check for countries with missing regions (and attempt to fix with --unsafe-fixes)
  • Remove duplicate metadata values
  • Check for duplicate items, using the title, type, and date issued as an indicator
  • Normalize DOIs to https://doi.org URI format (a rough sketch follows this list)
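
As an illustration of the DOI normalization mentioned above (not the tool's actual implementation), the idea is to extract the bare DOI and rebuild it as a https://doi.org URI; the example DOI is made up:

import re

def normalize_doi(value):
    # Look for a bare DOI (10.xxxx/...) anywhere in the value, e.g. in
    # "doi: 10.1234/abcd.5678" or "https://dx.doi.org/10.1234/abcd.5678"
    match = re.search(r"10\.\d{4,9}/\S+", value)
    if match:
        return "https://doi.org/" + match.group(0)
    return value

print(normalize_doi("doi: 10.1234/abcd.5678"))
# https://doi.org/10.1234/abcd.5678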

Installation

The easiest way to install CSV Metadata Quality is with poetry:

$ git clone https://github.com/ilri/csv-metadata-quality.git
$ cd csv-metadata-quality
$ poetry install
$ poetry shell

Otherwise, if you don't have poetry, you can use a vanilla Python virtual environment:

$ git clone https://github.com/ilri/csv-metadata-quality.git
$ cd csv-metadata-quality
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt

Usage

Run CSV Metadata Quality with the --help flag to see available options:

$ csv-metadata-quality --help

To validate and clean a CSV file you must specify input and output files using the -i and -o options. For example, using the included test file:

$ csv-metadata-quality -i data/test.csv -o /tmp/test.csv

Invalid Multi-Value Separators

While it is theoretically possible for a single | character to be used legitimately in a metadata value, in my experience it is always a typo. For example, if a user mistakenly writes Kenya|Tanzania when attempting to indicate two countries, the result will be one metadata value with the literal text Kenya|Tanzania. This utility will correct the invalid multi-value separator so that there are two metadata values, i.e. Kenya||Tanzania.

This will also remove unnecessary trailing multi-value separators, for example Kenya||Tanzania||.
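
A minimal sketch of this fix on a single value (the tool itself operates on whole Pandas columns), assuming only the behaviour described above:

import re

def fix_separators(value):
    # Replace a lone "|" that is not part of "||" with the standard separator
    value = re.sub(r"(?<!\|)\|(?!\|)", "||", value)
    # Drop unnecessary trailing separators, e.g. "Kenya||Tanzania||"
    return value.rstrip("|")

print(fix_separators("Kenya|Tanzania"))     # Kenya||Tanzania
print(fix_separators("Kenya||Tanzania||"))  # Kenya||Tanzania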

Unsafe Fixes

You can enable several "unsafe" fixes with the --unsafe-fixes option. Currently this will remove newlines, perform Unicode normalization, attempt to fix "mojibake" characters, and add missing UN M.49 regions.
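
For example, to run the checker on the bundled test file with unsafe fixes enabled:

$ csv-metadata-quality -i data/test.csv -o /tmp/test.csv --unsafe-fixes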

Newlines

This is considered "unsafe" because some systems give special importance to vertical space and render it properly. DSpace does not support rendering newlines in its XMLUI and has, at times, suffered from parsing errors that caused the import process to fail when an input file had newlines. The --unsafe-fixes option strips Unix line feeds (U+000A).
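
A minimal sketch of that behaviour on a single value (the real fix may differ in how it treats surrounding whitespace):

def strip_line_feeds(value):
    # Remove Unix line feeds (U+000A) that can break DSpace CSV imports
    return value.replace("\n", "")

print(strip_line_feeds("Nairobi, Kenya\n"))  # Nairobi, Kenya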

Unicode Normalization

Unicode is a standard for encoding text. As the standard aims to support most of the world's languages, characters can often be represented in different ways and still be valid Unicode. This leads to interesting problems that can be confusing unless you know what's going on behind the scenes. For example, two ways of writing é look the same, but are not: technically they refer to different code points in the Unicode standard:

  • é can be the single Unicode code point U+00E9
  • é can also be the two code points U+0065 + U+0301 (the letter e followed by a combining acute accent)

Read more about Unicode normalization.
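
For illustration, Python's standard unicodedata module can demonstrate the difference and perform the normalization; this sketch uses the NFC form, which composes the two-code-point sequence into the single é code point (the tool's exact normalization form may differ):

import unicodedata

decomposed = "e\u0301"   # U+0065 + U+0301 (e followed by a combining acute accent)
composed = "\u00e9"      # U+00E9 (precomposed é)

print(decomposed == composed)                                # False
print(unicodedata.normalize("NFC", decomposed) == composed)  # True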

Encoding Issues aka "Mojibake"