A Python library for DHIS2 that wraps requests for general-purpose API interaction with DHIS2. It aims to be useful for any data/metadata import and export task, and includes utilities like file loading, UID generation and logging. A strong focus is on JSON.
Supported and tested on Linux/macOS, Windows and DHIS2 versions >= 2.25. Python 3.6+ is required.
pip install dhis2.py
For instructions on installing Python / pip for your operating system see realpython.com/installing-python.
Create an Api object:
from dhis2 import Api
api = Api('play.dhis2.org/demo', 'admin', 'district')
Then run requests on it:
r = api.get('organisationUnits/Rp268JB6Ne4', params={'fields': 'id,name'})
print(r.json())
# { "name": "Adonkia CHP", "id": "Rp268JB6Ne4" }
r = api.post('metadata', json={'dataElements': [ ... ] })
print(r.status_code) # 200
- api.get()
- api.post()
- api.put()
- api.patch()
- api.delete()

See below for more methods.
They all return a Response object from requests unless noted otherwise. This means its methods and attributes are equally available (e.g. Response.url, Response.text, Response.status_code, etc.).
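Since these are plain requests Responses, the usual attributes apply directly. A minimal sketch against the standard system/info endpoint:

r = api.get('system/info')
print(r.status_code)              # e.g. 200
print(r.url)                      # the full URL that was requested
print(r.headers['content-type'])  # e.g. 'application/json;charset=UTF-8'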
Create an API object
from dhis2 import Api
api = Api('play.dhis2.org/demo', 'admin', 'district')
Optional arguments:
- api_version: DHIS2 API version
- user_agent: submit your own User-Agent header. This is useful if you need to parse e.g. Nginx logs later.
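For example, both can be passed directly to the constructor (a sketch reusing the demo credentials from above):

from dhis2 import Api

api = Api('play.dhis2.org/demo', 'admin', 'district',
          api_version=33, user_agent='myApp/1.0')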
Load from an auth JSON file in order to not store credentials in scripts. It must have the following structure:
{
  "dhis": {
    "baseurl": "https://localhost:8080",
    "username": "admin",
    "password": "district"
  }
}
from dhis2 import Api
api = Api.from_auth_file('path/to/auth.json', api_version=29, user_agent='myApp/1.0')
If no file path is specified, it tries to find a file called dish.json in:
- the DHIS_HOME environment variable
- your home folder
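For example, with DHIS_HOME set you can omit the path entirely (the directory below is hypothetical):

import os
from dhis2 import Api

os.environ['DHIS_HOME'] = '/opt/myapp'  # hypothetical directory containing dish.json
api = Api.from_auth_file()              # resolves to /opt/myapp/dish.json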
API version as a string:
print(api.version)
# '2.30'
API version as an integer:
print(api.version_int)
# 30
API revision / build:
print(api.revision)
# '17f7f0b'
API URL:
print(api.api_url)
# 'https://play.dhis2.org/demo/api/30'
Base URL:
print(api.base_url)
# 'https://play.dhis2.org/demo'
System info (persisted across the session):
print(api.info)
# {
#   "lastAnalyticsTableRuntime": "11 m, 51 s",
#   "systemId": "eed3d451-4ff5-4193-b951-ffcc68954299",
#   "contextPath": "https://play.dhis2.org/2.30",
#   ...
# }
Normal method: api.get(), e.g.
r = api.get('organisationUnits/Rp268JB6Ne4', params={'fields': 'id,name'})
data = r.json()
Parameters:
- timeout: overrides the timeout value (default: 5 seconds) to prevent the client from waiting indefinitely for a server response.
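For example, a large unpaged export may need more headroom (the 30-second value is only an illustration):

# wait up to 30 seconds instead of the 5-second default
r = api.get('organisationUnits', params={'paging': 'false'}, timeout=30)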
Paging for larger GET requests via api.get_paged()
Two possible ways:
- Process every page as it arrives:
for page in api.get_paged('organisationUnits', page_size=100):
    print(page)
    # { "organisationUnits": [ {...}, {...} ] } (100 organisationUnits)
- Load all pages before proceeding (this may take a long time): to do this, do not use for and add merge=True:
all_pages = api.get_paged('organisationUnits', page_size=100, merge=True)
print(all_pages)
# { "organisationUnits": [ {...}, {...} ] } (all organisationUnits)
Note: unlike a normal GET, this returns a JSON object directly, not a requests.Response object.
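Since each page arrives as parsed JSON, you can aggregate across pages without holding them all in memory. A small sketch:

# count all org units page by page, keeping memory usage flat
total = 0
for page in api.get_paged('organisationUnits', page_size=100):
    total += len(page['organisationUnits'])
print(total)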
Get SQL view data as if you were reading a CSV file, optimized for larger payloads, via api.get_sqlview()
# poll a sqlView of type VIEW or MATERIALIZED_VIEW:
for row in api.get_sqlview('YOaOY605rzh', execute=True, criteria={'name': '0-11m'}):
    print(row)
    # {'code': 'COC_358963', 'name': '0-11m'}

# similarly, poll a sqlView of type QUERY:
for row in api.get_sqlview('qMYMT0iUGkG', var={'valueType': 'INTEGER'}):
    print(row)
# if you want a list directly, cast it to a ``list`` or add ``merge=True``:
data = list(api.get_sqlview('qMYMT0iUGkG', var={'valueType': 'INTEGER'}))
# OR
# data = api.get_sqlview('qMYMT0iUGkG', var={'valueType': 'INTEGER'}, merge=True)
Note: unlike a normal GET, this returns a JSON object directly, not a requests.Response object.
Beginning with 2.26 you can also use normal filtering on SQL views. In that case, it's recommended to use the stream=True parameter of the api.get() method.
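A sketch of such a filtered, streamed request, reusing the SQL view from above (sqlViews/<uid>/data is the standard DHIS2 endpoint for this):

# stream the filtered SQL view as CSV instead of buffering the whole response
r = api.get('sqlViews/YOaOY605rzh/data',
            params={'filter': 'name:eq:0-11m'},
            file_type='csv', stream=True)
for line in r.iter_lines():
    print(line.decode('utf-8'))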
Responses usually default to JSON, but you can request other file types:
r = api.get('organisationUnits/Rp268JB6Ne4', file_type='xml')
print(r.text)
# <?xml version='1.0' encoding='UTF-8'?><organisationUnit ...
r = api.get('organisationUnits/Rp268JB6Ne4', file_type='pdf')
with open('/path/to/file.pdf', 'wb') as f:
    f.write(r.content)
Normal methods:
- api.post()
- api.put()
- api.patch()
- api.delete()
If you have such a large payload (e.g. a metadata import) that you frequently get an HTTP 413 Request Entity Too Large response (e.g. from Nginx), you might benefit from the following method, which splits your payload into partitions / chunks and posts them one by one. You define the number of elements in each POST by specifying a number in thresh (default: 1000).
Note that it is only possible to submit one key per payload (e.g. dataElements only, not additionally organisationUnits in the same payload).
api.post_partitioned()
import json

data = {
    "organisationUnits": [
        {...},
        {...}  # very large number of org units
    ]
}

for response in api.post_partitioned('metadata', json=data, thresh=5000):
    text = json.loads(response.text)
    print('[{}] - {}'.format(text['status'], json.dumps(text['stats'])))
If you need to pass multiple parameters with the same key to your request, you may submit them as a list of tuples instead, e.g.:
r = api.get('dataValueSets', params=[
    ('dataSet', 'pBOMPrpg1QX'), ('dataSet', 'BfMAe6Itzgt'),
    ('orgUnit', 'YuQRtpLP10I'), ('orgUnit', 'vWbkYPRmKyS'),
    ('startDate', '2013-01-01'), ('endDate', '2013-01-31')
])
Alternatively:
r = api.get('dataValueSets', params={
    'dataSet': ['pBOMPrpg1QX', 'BfMAe6Itzgt'],
    'orgUnit': ['YuQRtpLP10I', 'vWbkYPRmKyS'],
    'startDate': '2013-01-01',
    'endDate': '2013-01-31'
})
Load a JSON file:

from dhis2 import load_json
json_data = load_json('/path/to/file.json')
print(json_data)
# { "id": ... }
Load a CSV file via a Python generator:
from dhis2 import load_csv
for row in load_csv('/path/to/file.csv'):
    print(row)
    # { "id": ... }
Via a normal list, loaded fully into memory:
data = list(load_csv('/path/to/file.csv'))
Create a DHIS2 UID:

from dhis2 import generate_uid

uid = generate_uid()
print(uid)
# 'Rp268JB6Ne4'
To create a list of 1000 UIDs:
uids = [generate_uid() for _ in range(1000)]
Check if something is a valid DHIS2 UID:

from dhis2 import is_valid_uid

uid = 'MmwcGkxy876'
print(is_valid_uid(uid))
# True
uid = 25329
print(is_valid_uid(uid))
# False
uid = 'MmwcGkxy876 '
print(is_valid_uid(uid))
# False
Useful for deep-removing certain keys in an object, e.g. removing all sharing by recursively removing all user and userGroupAccesses fields.
from dhis2 import clean_obj
metadata = {
    "dataElements": [
        {
            "name": "ANC 1st visit",
            "id": "fbfJHSPpUQD",
            "publicAccess": "rw------",
            "userGroupAccesses": [
                {
                    "access": "r-r-----",
                    "userGroupUid": "Rg8wusV7QYi",
                    "displayName": "HIV Program Coordinators",
                    "id": "Rg8wusV7QYi"
                },
                {
                    "access": "rwr-----",
                    "userGroupUid": "qMjBflJMOfB",
                    "displayName": "Family Planning Program",
                    "id": "qMjBflJMOfB"
                }
            ]
        }
    ],
    "dataSets": [
        {
            "name": "ART monthly summary",
            "id": "lyLU2wR22tC",
            "publicAccess": "rwr-----",
            "userGroupAccesses": [
                {
                    "access": "r-rw----",
                    "userGroupUid": "GogLpGmkL0g",
                    "displayName": "_DATASET_Child Health Program Manager",
                    "id": "GogLpGmkL0g"
                }
            ]
        }
    ]
}
cleaned = clean_obj(metadata, ['userGroupAccesses', 'publicAccess'])
pretty_json(cleaned)
This recursively removes all keys matching userGroupAccesses or publicAccess:
{
  "dataElements": [
    {
      "name": "ANC 1st visit",
      "id": "fbfJHSPpUQD"
    }
  ],
  "dataSets": [
    {
      "name": "ART monthly summary",
      "id": "lyLU2wR22tC"
    }
  ]
}
Print easily readable JSON objects with colors; utilizes Pygments.
from dhis2 import pretty_json
obj = {"dataElements": [{"name": "Accute Flaccid Paralysis (Deaths < 5 yrs)", "id": "FTRrcoaog83", "aggregationType": "SUM"}]}
pretty_json(obj)
... prints (in a terminal it will have colors):
{
  "dataElements": [
    {
      "aggregationType": "SUM",
      "id": "FTRrcoaog83",
      "name": "Accute Flaccid Paralysis (Deaths < 5 yrs)"
    }
  ]
}
Logging utilizes logzero.
- Color output depending on log level
- DHIS2 log format including the line of the caller
- Optional logfile= argument specifying a rotating log file path (20 x 10MB files)
from dhis2 import setup_logger, logger
setup_logger(logfile='/var/log/app.log')
logger.info('my log message')
logger.warning('missing something')
logger.error('something went wrong')
logger.exception('with stacktrace')
* INFO  2018-06-01 18:19:40,001  my log message [script:86]
* ERROR 2018-06-01 18:19:40,007  something went wrong [script:87]
Use setup_logger(include_caller=False) if you want to remove [script:86] from logs.
There are two exceptions:
- RequestException: DHIS2 didn't like what you requested. See the exception's code, url and description.
- ClientException: something didn't work with the client, not involving DHIS2.

They both inherit from Dhis2PyException.
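A sketch of catching a failed request (assuming RequestException is importable from the dhis2 package like the other names above):

from dhis2 import Api, RequestException

api = Api('play.dhis2.org/demo', 'admin', 'district')
try:
    api.get('organisationUnits/doesNotExist')
except RequestException as e:
    # code, url and description describe what DHIS2 returned
    print(e.code, e.url, e.description)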
- Real-world script examples can be found in the examples folder.
- dhis2.py is used in dhis2-pk (dhis2-pocket-knife).
Versions changelog
Feedback welcome!
- Open an issue
- Install the dev environment (see below)
- Fork, add changes to the master branch, ensure tests pass with full coverage, and open a Pull Request
# install pipenv and clone the repository
pip install pipenv
git clone https://github.com/davidhuser/dhis2.py
cd dhis2.py

# install dependencies and run the test suite
pipenv install --dev
pipenv run tests
# install pre-commit hooks
pipenv run pre-commit install
# run type annotation check
pipenv run mypy dhis2
# run flake8 style guide enforcement
pipenv run flake8
dhis2.py's source is provided under the MIT license. See LICENCE for details.
- Copyright (c) 2020, David Huser