This folder contains various items needed to run jmeter load tests on datatools-server.
Install jmeter with this nifty script:
```sh
./install-jmeter.sh
```
The jmeter test plan can be run from the jmeter GUI, or it can be run without a GUI. In either case, it is assumed that a datatools-server instance can be queried at http://localhost:4000.
This script starts the jmeter gui and loads the test script.
```sh
./run-gui.sh
```
The test plan can be run straight from the command line. A helper script is provided to assist in running jmeter from the command line. This script has three required and two optional positional arguments:
# | argument | possible values | description
---|---|---|---
1 | test plan mode | `batch`, `fetch`, `query` or `upload` | Which test plan mode to use when running the jmeter script. (See notes below for more explanation of these test plan modes.)
2 | number of threads | an integer greater than 0 | The number of simultaneous threads to run at a time. The threads will have staggered start times 1 second apart.
3 | number of loops | an integer greater than 0 | The number of loops to run. This is combined with the number of threads, so if the number of threads is 10 and the number of loops is 8, the total number of test plans to run will be 80.
4 | project name or batch csv file | string of the project name or string of file path to batch csv file | This argument is required if running the script in the `batch` test plan mode; otherwise, it is optional. In `fetch` or `upload` mode, the jmeter script will create new projects named with the provided project name (or "test project" if a name is not provided) plus the current iteration number. In `fetch` or `upload` mode, the feed url and upload file are not configurable: in `fetch` mode, the url http://documents.atlantaregional.com/transitdata/gtfs_ASC.zip will be used to fetch the feed that creates the feed version, and in `upload` mode, the file fixtures/gtfs.zip will be uploaded to create the feed version. In `query` mode, jmeter will try to find the project matching the provided name (as long as the project name is not "test project"), or a random project will be picked if this argument is not provided.
5 | s3 bucket | string of an s3 bucket | OPTIONAL. If provided, the script will tar up the output folder and attempt to upload it to the specified s3 bucket. This assumes that aws credentials have been set up for use by the aws command line tool. If not running in batch mode and a project name has been specified, the name of this file will be `{project name}.tar.gz`; otherwise, the name will be `output.tar.gz`.
Examples:
Run the test plan in upload mode once, in 1 thread running 1 loop.
```sh
./run-tests.sh upload 1 1
```
Run the test plan in query mode 80 total times in 10 threads each completing 8 loops.
```sh
./run-tests.sh query 10 8 my-project-name my-s3-bucket
```
Run in batch mode. Note that all feeds in the csv file will be processed in each loop. So in the following command, each feed in the batch.csv file would be processed 6 times. See the section below for documentation on the csv file and also see the fixtures folder for an example file.
```sh
./run-tests.sh batch 3 2 batch.csv my-s3-bucket
```
As noted above, the jmeter script can be run in batch mode. The provided csv file must contain the following headers and data:
header | description
---|---
project name | name of project to be created
mode | Must be either `fetch` or `upload`
location | The path to the file if the mode is `upload`, or the http address if the mode is `fetch`
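For example, a minimal batch csv might look like the following (the project names here are illustrative; see the fixtures folder for a complete example file):

```csv
project name,mode,location
uploaded feed test,upload,fixtures/gtfs.zip
fetched feed test,fetch,http://documents.atlantaregional.com/transitdata/gtfs_ASC.zip
```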
There is also a helper python script that can be used to run the jmeter script in batch mode using all files stored within an s3 bucket. This script requires that aws credentials have been set up for use by the aws command line tool.
# | argument | possible values | description
---|---|---|---
1 | test plan mode | `fetch` or `upload` | The test plan mode to use. This will be written to each row of the csv file described above.
2 | s3 bucket of gtfs feeds | the string of an s3 bucket | An s3 bucket that is accessible with the credentials set up for the aws cli. Place zip files within the bucket. Each zip file will be downloaded to the local machine, and the jmeter test plan will be run for each gtfs zip file.
3 | s3 bucket for output reports | the string of an s3 bucket | OPTIONAL. After each test run, the script will tar up the output folder and attempt to upload it to the specified s3 bucket.
Example:
```sh
python run-upload-tests.py fetch gtfs-test-feeds datatools-jmeter-results
```
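The csv-building part of such a helper can be sketched as follows. This is a rough outline, not the actual run-upload-tests.py: the project-naming scheme, and the assumption that feeds have already been downloaded locally (e.g. via `aws s3 cp`), are illustrative.

```python
import csv
import os

def build_batch_csv(mode, feed_files, csv_path):
    """Write one batch row per gtfs zip downloaded from the s3 bucket,
    using the csv layout described above: project name, mode, location."""
    with open(csv_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["project name", "mode", "location"])
        for path in feed_files:
            # Hypothetical naming scheme: one project per feed file.
            name = os.path.splitext(os.path.basename(path))[0]
            writer.writerow([f"test project {name}", mode, path])
```

The resulting file can then be handed to `./run-tests.sh batch` as the fourth argument.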
A single test plan file is used for maintainability. By default, the test plan runs 1 thread in 1 loop and will upload a feed and then perform various checks on the uploaded feed version. As noted in the above section, it is possible to run different variations of the test plan. There are 4 types of test plans that can be initiated: `batch`, `fetch`, `query` or `upload`.
When the test plan is run in batch mode, a csv file must be provided that contains rows of test plans of either the `fetch` or `upload` type. Each row is then run with the specified number of threads and loops.
- For Each Row: Run either the `fetch` or `upload` test plan according to the configuration in the row.
This section is run under the `upload` test plan mode, or for a feed marked for uploading in the batch csv file.
- Create Project
- Create Feedsource
- Upload zip to create new Feed Version
- Loop until job to upload feed is complete (making http requests to job status)
- Save a record of the amount of time it took from the completion of the feed upload until receiving a status update that the feed version processing has completed
- Continue to API Integrity Script Steps
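The status-polling step above can be sketched in Python. The endpoint path and the `complete` field here are assumptions made for illustration; the real datatools-server job-status API may differ.

```python
import time

def wait_for_job(session, base_url, job_id, poll_interval=1.0, timeout=600):
    """Poll a job-status endpoint until the job reports completion.

    Returns the elapsed time in seconds, mirroring the timing record
    the test plan saves for each processed feed version.
    """
    start = time.time()
    while time.time() - start < timeout:
        # Hypothetical status endpoint; the real path may differ.
        status = session.get(f"{base_url}/api/manager/jobs/{job_id}").json()
        if status.get("complete"):
            return time.time() - start
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} did not complete within {timeout}s")
```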
This section is run under the `fetch` test plan mode, or for a feed marked for fetching in the batch csv file.
- Create Project
- Create Feedsource
- Create new Feed Version (which initiates a download of a feed from datatools-server)
- Loop until job to fetch and process the feed is complete (making http requests to job status)
- Save a record of the amount of time it took from the completion of the feed version creation request until receiving a status update that the feed version processing has completed
- Continue to API Integrity Script Steps
This section is run under the `query` test plan mode. This script assumes that each project has a feed source that has a valid feed version.
- Fetch all projects
- Pick a random project
- Fetch all feed sources from the selected project
- Pick a random feed source
- Fetch all feed versions from the selected feed source
- Pick a random feed version
- Continue to API Integrity Script Steps
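The random walk above might look roughly like this in Python; the endpoint paths and JSON shapes are assumptions for illustration, not the documented datatools-server API.

```python
import random

def pick_random_feed_version(session, base_url):
    """Walk project -> feed source -> feed version, choosing randomly
    at each level, as the query test plan does."""
    projects = session.get(f"{base_url}/api/manager/secure/project").json()
    project = random.choice(projects)
    sources = session.get(
        f"{base_url}/api/manager/secure/feedsource?projectId={project['id']}"
    ).json()
    source = random.choice(sources)
    versions = session.get(
        f"{base_url}/api/manager/secure/feedversion?feedSourceId={source['id']}"
    ).json()
    return random.choice(versions)
```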
This section is run in all test plan modes.
- Fetch stops and a row count of stops
- Make sure the number of stops matches the row count of stops
- Fetch all routes
- Pick a random route
- Fetch all trips on selected route
- Check that all trips have same route_id as route
- Fetch all patterns on selected route
- Check that all patterns have same route_id
- Fetch embedded stop_times from trips from a random pattern
- Check that all stop_times have proper trip_id
- Check that all stop_times in trips on pattern have same stop sequence as pattern
- Make a GraphQL request that contains a nested query of routes, patterns and stops
- Make sure that each route is present in the route within the list of patterns
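Two of the checks above, re-implemented as a plain Python sketch (the dict shapes are assumptions about the API's JSON responses):

```python
def verify_feed_integrity(stops, stop_row_count, route, trips):
    """Check that the stop count matches the reported row count, and
    that every trip fetched for a route carries that route's route_id.
    Returns a list of error messages; an empty list means all checks passed."""
    errors = []
    if len(stops) != stop_row_count:
        errors.append(f"expected {stop_row_count} stops, got {len(stops)}")
    for trip in trips:
        if trip.get("route_id") != route["route_id"]:
            errors.append(f"trip {trip.get('trip_id')} has wrong route_id")
    return errors
```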
If running this script in GUI mode, it is possible to see all results in real-time by viewing the various listeners at the end of the thread group.
When running the test plan from the command line in non-gui mode, reports will be saved to the output folder. The outputs will contain a csv file of all requests made and an html report summarizing the results. If the test plan mode was `batch`, `fetch` or `upload`, then another csv file will be written that contains a list of the elapsed time for processing the creation of each new gtfs feed version.
The csv files can be loaded into a jmeter GUI to view more details.