- Install Node v8.10.0 (at the time this code was built, AWS Lambda was using this version)
- Recommendation: use nvm.
- The .nvmrc file specifies the Node version used in this project.
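If you use nvm, it can pick up the version pinned in .nvmrc directly (this assumes nvm is already installed):

```bash
# Install and activate the Node version declared in .nvmrc
nvm install
nvm use
```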
- Install the latest version of Serverless globally
npm install -g serverless
- We are currently using the v1.x release
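If you want to stay on the v1.x line explicitly, you can pin the major version when installing (the exact version to use is up to you):

```bash
# Install the latest 1.x release of the Serverless framework and verify it
npm install -g serverless@1
serverless --version
```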
- Install the Java JDK (this is needed to run DynamoDB locally)
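A quick way to check that Java is available on your PATH before starting DynamoDB locally:

```bash
# DynamoDB Local will not start without a working Java runtime
java -version
```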
Run `npm i` to install all Node.js dependencies plus DynamoDB.
Copy the file .env-default to .env-dev.
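For example, from the project root:

```bash
# Create your local development environment file from the provided template
cp .env-default .env-dev
```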
Run `npm start` to get a simulated API Gateway + AWS Lambda + DynamoDB environment on your local machine.
If you need to enable extended message logging in your console for debugging purposes, just run:
npm run start:debug
Default path for API: https://localhost:3000/*
Shell path for DynamoDB: https://localhost:8000/shell
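To quickly verify that the local DynamoDB shell is reachable (the -k flag skips certificate validation for the local endpoint):

```bash
# Should return the HTML of the DynamoDB Local shell page
curl -k https://localhost:8000/shell
```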
In order to make the service work, you'll need to manually create a KMS key on AWS. The key alias or ID, as well as the KMS region, must be provided in the .env file or in the AWS Lambda environment.
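A minimal sketch of creating such a key and alias with the AWS CLI; the description, alias name and region below are examples, not values required by the project:

```bash
# Create a customer-managed KMS key and take note of the KeyId in the output
aws kms create-key --description "ktokenizer key" --region eu-west-1

# Attach an alias to the key so it can be referenced by name
aws kms create-alias \
  --alias-name alias/ktokenizer \
  --target-key-id <KeyId-from-previous-command> \
  --region eu-west-1
```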
- POST https://localhost:3000/v1/tokenservice/tokenize
- POST https://localhost:3000/v1/tokenservice/detokenize
- POST https://localhost:3000/v1/tokenservice/validate
- POST https://localhost:3000/v1/tokenservice/delete
- GET https://localhost:3000/v1/backoffice/operations
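As a rough illustration of calling the local API with curl (the header name and request body here are placeholders, not the service's documented contract):

```bash
# Hypothetical tokenize request against the local environment;
# adjust the API key header and payload to match the actual contract
curl -k -X POST https://localhost:3000/v1/tokenservice/tokenize \
  -H "Content-Type: application/json" \
  -H "x-api-key: 222222" \
  -d '{"data": "value-to-tokenize"}'
```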
You can create new stages from existing ones or from scratch.
Follow the steps by running: npm run stages
To deploy custom stages just run:
npm run deploy
and follow the instructions generated by the wizard.
Additionally, you can pass AWS_PROFILE to the npm command to select an AWS profile you already have on your machine, as shown below.
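For example (the profile name is just an illustration):

```bash
# Deploy using a named AWS profile from your local credentials
AWS_PROFILE=my-profile npm run deploy
```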
To remove a deployment from the AWS provider just run:
npm run deploy:remove
Please take into account that the stg, staging, prod and production stages are blocked from removal. If you want to remove them, you will need to do it manually from AWS.
Run `AWS_PROFILE=<Your_AWS_Profile> sls logs -f <functionName> --stage <stage>` to fetch the logs of a deployed function.
Right now logging is done via console.log so that the logs are stored in CloudWatch.
For Development: by default, `npm start` forces the DEBUG log level.
For Production: the recommended log level is WARN.
The following log levels can be applied via the KTOKENIZER_LOG_LEVEL environment variable (see the example after the list):
- OFF (Default log level if none is specified)
- FATAL (Super fatal errors; this should never happen)
- ERROR (Errors in the system)
- WARN (Warnings that do not crash the system)
- INFO (Default information logging; logs process starts, general information, etc.)
- DEBUG (With CARE! This should not show sensitive data, but when used for debugging it shows function executions and how the different processes work)
- TRACE (With CARE! Do not activate in production; sensitive data will be shown and everything is logged)
- ALL (With CARE! Do not activate in production; shows everything)
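For example, to export a specific level in an environment where the variable is respected (note that `npm start` forces DEBUG locally, so this is mainly relevant for deployed or custom runs):

```bash
# Set the desired log level before launching the service
export KTOKENIZER_LOG_LEVEL=WARN
```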
To run unit tests: npm t
We created some end-to-end test scenarios that run the whole flow to ensure everything is working properly.
To run e2e tests to a deployed scenario just run: KTOKENIZER_APIKEY=[A_valid_API_KEY] KTOKENIZER_ENDPOINT=https://secure-test.ktokenizer.com npm run tests:e2e
If no KTOKENIZER_APIKEY is passed, it defaults to: 222222
If no KTOKENIZER_ENDPOINT is passed, it defaults to: https://localhost:3000
- Autogenerate IAM ROLE templates
AWS_PROFILE=<Profile> sls puresec gen-roles --function <functionName>
- We use a forked version of Dynogels because the original Dynogels package does not declare Joi, Lodash and AWS-SDK as peer dependencies, which increases the final artifact size by 12 MB.
- TODO tasks are maintained in Trello.
- There is also a TODO.md with some pending tasks.
Enjoy! @sortegam