- Aggregate order book data from multiple cryptocurrency exchanges
- Provide aggregated data via REST API
- `openssl` to generate TLS certificates, or an existing certificate and private key
- Docker
See `solution-container-diagram.svg` (diagrams.net format), a C4-style container-level diagram.
Generate TLS certificates:

```shell
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout api/cert/server.key -out api/cert/server.crt
```
Start everything:

```shell
docker-compose up -d --build
```
API interactive documentation:
https://localhost:443/swagger-ui/index.html
OpenAPI spec:
https://localhost:443/v3/api-docs
Currently, only integration with https://api.blockchain.com/v3/ is implemented.
To add a new exchange, create a new service based on `com.aggregator.fetch`.
You'll only need a new implementation of `com.aggregator.domain.ExchangeFetchService`, specific to the exchange you're integrating with.
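To make the extension point concrete, here is a minimal sketch of what an exchange-specific implementation might look like. The interface shape, the record types, and the `ExampleExchangeFetchService` name are illustrative assumptions; the actual `com.aggregator.domain.ExchangeFetchService` contract in the repository may differ.

```java
import java.util.List;

public class ExchangeFetchSketch {

    // Assumed shape of com.aggregator.domain.ExchangeFetchService;
    // the real interface in the repository may look different.
    interface ExchangeFetchService {
        OrderBook fetchOrderBook(String symbol);
    }

    record Entry(double price, double quantity) {}

    record OrderBook(String symbol, List<Entry> bids, List<Entry> asks) {}

    // Hypothetical integration with another exchange. A real implementation
    // would call that exchange's HTTP API and map its response format into
    // the aggregator's domain model.
    static class ExampleExchangeFetchService implements ExchangeFetchService {
        @Override
        public OrderBook fetchOrderBook(String symbol) {
            // Stubbed response instead of a real HTTP call.
            return new OrderBook(symbol,
                    List.of(new Entry(50_000.0, 0.5)),
                    List.of(new Entry(50_010.0, 0.4)));
        }
    }

    public static void main(String[] args) {
        OrderBook book = new ExampleExchangeFetchService().fetchOrderBook("BTC-USD");
        System.out.println(book.bids().size() + " bid(s), " + book.asks().size() + " ask(s)");
    }
}
```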
The solution is covered with unit, WebMvc, and integration tests.
The solution is horizontally scalable.

- Multiple `api` instances should be deployed in different regions, matching user geography. A CDN won't help here, as the TTL of the provided data is very short.
- Multiple `fetch` instances are needed only to achieve the required availability characteristics.
- Horizontal scaling of the data storage (currently Redis) is possible, but not necessarily required for performance: the Redis load from a single `api` instance is constant regardless of how many requests the API receives.
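That constant-load property follows from `api` instances polling Redis on a fixed schedule rather than per request. A rough sketch of the pattern (class and method names here are illustrative, not the actual implementation):

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class LocalCacheSketch {

    // Stand-in for a Redis client; a real instance would read the latest
    // order book snapshot written by the fetch service.
    interface SnapshotStore {
        Map<String, String> readSnapshot();
    }

    private final AtomicReference<Map<String, String>> cache =
            new AtomicReference<>(Map.of());
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // One refresh cycle: replace the in-memory snapshot with Redis contents.
    void refresh(SnapshotStore store) {
        cache.set(store.readSnapshot());
    }

    // Refresh at a fixed rate: Redis sees one read per interval per api
    // instance, independent of the incoming request volume.
    void start(SnapshotStore store, long periodMillis) {
        scheduler.scheduleAtFixedRate(() -> refresh(store),
                0, periodMillis, TimeUnit.MILLISECONDS);
    }

    // Request handlers read from memory only; Redis is never on the hot path.
    Map<String, String> currentSnapshot() {
        return cache.get();
    }
}
```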
Latency is limited by the update frequency/latency of the cryptocurrency exchange API we integrate with.
A single `api` instance is expected to handle ~6,000-7,000 requests per second with an average response time of 25 ms. No additional work was done on this characteristic, as the target is unclear.
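As a rough cross-check via Little's law (in-flight requests ≈ throughput × latency), taking the midpoint of the stated range: 6,500 req/s × 0.025 s ≈ 160 concurrent requests per instance, a plausible level for a single node serving from an in-memory cache.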
Environment: 3 services (`fetch`, `api`, `redis`) inside docker-compose with 6 CPU/8 GB RAM assigned, and data from 1 exchange available.
Peak resource usage by the `api` service is 4 CPU and ~660 MB RAM.
Having no restrictions on the deployment strategy (canary or blue-green) for `api` nodes makes zero-downtime deployment possible.
Running multiple `fetch` instances for the same exchange achieves the same for the fetch functionality.
Deployment in multiple availability zones is advised. The choice of regions depends on where the API servers of the particular exchanges are located.
Due to the decoupled design of the `api` and `fetch` modules, all parts of the solution can be deployed separately.
Redis is available as a fully managed service in every mainstream public cloud, as is a load balancer for the `api` instances.
Operational risks are low: Redis keeps only the latest order book data, which becomes obsolete within seconds, so losing or clearing it is not an issue. `api` instances rely on their own local cache, which lowers the availability requirements on Redis during operations such as restarts, deployment changes, etc.
Monitoring and logging are not implemented.