Breaking Down Microservices Silos: Building a Real-Time API with Dozer across multiple Postgres databases
In this example, Dozer fetches data from multiple Postgres tables, combines them in real time with SQL queries, and produces fast read APIs for a flight booking application.
The same pattern applies when data is fetched across microservices, or even from different types of data stores. Check out the Dozer documentation for all supported data sources.
Please check out our blog for a full explanation.
Refer to the Installation section for installing on different operating systems.
NOTE: Git LFS is needed when cloning this sample.
# Bring up the postgres server using `docker-compose`
docker-compose up
# Run it with a single command
dozer
# Help
dozer -h
To start, let's consider an example of a flight tickets booking website. The entire service is split across two main microservices:
- A booking microservice: handling all the bookings, tickets, and boarding passes
- A flight master microservice: maintaining all the flight master data, including routes, aircraft, etc.
Each service maintains its own database. BOOKINGS, TICKETS, TICKET_FLIGHTS, and BOARDING_PASSES are part of the booking microservice database, and AIRPORTS, FLIGHTS, AIRCRAFTS, and SEATS are part of the flight master microservice database. Below is a comprehensive ER diagram of all the data.
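Conceptually, the Dozer configuration declares one connection per microservice database and registers the tables it needs as sources. Here is a rough sketch of that shape — connection names, ports, credentials, and the exact field layout are illustrative assumptions, not this sample's real values; refer to the configuration linked at the end of this page:

```yaml
# Illustrative sketch only -- names, ports, and credentials are placeholders.
connections:
  - name: booking_db            # database owned by the booking microservice
    config: !Postgres
      user: postgres
      password: postgres
      host: localhost
      port: 5433
      database: bookings
  - name: flight_master_db      # database owned by the flight master microservice
    config: !Postgres
      user: postgres
      password: postgres
      host: localhost
      port: 5434
      database: flights

sources:
  - name: bookings
    table_name: bookings
    connection: booking_db
  - name: ticket_flights
    table_name: ticket_flights
    connection: booking_db
  - name: flights
    table_name: flights
    connection: flight_master_db
```

Each source then feeds the SQL transforms regardless of which physical database it lives in, which is what lets a single query span both microservices.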
Path | Source | Notes |
---|---|---|
GET /bookings | Derived | Booking listing API. Filters are automatically generated on single columns, e.g. passenger_id |
GET /bookings/details | Derived | Detailed information about a booking including flight information across several stops |
GET /routes | Derived | All routes per day of the week based on all ticket bookings made |
Every endpoint generates a Count and a Query method. Both of these support filter and sort operations. REST APIs are available on port 8080 and gRPC on port 50051 by default.
grpcurl -plaintext localhost:50051 dozer.generated.bookings_details.BookingsDetails/count
{
"count": "185270"
}
grpcurl -plaintext localhost:50051 dozer.generated.routes.Routes/count
{
"count": "3798"
}
grpcurl -plaintext localhost:50051 dozer.generated.routes.Routes/query
{
"records": [
{
"id": "3093",
"record": {
"flightNo": "PG0001",
"departureAirport": "UIK",
"arrivalAirport": "SGC",
"aircraftCode": "CR2",
"duration": "8400000",
"daysOfWeek": "5",
"DozerRecordVersion": 1
}
},
...
{
"id": "406",
"record": {
"flightNo": "PG0013",
"departureAirport": "AER",
"arrivalAirport": "SVO",
"aircraftCode": "773",
"duration": "6300000",
"daysOfWeek": "5",
"DozerRecordVersion": 1
}
}
]
}
grpcurl -plaintext localhost:50051 dozer.generated.bookings_details.BookingsDetails/query
{
"records": [
{
"id": "3682",
"record": {
"passengerId": "3986 620108",
"passengerName": "IGOR KARPOV",
"bookRef": "0002E0",
"bookDate": "2017-07-11T13:09:00Z",
"totalAmount": {
"lo": 8960000
},
"ticketNo": "0005434407173",
"flightId": "26920",
"fareConditions": "Economy",
"amount": {
"lo": 1640000
},
"flightNo": "PG0678",
"scheduledArrival": "2017-08-01T13:45:00Z",
"scheduledDeparture": "2017-08-01T11:30:00Z",
"departureAirport": "MCX",
"arrivalAirport": "SVO",
"actualArrival": "2017-08-01T13:51:00Z",
"actualDeparture": "2017-08-01T11:33:00Z",
"DozerRecordVersion": 1
}
}
...
]
}
# Filter by passenger_id
grpcurl -d '{"query":"{\"$filter\": {\"passenger_id\": \"3986 620108\"}}"}' \
-plaintext localhost:50051 \
dozer.generated.bookings_details.BookingsDetails/query
{
"records": [
{
"id": "3682",
"record": {
"passengerId": "3986 620108",
"passengerName": "IGOR KARPOV",
"bookRef": "0002E0",
"bookDate": "2017-07-11T13:09:00Z",
"totalAmount": {
"lo": 8960000
},
"ticketNo": "0005434407173",
"flightId": "26920",
"fareConditions": "Economy",
"amount": {
"lo": 1640000
},
"flightNo": "PG0678",
"scheduledArrival": "2017-08-01T13:45:00Z",
"scheduledDeparture": "2017-08-01T11:30:00Z",
"departureAirport": "MCX",
"arrivalAirport": "SVO",
"actualArrival": "2017-08-01T13:51:00Z",
"actualDeparture": "2017-08-01T11:33:00Z",
"DozerRecordVersion": 1
}
},
...
]
}
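Hand-escaping the nested query string passed to `-d` gets error-prone as filters grow. One way to build it safely is to let `jq` do the wrapping (this assumes `jq` is installed; operators such as `$order_by` and `$limit` follow the same pattern as `$filter`):

```shell
# Build the gRPC request body without manual quote-escaping.
# The inner query is ordinary JSON; jq wraps it as the "query" string field.
FILTER='{"$filter": {"passenger_id": "3986 620108"}, "$limit": 10}'
REQUEST=$(jq -cn --arg q "$FILTER" '{query: $q}')
echo "$REQUEST"

# Then pass it to grpcurl:
#   grpcurl -d "$REQUEST" -plaintext localhost:50051 \
#     dozer.generated.bookings_details.BookingsDetails/query
```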
Dozer transforms all the queries in dozer-config.yaml into a DAG (Directed Acyclic Graph). The DAG defines the streaming execution of the query, where each node is a source, a processor, or a sink. Below, for instance, is the generated DAG for the BOOKINGS DETAILS query.
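Each SQL statement in the configuration becomes one such pipeline: every table read maps to a source node, every JOIN or aggregation to a processor node, and the INTO target to a sink that backs an endpoint. As a rough sketch of how a derived endpoint like BOOKINGS DETAILS could be expressed (the column list is abbreviated and the join keys are assumed from the ER diagram; see the linked configuration for the sample's real query):

```yaml
# Sketch only -- the SQL below is illustrative, not this sample's exact query.
sql: |
  SELECT b.book_ref, b.book_date, t.passenger_id, t.passenger_name,
         f.flight_no, f.departure_airport, f.arrival_airport
  INTO bookings_details               -- becomes the sink behind the endpoint
  FROM bookings b
  JOIN tickets t ON t.book_ref = b.book_ref
  JOIN ticket_flights tf ON tf.ticket_no = t.ticket_no
  JOIN flights f ON f.flight_id = tf.flight_id;

endpoints:
  - name: bookings_details
    path: /bookings/details
    table_name: bookings_details
```

Because the pipeline is streaming, inserts and updates in either source database flow through the JOIN processors and update the materialized endpoint incrementally.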
Refer to the configuration for this example here.
You can download a bigger data set by following the instructions on the above page.