This repository has been archived by the owner on Nov 18, 2019. It is now read-only.

Commit 1873e15: Persistence decided #8
DanielFroehlich committed Jun 19, 2019 (1 parent: 0133d9d)
Showing 2 changed files with 86 additions and 16 deletions.
25 changes: 19 additions & 6 deletions components/spotify-provider-boundary/doc/architecture_thoughts.md

## Problem Description
Situation: a track is playing.

We have multiple pods running the Spotify provider for a single event, for HA reasons.
## Option A: All pods poll spotify
### Description
Every provider pod polls the Spotify API every second to check whether the track is still playing (for every active event).
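A minimal sketch of this polling loop. The function name, the `check_playback` callback, and the 1-second default interval are illustrative assumptions standing in for the real Spotify Web API call, not the repo's actual implementation:

```python
import time

def poll_active_events(active_events, check_playback, interval=1.0, ticks=1):
    """Each provider pod runs this loop: every `interval` seconds, ask
    whether the current track is still playing, for every active event.
    `check_playback(event_id)` stands in for the real Spotify API call."""
    results = {}
    for _ in range(ticks):
        for event_id in active_events:
            results[event_id] = check_playback(event_id)
        time.sleep(interval)
    return results
```

Note that with N pods, every active event is polled N times per interval, which is exactly the duplication this option trades for simplicity.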
TOPIC -> PART 4 -> CG 1



# Problem Statement
Problem description here

## Option #1: Title
### Description
text

### Pros
1.
1.
### Cons
1.
1.

## Option #2: Title
### Description
text

### Pros
1.
1.
### Cons
1.
1.

## Decision
Who decided for which option for what reasons on which date?
77 changes: 67 additions & 10 deletions docs/20architecture/architecture.md
Investigation did not really happen with [ticket#61](https://github.com/sa-mw-dach/OpenDJ/issues/61)
### Decision
Daniel decided on 2019-06-19 to go with Option A - Ionic/Angular. The new frontend impl by Ortwin proved to be small and fast, and the available skill in the team is a killer argument. We don't have the resources to go through a learning curve.

# How to handle persistent state
In the long term, we will need to decide how and where we persist state (playlists, auth tokens).
Discussion took place with [ticket#8](https://github.com/sa-mw-dach/OpenDJ/issues/8)

## Option #1: JSON File on RWX PersistentVolume
### Pros
1. easy to implement
1. changes can be easily propagated between pods by observing the file and re-loading it.
1. easy to debug / fix / change schema (simply look into the file)
### Cons
1. requires RWX PVC which is not always available (esp. in pub cloud)
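The observe-and-reload idea from the pros list can be sketched as below. The class name and the mtime-polling approach are illustrative assumptions (a production version would more likely use inotify or a file-watcher library):

```python
import json
import os

class FileStateStore:
    """State kept as a JSON file on a shared (RWX) volume.
    Every pod re-reads the file when its mtime changes."""

    def __init__(self, path):
        self.path = path
        self._mtime = None  # mtime of the last version we loaded
        self.state = {}
        self.reload_if_changed()

    def reload_if_changed(self):
        """Return True if the file changed on disk and was re-loaded."""
        mtime = os.stat(self.path).st_mtime_ns
        if mtime == self._mtime:
            return False
        with open(self.path) as f:
            self.state = json.load(f)
        self._mtime = mtime
        return True
```

This also illustrates the debugging upside: the whole state is one human-readable file you can inspect or edit directly.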

## Option #2: In Memory DataBase
Use some in memory databases like Redis, Red Hat JBoss DataGrid etc.

### Pros
1. no disk needed
1. lightning fast
1. changes can be subscribed to
### Cons
1. complex to deploy / monitor / operate / debug
1. skills required

## Option 3: Use Database on Platform
Use a database like [mongo](https://github.com/sa-mw-dach/OpenDJ/issues/56) / [psql](https://github.com/sa-mw-dach/OpenDJ/issues/55) with corresponding operator.
*Open question*: would we use one central deployment with a schema for each service, or would each stateful service get its own instance?
### Pros
1. can work with RWO Storage
1. more familiar stuff
### Cons
1. a singleton database (PSQL) could be a single point of failure. Even with an operator, failover takes several seconds up to minutes.

## Option #4: Use external Database as a Service
For example AWS RDS.
### Pros
1. Very convenient
### Cons
1. No experience
1. Cost
1. Offline development capabilities?

## Option #5: Use Event Stream Database
Deploy Kafka/AMQ Streams. Each service can emit events on its own topic to store data, either as relative delta changes, or as absolute state in a "high water mark" message.
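The "absolute state as high water mark" variant amounts to a last-write-wins replay of the topic on startup. The function and the `None`-as-tombstone convention below are illustrative assumptions, not the repo's actual implementation:

```python
def replay(messages):
    """Rebuild current state from an ordered event stream.
    Each message carries the full ("high water mark") state for its key,
    so the latest message per key wins -- no delta arithmetic needed."""
    state = {}
    for key, value in messages:
        if value is None:        # tombstone: the key was deleted
            state.pop(key, None)
        else:
            state[key] = value
    return state
```

Because only the newest message per key matters, such a topic can use Kafka log compaction, keeping replay time bounded by the number of keys rather than the number of events.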
### Pros
1. Easy to deploy
1. Scales well
1. Worked well in [experiments](https://github.com/sa-mw-dach/OpenDJ/issues/53)
1. Real Cloud Native Design
1. New Red Hat technology (AMQ Streams)
1. Works with RWO storage
### Cons
1. new technology for most hackers (learning curve)

## Decision
Daniel decided on 2019-06-19 for a combination of Option #5 (Kafka) and Option #3 (database):
1. We use Kafka events as the persistence layer as long as possible, because we need async events anyway and it works fine for simple key/value persistence, as proven by the [experiment](https://github.com/sa-mw-dach/OpenDJ/issues/53).
1. If Kafka is not suited, we use MongoDB. The deployment is shared by all services, but each service has its own schema. This simplifies deployment and operation. MongoDB instead of PSQL, because it scales horizontally and does not exhibit a single point of failure.

<!--- Template for new Architectural Decision to copy:
# Problem Statement
Problem description here

## Option #1: Title
### Description
text

### Pros
1.
1.
### Cons
1.
1.

## Option #2: Title
### Description
text

### Pros
1.
1.
### Cons
1.
1.
-->
