System Design Exercises

To run any exercise, go to its folder and run:

```sh
go mod init example.com/main
go mod tidy
go run .
```

Week-1

  1. Implement a simple connection pool using a Bounded Blocking Queue (see the sketch after this list)
  2. Implement Database Sharding and Routing (from API server)
  3. Set up a read replica of MySQL locally
  4. Implement fair multi-threaded program
  5. Implement server-sent events
  6. Implement server-sent events using Message Broker
  7. Consume server-sent events in React components on a web page.
  8. Set up RabbitMQ and Kafka locally. Write a producer and a consumer for each.
    1. Set up RabbitMQ
    2. Set up Kafka
  9. Implement real-time chat using Socket.IO: Slack-Realtime Text Chat Reference
  10. Mock EC2 creation & implement Short Polling and Long Polling
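
The sketch referenced in exercise 1: a connection pool backed by a bounded blocking queue, which in Go is naturally a buffered channel. `Conn` is a hypothetical stand-in for a real database connection:

```go
// A minimal sketch: a pool of pre-created connections behind a buffered
// channel, which acts as the bounded blocking queue.
package main

import (
	"fmt"
	"time"
)

type Conn struct{ id int }

type Pool struct{ conns chan *Conn }

// NewPool pre-creates size connections; the channel capacity is the bound.
func NewPool(size int) *Pool {
	p := &Pool{conns: make(chan *Conn, size)}
	for i := 0; i < size; i++ {
		p.conns <- &Conn{id: i}
	}
	return p
}

// Get blocks until a connection is free (the "blocking" part of the queue).
func (p *Pool) Get() *Conn { return <-p.conns }

// Put returns a connection, unblocking one waiting Get.
func (p *Pool) Put(c *Conn) { p.conns <- c }

func main() {
	pool := NewPool(2) // only 2 connections for 4 workers
	for i := 0; i < 4; i++ {
		go func(worker int) {
			c := pool.Get()
			defer pool.Put(c)
			fmt.Printf("worker %d got conn %d\n", worker, c.id)
			time.Sleep(100 * time.Millisecond) // simulate a query
		}(i)
	}
	time.Sleep(time.Second) // crude wait, just for the demo
}
```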

Week-2

  1. Implement Airline Check-in System
  2. Hit a deadlock in the database with concurrent transactions on top of MySQL.
  3. Implement a toy KV store on top of MySQL
  4. Implement simple sharding with a hash- or range-based routing strategy in the above KV store (see the routing sketch after this list)
  5. Implement flag-driven consistent reads.
  6. Implement Distributed Transactions using 2PC.
  7. Ingest data in Neo4j and try paginating it.
  8. Ingest data in MongoDB and write an aggregation pipeline.
  9. Implement message broadcast across servers using a star topology, leveraging Redis Pub/Sub.
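
The routing sketch referenced in exercise 4, assuming three shard addresses and an FNV hash; both are illustrative, not taken from the repo:

```go
// A minimal sketch: route each key to a shard by hashing, so the same key
// always lands on the same shard.
package main

import (
	"fmt"
	"hash/fnv"
)

var shards = []string{
	"mysql-shard-0:3306",
	"mysql-shard-1:3306",
	"mysql-shard-2:3306",
}

// shardFor hashes the key and maps it onto one of the shards.
func shardFor(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % uint32(len(shards)))
}

func main() {
	for _, k := range []string{"user:1", "user:2", "order:99"} {
		fmt.Printf("%s -> %s\n", k, shards[shardFor(k)])
	}
}
```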

Week-3

  1. Implement a load-balancer
  2. Implement a simple blogging application where you shard by user ID, and try to provide a unique ID to each blog. The idea is to understand the need for ID generation when the database is sharded.
  3. Build a simple atomically incrementing integer ID
  4. Implement the "Amazon's Way" of a central ID generation service
  5. Implement the structure of the MongoDB Object ID
  6. Benchmark the impact of a UUID as the primary key in a relational database
  7. Benchmark MySQL's UPSERT using ON DUPLICATE KEY UPDATE and REPLACE INTO
  8. Implement Flickr's Odd-Even based ID generation
  9. Implement Snowflake (see the sketch after this list) on
    1. API, and
    2. Database as stored procedure
  10. Benchmark Pagination approaches.
  11. Implement the Zomato Ordering Service with Distributed Transactions using 2PC
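
The sketch referenced in exercise 9: a Snowflake-style generator with the classic bit layout (41-bit timestamp, 10-bit machine ID, 12-bit sequence); the custom epoch value here is an arbitrary assumption:

```go
// A minimal sketch of Snowflake ID generation. Clock regression is not
// handled; a real generator would need to deal with it.
package main

import (
	"fmt"
	"sync"
	"time"
)

const customEpoch = int64(1640995200000) // 2022-01-01 UTC in ms (assumed)

type Snowflake struct {
	mu        sync.Mutex
	machineID int64 // 10 bits
	lastMs    int64
	seq       int64 // 12 bits
}

func (s *Snowflake) Next() int64 {
	s.mu.Lock()
	defer s.mu.Unlock()
	now := time.Now().UnixMilli() - customEpoch
	if now == s.lastMs {
		s.seq = (s.seq + 1) & 0xFFF // wrap within 12 bits
		if s.seq == 0 {             // sequence exhausted: spin to the next ms
			for now <= s.lastMs {
				now = time.Now().UnixMilli() - customEpoch
			}
		}
	} else {
		s.seq = 0
	}
	s.lastMs = now
	// 41-bit timestamp | 10-bit machine ID | 12-bit sequence
	return now<<22 | s.machineID<<12 | s.seq
}

func main() {
	gen := &Snowflake{machineID: 1}
	for i := 0; i < 3; i++ {
		fmt.Println(gen.Next())
	}
}
```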

Week-4

  1. Implement a Toy CDN
  2. Mimic CDN failover on the Toy CDN
  3. Implement pre-signed URL based upload to S3
  4. Configure CDN to serve Popular Searches JSON response
  5. Implement JWT-based authentication
  6. Build GitHub-like OG images and serve them via CDN
    1. Key learning: generating images on the backend server and putting them behind a CDN
  7. Measure the impact of denormalization
    1. Define a user collection in MongoDB with blogs as an attribute
    2. Store a blogs object in the user document denoting all blogs that the person wrote.
    3. Store the entire object instead of a reference.
    4. Now benchmark and find out how slow the response time gets as the number of elements in the blogs array increases
  8. Implement Lazy Loading of images on frontend
  9. Implement 5 approaches to count posts per hashtag
    1. Naive (count++) for every event
    2. Naive batching (batch on server and then write to database)
    3. Efficient batching, minimizing stop-the-world using deep copy
    4. Efficient batching, minimizing stop-the-world using two maps (see the sketch after this list)
    5. Kafka adapter pattern to re-ingest the post hashtags partitioned by hashtag
      1. Measure the number of writes on the database in each of the above approaches
  10. Populate on_msg_event while using WebSockets.
    1. Try to identify when the connection breaks and use that opportunity to write event to Kafka
  11. Configure Redis in cluster mode and figure out how data is distributed
  12. Implement a new unread-message indicator on the database
    1. Compute on the fly
    2. Create a messages table with 1 million rows
    3. Add one index for each column in the queried WHERE clause and measure the time taken
    4. Compute with the mentioned composite indexes, and measure the performance
    5. Re-arrange the columns and measure the performance impact
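
The sketch referenced in exercise 9.4: the two-maps approach, where the stop-the-world window shrinks to a single map swap under the lock and the flush happens outside it. `flushToDB` is a hypothetical stand-in for the real database write:

```go
// A minimal sketch of two-maps batching for hashtag counters.
package main

import (
	"fmt"
	"sync"
)

type Counter struct {
	mu     sync.Mutex
	active map[string]int
}

// Incr counts an event into the currently active map.
func (c *Counter) Incr(tag string) {
	c.mu.Lock()
	c.active[tag]++
	c.mu.Unlock()
}

// swap exchanges the active map for a fresh one under the lock; counting
// resumes immediately while the old map is flushed with no lock held.
func (c *Counter) swap() map[string]int {
	c.mu.Lock()
	old := c.active
	c.active = make(map[string]int)
	c.mu.Unlock()
	return old
}

// flushToDB stands in for the real write; here it just prints the SQL.
func flushToDB(batch map[string]int) {
	for tag, n := range batch {
		fmt.Printf("UPDATE counts SET n = n + %d WHERE tag = '%s'\n", n, tag)
	}
}

func main() {
	c := &Counter{active: make(map[string]int)}
	for i := 0; i < 100; i++ {
		c.Incr("#golang")
	}
	c.Incr("#systemdesign")
	flushToDB(c.swap()) // one DB write per tag instead of one per event
}
```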

Week-5

  1. Implement Consistent Hashing (see the sketch after this list)
  2. Implement consistent hashing as a load balancer algorithm
  3. Solve skewness problem in consistent hashing with Virtual Nodes
  4. Implement a simple in-memory single-node cache like Redis as discussed in the session
  5. Implement the word dictionary on the local machine
    1. using CSV format
    2. using Bitcask format
  6. Demonstrate the partial data write problem by writing a 100 MB file and killing the process midway
  7. Implement checksum-based integrity checks
    1. Identify if data in WAL or Bitcask is corrupt using Checksum
    2. Implement database recovery as discussed in the session
  8. Implement Bitcask
    1. Basic KV operations
    2. Merge and compaction
  9. Benchmark sequential IO vs random IO
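
The sketch referenced in exercise 1: a consistent-hash ring using CRC32 and binary search over sorted node positions. Exercise 3's virtual nodes would simply add several positions per physical node:

```go
// A minimal sketch of consistent hashing: a key belongs to the first node
// clockwise from its hash on the ring.
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

type Ring struct {
	hashes []uint32          // sorted node positions on the ring
	owner  map[uint32]string // position -> node name
}

func NewRing(nodes []string) *Ring {
	r := &Ring{owner: make(map[uint32]string)}
	for _, n := range nodes {
		h := crc32.ChecksumIEEE([]byte(n))
		r.hashes = append(r.hashes, h)
		r.owner[h] = n
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// Node finds the first ring position >= the key's hash, wrapping to the
// start of the ring if none exists.
func (r *Ring) Node(key string) string {
	h := crc32.ChecksumIEEE([]byte(key))
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0
	}
	return r.owner[r.hashes[i]]
}

func main() {
	ring := NewRing([]string{"cache-a", "cache-b", "cache-c"})
	for _, k := range []string{"user:1", "user:2", "user:3"} {
		fmt.Printf("%s -> %s\n", k, ring.Node(k))
	}
}
```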

Week-6

  1. Implement LSM Trees
  2. Implement B+ Trees
  3. LSM Tree Based Key-Value Store. Reference
  4. Implement Bloom Filters and measure FPR vs size vs number of hash functions (see the sketch after this list)
  5. Implement Deletable Bloom Filters
  6. Set up HLS Streaming following Akamai’s Documentation
  7. Video HLS Streaming Server in Go
  8. Implement a TCP server that accepts 1GB file
  9. Transfer the file via one POST request
  10. Stream the file from client to server from scratch
  11. Implement GFS

Week-7

  1. Implement recent search as discussed during the session
  2. Capture search logs and make them queryable
    1. From an HTTP request, extract all possible meta info
    2. Ingest them in ES
    3. Plot different graphs and segmentations, and gain insights using Kibana
  3. Implement Full Text Search on your phone contacts
    1. implement fuzzy searching
    2. implement spell correction
    3. implement synonymic query expansion
    4. add support for phonetic search
  4. Cache API responses on Akamai for very short duration
    1. Option 1: Set TTL on Akamai Console
    2. Option 2: Drive TTL using response headers from the origin
  5. Stream some dummy logs from local machine to S3
    1. Query them using Athena
  6. Implement Task Scheduler as discussed in the session
    1. Fixed Time Execution and Cron Schedule
    2. Implement Job Puller
    3. Make the Job Puller fault tolerant
    4. For your machine, compute the unit tech economics for the Job Puller
    5. Define a format that allows users to specify any task
    6. Build the capability to run it; Docker images are a simple solution but overkill for simple tasks
    7. Induce failures in your scheduler and set up alerts if you breach the SLA
  7. Implement the task rebalance feature in the Task Scheduler
    1. Do it for Fault Tolerance
    2. Do it if you want to auto scale
  8. Implement Brokers in all 3 flavours
    1. SQS-like broker using MySQL as the backend (see the pull-query sketch after this list)
    2. Kafka-like broker using MySQL as the backend
    3. SQS-like broker using Bitcask as the backend
  9. Create an account on Razorpay and build a simple payment system using their API
    1. use their “Test Mode”
    2. use Webhooks to receive Payment Notifications
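
The sketch referenced in exercise 8.1: the core consume step of an SQS-like broker on MySQL, claiming one row atomically with SELECT ... FOR UPDATE SKIP LOCKED (MySQL 8+) so concurrent consumers never grab the same message; the DSN and messages schema are assumptions:

```go
// A minimal sketch of the broker's consume path on MySQL.
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql" // MySQL driver, registered by import
)

func pullOne(db *sql.DB) (int64, string, error) {
	tx, err := db.Begin()
	if err != nil {
		return 0, "", err
	}
	defer tx.Rollback() // no-op once Commit succeeds

	var id int64
	var body string
	// SKIP LOCKED makes concurrent pullers skip rows another transaction
	// has already claimed, instead of blocking on them.
	err = tx.QueryRow(
		`SELECT id, body FROM messages
		 WHERE consumed_at IS NULL
		 ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED`).Scan(&id, &body)
	if err != nil {
		return 0, "", err // sql.ErrNoRows means the queue is empty
	}
	if _, err := tx.Exec(
		`UPDATE messages SET consumed_at = NOW() WHERE id = ?`, id); err != nil {
		return 0, "", err
	}
	return id, body, tx.Commit()
}

func main() {
	db, err := sql.Open("mysql", "root:root@tcp(127.0.0.1:3306)/broker")
	if err != nil {
		panic(err)
	}
	id, body, err := pullOne(db)
	if err != nil {
		panic(err)
	}
	fmt.Printf("consumed message %d: %s\n", id, body)
}
```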

Exercises that can be extended:

  1. Zomato Delivery System
  2. Airline Check-in System
  3. Load Balancer
  4. TODO: React loading using Server Sent Events