Background
We use Olive Branch on an API that serves 1+ million requests per hour.
We noticed Olive Branch middleware was consuming 200-300 milliseconds for some of our large response bodies.
Optimizations for Oj
After benchmarking the implementation, we noticed two major opportunities for optimization:
Calling Oj.load and Oj.dump directly instead of going through MultiJson -- this cut a synthetic benchmark (serializing a problematic JSON response 100 times) from 13.2 seconds to 4.77 seconds.
Replacing the recursive, Ruby-based inflector with one built on Oj's SC (Simple Callback) parser -- this brought the same benchmark down further, from 4.77 seconds to 2.40 seconds.
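For context, the recursive, pure-Ruby transform we moved away from looks roughly like the sketch below (stdlib only; `deep_transform_keys` and `underscore` here are simplified stand-ins for illustration, not Olive Branch's actual implementation):

```ruby
require "json"

# Walk every hash and array in a parsed document and rebuild each hash
# with transformed keys. This allocates a new container at every level,
# which is the cost the SC-parser approach avoids paying twice.
def deep_transform_keys(value, &block)
  case value
  when Hash  then value.each_with_object({}) { |(k, v), h| h[block.call(k)] = deep_transform_keys(v, &block) }
  when Array then value.map { |v| deep_transform_keys(v, &block) }
  else value
  end
end

# Minimal underscore: insert "_" at lower/upper boundaries, then downcase.
def underscore(key)
  key.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
end

doc = JSON.parse('{"userName":"ada","postTags":[{"tagName":"ruby"}]}')
puts JSON.generate(deep_transform_keys(doc) { |k| underscore(k) })
# => {"user_name":"ada","post_tags":[{"tag_name":"ruby"}]}
```

The key observation is that this approach parses the whole document first and then makes a second full pass to rewrite keys; hooking into the parser's key callback does both in a single pass.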
Our implementation:
module OurCompany
  class FastOliveBranchMiddleware
    class OliveBranchHandler < Oj::ScHandler
      def initialize(inflection)
        @inflection = inflection || :camel
      end

      # Called by Oj for every hash key; the return value is used as the key,
      # so keys are inflected in a single pass while the document is parsed.
      def hash_key(key)
        return FastCamel.camelize(key) if @inflection == :camel
        return key.underscore if @inflection == :snake
        return key.dasherize if @inflection == :dash
        return key.underscore.camelize(:upper) if @inflection == :pascal

        key
      end

      def hash_start
        {}
      end

      def hash_set(h, k, v)
        h[k] = v
      end

      def array_start
        []
      end

      def array_append(a, v)
        a << v
      end
    end

    def initialize(app)
      @app = app
    end

    def call(env)
      underscore_params(env)
      status, headers, response = @app.call(env)
      [status, headers, format_responses(env, response)]
    end

    private

    def underscore_params(env)
      req = ActionDispatch::Request.new(env)
      # Touch both parameter sets so ActionDispatch parses and memoizes them
      # into env before we transform the keys in place.
      req.request_parameters
      req.query_parameters
      env["action_dispatch.request.request_parameters"].deep_transform_keys!(&:underscore)
      env["action_dispatch.request.query_parameters"].deep_transform_keys!(&:underscore)
    end

    def format_responses(env, response)
      new_responses = []
      handler = OliveBranchHandler.new(env["HTTP_X_KEY_INFLECTION"]&.to_sym)
      response.each do |body|
        begin
          new_response = Oj.sc_parse(handler, body)
        rescue Oj::ParseError, EncodingError
          # Oj raises Oj::ParseError (not JSON::ParserError) on invalid JSON,
          # so pass non-JSON bodies through untouched.
          new_responses << body
          next
        end
        new_responses << Oj.dump(new_response)
      end
      new_responses
    end
  end
end
This has produced a dramatic reduction in time spent applying the correct inflection to our JSON documents, cutting roughly 200ms or more off responses with large payloads.
Goals
We recognize this is a general-purpose library and our optimizations are specific to Oj, so we don't expect this project to incorporate them.
That said, I wanted to post these in case someone else runs into performance issues with this library in the future and wants some ideas for remediation ✌️.