Unexpected behavior with Minimal Plugin Codebase Pattern #5853

Closed
jeremyjpj0916 opened this issue May 5, 2020 · 6 comments · Fixed by #5873

jeremyjpj0916 (Contributor) commented May 5, 2020

Summary

Following the docs on minimal plugin logic, I rewrote a nice little tx debugger tool that logs request and response headers, query parameters, and bodies.

Docs: https://docs.konghq.com/1.4.x/plugin-development/custom-logic/#handlerlua-specifications

Example modern handler.lua as well:
https://github.com/Kong/kong/blob/1.4.3/kong/plugins/jwt/handler.lua

After converting it (and updating to the modern schema), a bunch of PDK behavior stopped working. Is this expected behavior, or do I need to add the base plugin boilerplate logic back in? The plugin has just a handler.lua and a schema.lua file.

Noted behaviors:

  1. kong.log() calls in the access phase print nothing. See the kong.log("Does it exist ever or log? ...") line in the handler.lua file below; it never prints.

  2. kong.ctx.plugin.* does not persist across phases; all of these references are nil by the time the log phase runs. (A minimal repro sketch follows after this list.)

The old code remains fully functional.
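
To make it concrete, here is a minimal repro sketch (a hypothetical stripped-down handler using the same minimal-codebase pattern, not the full debugger plugin below):

local Repro = {}

Repro.PRIORITY = 12
Repro.VERSION = "0.1.0"

function Repro:access(conf)
  kong.ctx.plugin.marker = "set-in-access"
  kong.log("access: marker = ", kong.ctx.plugin.marker) -- never shows up in the error log
end

function Repro:log(conf)
  -- expected "set-in-access"; observed nil on 1.4.3 with this pattern
  kong.log("log: marker = ", tostring(kong.ctx.plugin.marker))
end

return Repro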

New Schema:

local typedefs = require "kong.db.schema.typedefs"

return {
  name = "kong-tx-debugger",
  fields = {
    { consumer = typedefs.no_consumer },
    { protocols = typedefs.protocols_http },
    { config = {
        type = "record",
        fields = {
          { log_request_headers = { type = "boolean", required = false, default = true }, },
          { log_request_query_params = { type = "boolean", required = false, default = false }, },
          { log_request_body = { type = "boolean", required = false, default = false }, },
          { log_response_headers = { type = "boolean", required = false, default = true }, },
          { log_response_body = { type = "boolean", required = false, default = false }, },
          { max_response_body_size = { type = "number", default = 65536 }, },
          { log_on_response_status = { type = "string", required = false}, },
          { ngx_log_buffer_size = { type = "number", default = 4012 }, },
    }, }, },
  },
}
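
For reference, this is the defaulted conf table the schema above yields when nothing is overridden (an illustration, not part of the plugin):

local conf = {
  log_request_headers = true,
  log_request_query_params = false,
  log_request_body = false,
  log_response_headers = true,
  log_response_body = false,
  max_response_body_size = 65536,
  log_on_response_status = nil, -- optional; compared against the response status in :log()
  ngx_log_buffer_size = 4012,
}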

New Code:

local kong = kong
local KongTxDebugger = {}

KongTxDebugger.PRIORITY = 12
KongTxDebugger.VERSION = "1.0.0"

--Helps break request and response bodies into loggable chunks (the ngx error log line seems to actually be limited to ~4096 bytes)
local function splitByChunk(text, chunkSize)
    local s = {}
    for i=1, #text, chunkSize do
        s[#s+1] = text:sub(i,i+chunkSize - 1)
    end
    return s
end
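-- Usage: splitByChunk("abcdefghij", 4) --> { "abcd", "efgh", "ij" }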

function KongTxDebugger:access(conf)
  if conf.log_request_body then
    kong.ctx.plugin.request_body = kong.request.get_raw_body()
  end
  kong.log("Does it exist ever or log? " .. kong.ctx.plugin.request_body)
  
  if conf.log_response_body then
    kong.ctx.plugin.response_body = ""
  end
end

function KongTxDebugger:body_filter(conf)
  if conf.log_response_body and conf.max_response_body_size > 0 then
    local chunk = ngx.arg[1]
    -- accumulate body chunks across body_filter calls, capped at the max size
    local res_body = kong.ctx.plugin.response_body .. (chunk or "")
    kong.ctx.plugin.response_body = string.sub(res_body, 1, conf.max_response_body_size)
  end
end

--Plugin currently assumes a single header key:value pair is smaller than the ngx error buffer size limit (2048 bytes); otherwise ngx will truncate it.
function KongTxDebugger:log(conf)
  if conf.log_on_response_status == nil or tonumber(conf.log_on_response_status) == kong.response.get_status() then
    kong.log("-------- START OF LOGGED TX --------") 

    if conf.log_request_headers then
      local i = 1
      for k,v in pairs(kong.request.get_headers(1000)) do 
        kong.log("Request Header #",tostring(i)," ", k," : ",v)
        i = i + 1
      end
    end

    if conf.log_request_query_params then
      local i = 1
      for k,v in pairs(kong.request.get_query(1000)) do 
        kong.log("Query Parameter #",tostring(i)," ", k," : ",v)
        i = i + 1
      end
    end

    if conf.log_request_body and kong.ctx.plugin.request_body then
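      -- 14 matches #("Request Body: ") used in the else branch below;
      -- 25 leaves room for the "Request Body Chunk #N : " prefix on each chunk line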
      if conf.ngx_log_buffer_size - 14 < string.len(kong.ctx.plugin.request_body) then
        kong.log("-------- START OF LARGE REQUEST BODY TX --------")
        local i = 1
        local request_body_array = splitByChunk(kong.ctx.plugin.request_body, conf.ngx_log_buffer_size - 25)
        for _,v in ipairs(request_body_array) do
           kong.log("Request Body Chunk #",tostring(i)," : ",v)
           i = i + 1
        end
        kong.log("-------- END OF LARGE REQUEST BODY TX --------")
      else
        kong.log("Request Body: ", kong.ctx.plugin.request_body)
      end
    end

    if conf.log_response_headers then
      local i = 1
      for k,v in pairs(kong.response.get_headers(1000)) do 
        kong.log("Response Header #",tostring(i)," ", k," : ",v)
        i = i + 1
      end
    end

    if conf.log_response_body and kong.ctx.plugin.response_body then
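      -- 15 matches #("Response Body: ") below; 26 covers the "Response Body Chunk #N : " prefix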
      if conf.ngx_log_buffer_size - 15 < string.len(kong.ctx.plugin.response_body) then
        kong.log("-------- START OF LARGE RESPONSE BODY TX --------")
        local i = 1
        local response_body_array = splitByChunk(kong.ctx.plugin.response_body, conf.ngx_log_buffer_size - 26)
        for _,v in ipairs(response_body_array) do
           kong.log("Response Body Chunk #",tostring(i)," : ",v)
           i = i + 1
        end
        kong.log("-------- END OF LARGE RESPONSE BODY TX --------")
      else
        kong.log("Response Body: ", kong.ctx.plugin.response_body)
      end
    end
    kong.log("-------- END OF LOGGED TX --------")    
  end
end

return KongTxDebugger

Old Schema:

return {
  no_consumer = true,
  fields = {
    log_request_headers = { type = "boolean", required = false, default = true },
    log_request_query_params = { type = "boolean", required = false, default = false },
    log_request_body = { type = "boolean", required = false, default = false },
    log_response_headers = { type = "boolean", required = false, default = true },
    log_response_body = { type = "boolean", required = false, default = false },
    max_response_body_size = { type = "number", default = 65536 },
    log_on_response_status = { type = "string", default = ""},
    ngx_log_buffer_size = { type = "number", default = 4012 }
  }
}

Old Code:

local BasePlugin = require "kong.plugins.base_plugin"
local KongTxDebugger = BasePlugin:extend()

KongTxDebugger.PRIORITY = 12
KongTxDebugger.VERSION = "1.0.0"

function KongTxDebugger:new()
  KongTxDebugger.super.new(self, "kong-tx-debugger")
end

--Helps break request and response bodies into loggable chunks (the ngx error log line seems to actually be limited to ~4096 bytes)
local function splitByChunk(text, chunkSize)
    local s = {}
    for i=1, #text, chunkSize do
        s[#s+1] = text:sub(i,i+chunkSize - 1)
    end
    return s
end

function KongTxDebugger:access(conf)
  KongTxDebugger.super.access(self)
  
  if conf.log_request_body then
    kong.ctx.plugin.request_body = kong.request.get_raw_body()
  end
  
  if conf.log_response_body then
    kong.ctx.plugin.response_body = ""
  end
end

function KongTxDebugger:body_filter(conf)
  KongTxDebugger.super.body_filter(self)

  if conf.log_response_body and conf.max_response_body_size > 0 then
    local chunk = ngx.arg[1]
    local res_body = kong.ctx.plugin.response_body .. (chunk or "")
    kong.ctx.plugin.response_body = string.sub(res_body, 1, conf.max_response_body_size)
  end
end

--Plugin currently assumes a single header key:value pair is smaller than the ngx error buffer size limit (2048 bytes); otherwise ngx will truncate it.
function KongTxDebugger:log(conf)
  KongTxDebugger.super.log(self)
  
  if conf.log_on_response_status == "" or tonumber(conf.log_on_response_status) == kong.response.get_status() then
    kong.log("-------- START OF LOGGED TX --------") 

    if conf.log_request_headers then
      local i = 1
      for k,v in pairs(kong.request.get_headers(1000)) do 
        kong.log("Request Header #",tostring(i)," ", k," : ",v)
        i = i + 1
      end
    end

    if conf.log_request_query_params then
      local i = 1
      for k,v in pairs(kong.request.get_query(1000)) do 
        kong.log("Query Parameter #",tostring(i)," ", k," : ",v)
        i = i + 1
      end
    end

    if conf.log_request_body then
      if conf.ngx_log_buffer_size - 14 < string.len(kong.ctx.plugin.request_body) then
        kong.log("-------- START OF LARGE REQUEST BODY TX --------")
        local i = 1
        local request_body_array = splitByChunk(kong.ctx.plugin.request_body, conf.ngx_log_buffer_size - 25)
        for _,v in ipairs(request_body_array) do
           kong.log("Request Body Chunk #",tostring(i)," : ",v)
           i = i + 1
        end
        kong.log("-------- END OF LARGE REQUEST BODY TX --------")
      else
        kong.log("Request Body: ", kong.ctx.plugin.request_body)
      end
    end

    if conf.log_response_headers then
      local i = 1
      for k,v in pairs(kong.response.get_headers(1000)) do 
        kong.log("Response Header #",tostring(i)," ", k," : ",v)
        i = i + 1
      end
    end

    if conf.log_response_body then
      if conf.ngx_log_buffer_size - 15 < string.len(kong.ctx.plugin.response_body) then
        kong.log("-------- START OF LARGE RESPONSE BODY TX --------")
        local i = 1
        local response_body_array = splitByChunk(kong.ctx.plugin.response_body, conf.ngx_log_buffer_size - 26)
        for _,v in ipairs(response_body_array) do
           kong.log("Response Body Chunk #",tostring(i)," : ",v)
           i = i + 1
        end
        kong.log("-------- END OF LARGE RESPONSE BODY TX --------")
      else
        kong.log("Response Body: ", kong.ctx.plugin.response_body)
      end
    end
  end

  kong.log("-------- END OF LOGGED TX --------")
end

return KongTxDebugger

Steps To Reproduce

  1. Try the new vs. old plugin code on a given route. With the new code, request and response body logging no longer works via the PDK ctx logic, and kong.log() does not seem to work in the access phase.

Additional Details & Logs

  • Kong version 1.4.3
thibaultcha (Member) commented May 5, 2020

I have not looked at the plugin code you pasted yet, but just from reading your report, this seems related to #5851. Not sure what happened here, but it seems that kong.ctx.plugin and kong.log no longer work when plugins yield during their execution. #5851 already fixes the kong.ctx.plugin issue.
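
A sketch of the suspected failure mode — assumptions only, inferred from the report (the function body is illustrative, not the actual plugin):

-- kong.request.get_raw_body() can yield (e.g. while the request body is
-- being read); the symptoms suggest the dynamic kong.ctx.plugin / kong.log
-- namespace was no longer bound to this request after the coroutine resumed
function Plugin:access(conf)
  local body = kong.request.get_raw_body() -- potential yield point
  kong.ctx.plugin.request_body = body      -- reportedly gone by the log phase
end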

bungle added the task/bug label May 5, 2020
jeremyjpj0916 (Contributor, Author) commented May 5, 2020

I remember this plugin working fine in the past, though, so I can only assume:

1.) Stripping the plugin of the base extender and removing the .super() boilerplate calls at the beginning of each phase is what broke the previously working behavior. I also added local kong = kong, but I wouldn't think that's problematic since other bundled plugins do the same.

2.) A Kong PDK change or version change broke existing behavior somewhere in the 1.x.x series?

Or maybe 1 and 2 both play a role together, heh.

I'll apply the patch you linked to my dev env, retest, and confirm whether kong.ctx.plugin can be used as intended.

jeremyjpj0916 (Contributor, Author) commented May 8, 2020

Hoping to test Friday now; I got bogged down with other priorities. I see the PR for the fix is still pending (and it sounds like it may only fix the kong.ctx.plugin issue, so kong.log() in non-log phases may still be affected).

Edit - I see a new PR has been opened that should fix both of these issues; will give them a go later tonight after dinner!

thibaultcha added a commit that referenced this issue May 8, 2020
* Store namespace keys in `ngx.ctx` to ensure all dynamically generated
  namespaces are isolated on a per-request basis.
* Introduce a new global API (which is a private API for use by core
  only) to explicitly delete a namespace.

A note on the benefits of this new implementation:

* By using a table to keep track of namespaces in `ngx.ctx`, we ensure
  that users of `ngx.ctx` cannot access the namespaces table, thus
  properly isolating it and avoiding polluting `ngx.ctx` for core and/or
  plugins. We can think of it as a double-pointer reference to the
  namespace. Each request gets a pre-allocated table for 4 namespaces
  (of course not limited to 4), with the assumption that most instances
  do not execute more than 4 plugins using `kong.ctx.plugin` at a time.
* We ensure that namespace key references cannot be `nil` to ensure
  safe usage from the core.
* All namespace key references are still weak when assigned to a
  namespace, thus tying the lifetime of the namespace to that of its key
  reference. Similarly to the previous implementation, this is done to
  ensure we avoid potential memory leaks. That said, the
  `kong.ctx.plugin` namespace does not use the plugin's conf for that
  purpose anymore (see 40dc146), which
  alleviates such concerns in the current usage of this API.
* All tables allocated for namespace key references and the namespace
  keys themselves will be released when `ngx.ctx` is GC'ed.
* We also ensure that `kong.ctx.*` returns `nil` when `ngx.ctx` is not
  supported (the "init" phase).

Co-Authored-By: tdelaune <[email protected]>

Fix #4379
Fix #5853
See #5851
See #5459
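
For illustration, a sketch of the key-table idea described above (hypothetical field and function names, not the actual Kong source):

local ok, new_tab = pcall(require, "table.new")
if not ok then
  new_tab = function() return {} end
end

-- Namespaces live behind a dedicated table in ngx.ctx, so other users of
-- ngx.ctx never see them (the "double-pointer" reference described above)
local function get_namespaces(ctx)
  local namespaces = ctx.KONG_NAMESPACES -- hypothetical field name
  if not namespaces then
    namespaces = new_tab(0, 4) -- pre-allocated for 4 namespaces, not a hard limit
    ctx.KONG_NAMESPACES = namespaces
  end
  return namespaces
end

local function set_namespace(ctx, key, namespace)
  assert(key ~= nil, "namespace key cannot be nil")
  get_namespaces(ctx)[key] = namespace
end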
thibaultcha (Member) commented
#5873 should fix the issues here.

jeremyjpj0916 (Contributor, Author) commented
Yeah, I started looking at the 1.4.3 vs 2.x.x source to test your PR, but so much has changed and been refactored in the last year that I don't want to invest in backporting all these changes to prod right now as a fix (it seems only a few non-critical features of my plugins are impacted by the PDK kong.ctx.plugin.* + kong.log() issues while running 1.4.3). Instead, we're going to make a team push in the next 4-6 weeks to get onto the latest 2.x.x series of Kong; we've been too far behind for too long, and I've been building up too many patch files and lists of differences between what's fixed in newer versions and my own. Planning on going to 1.5.x shortly as a springboard to whatever 2.x.x version is current by then. Sorry I could not assist here right now; I will retest when we get there.

thibaultcha added commits that referenced this issue May 14, 2020 (same commit message as above)
jeremyjpj0916 (Contributor, Author) commented
Can confirm the 2.0.5 series did fix all of this now that we are on it. Glad to finally be caught up on Kong versions, and looking to push onward to 2.1 in the next 1-2 weeks!
