Invalid request Thin error #54

Closed
GarPit opened this issue Jun 20, 2013 · 22 comments

Comments

@GarPit

GarPit commented Jun 20, 2013

!! Invalid request
HTTP element QUERY_STRING is longer than the (1024 * 10) allowed length.
    /usr/lib/ruby/gems/1.8/gems/thin-1.2.8/lib/thin/request.rb:82:in `execute'
    /usr/lib/ruby/gems/1.8/gems/thin-1.2.8/lib/thin/request.rb:

It seems it doesn't work for more complicated and larger partials. Do you have any thoughts on this?

@chrismccord
Owner

Can I get a full trace or is this it? Is the error being thrown when publishing to the pubsub server? Are you using Faye or Pusher?

@chrismccord
Owner

A full trace from your rails log starting with the request would be helpful if you have it.

@GarPit
Author

GarPit commented Jun 20, 2013

Yes, when publishing to the Faye pubsub server.

@chrismccord
Owner

Faye does a "batch publish" which groups all the pubsub requests into a single POST for efficiency. I'm thinking we will need to detect the size of the batched request and split it up given thin's limit. To quickly verify this, can you temporarily monkey patch the Faye client in an initializer with the following snippet, re-run the offending action, and report if you still receive a thin error?

# place in config/initializers/faye_patch.rb
module Sync
  module Clients
    class Faye
      class Message
        def self.batch_publish(messages)
          messages.each do |message|
            message.publish
          end
        end
      end
    end
  end
end

The monkey-patch will post each publish individually instead of batching them together.
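
To sketch the splitting idea mentioned above (not part of the gem; split_batch and MAX_BYTES are made-up names, and the byte budget stands in for Thin's 10K limit):

require "json"

MAX_BYTES = 1024 * 10

# Group messages into slices whose serialized size stays under the byte budget;
# each slice could then be published as its own request.
def split_batch(messages, max_bytes = MAX_BYTES)
  messages.each_with_object([[]]) do |message, chunks|
    chunks << [] if chunks.last.any? && (chunks.last + [message]).to_json.bytesize > max_bytes
    chunks.last << message
  end
end

split_batch([{ html: "<li>item</li>" }] * 50, 200).size # => several small slices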

@GarPit
Author

GarPit commented Jun 21, 2013

It doesn't work; I get the same "Invalid request" issue in the Thin output.

@chrismccord
Owner

Oops. I believe I had you place the patch in the wrong file. Can you confirm that the error is being thrown by the sync.ru rack app (running under thin) and not the rails app? I'm assuming the rack app is throwing the error, so you will need to place the patch in the sync.ru file, not the rails initializer. Please try that and report back. Thanks!

Your sync.ru should be:

# Run with: rackup sync.ru -E production
require "bundler/setup"
require "yaml"
require "faye"
require "sync"

module Sync
  module Clients
    class Faye
      class Message
        def self.batch_publish(messages)
          messages.each do |message|
            message.publish
          end
        end
      end
    end
  end
end


Faye::WebSocket.load_adapter 'thin'

Sync.load_config(
  File.expand_path("../config/sync.yml", __FILE__),
  ENV["RAILS_ENV"] || "development"
)

run Sync.pubsub_app

@chrismccord
Owner

I was able to recreate this, and I am working on a patch that gzips the payloads and sends batches in chunks, which should accommodate most use cases. Local experiments show a 90% reduction in payload size, which should give us some breathing room within Thin's HTTP request limit. Stay tuned.
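
To illustrate the compression in isolation (the payload below is a made-up stand-in for the batched partials, and the actual ratio will vary):

require "json"
require "zlib"

# Hypothetical rendered-partial payload standing in for the batched messages
payload = { html: "<li>item</li>" * 500 }.to_json

compressed = Zlib::Deflate.deflate(payload)

puts "original: #{payload.bytesize} bytes, deflated: #{compressed.bytesize} bytes"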

@GarPit
Author

GarPit commented Jun 24, 2013

Awesome! 👍

@chrismccord
Owner

I have a branch with decent progress on this, but I'm having issues decompressing the payload once it makes it to Faye. I think something gets lost in translation when converting from URL params to hash attributes. I'll keep this updated as I progress.
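
To illustrate what I suspect is happening (just a guess, not the branch's actual code): the deflated bytes are binary, so they can get mangled going through param encoding unless they are Base64-wrapped first.

require "zlib"
require "base64"

original = '{"html":"<li>item</li>"}'

# Deflated output is raw binary and not safe to pass through form/query encoding as-is;
# Base64-encoding the compressed bytes keeps the payload ASCII-safe end to end.
encoded = Base64.strict_encode64(Zlib::Deflate.deflate(original))
decoded = Zlib::Inflate.inflate(Base64.strict_decode64(encoded))

decoded == original # => true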

@blackxored

What's the progress on this issue? I'm still getting Invalid request on create requests.

@chrismccord
Owner

@blackxored I haven't had time to tackle the gzip decompression losing data en route. The current solution is to keep your partials under the 10K limit for Faye. A more flexible solution would be to find a pubsub server that allows larger payloads, but that's another project in waiting. I would love to tackle this once I find the time, but I'm maxed out at the moment.

@blackxored

Then I'm facing a different issue, because my partial is 4.0k (just checked). I can't get it to work on create or update; it only works on destroy (it removes the element from both browsers). Any clues?

@chrismccord
Owner

@blackxored since messages are "batched" with Faye (sent as a single request), it's possible you are still exceeding the limit. Can you try this monkey-patch to see if this is indeed the cause:

# place in config/initializers/faye_patch.rb
module Sync
  module Clients
    class Faye
      class Message
        def self.batch_publish(messages)
          messages.each do |message|
            message.publish
          end
        end
      end
    end
  end
end

@blackxored

I did that, no luck. I actually had to switch to Pusher, and I'm still having some issue with the partial for new items; the only one that works is the destroy one.


@chrismccord
Owner

Are you sure your sync scopes are set up properly?

@blackxored

Yes, I have tried with scope: @stream (our equivalent to project) and without it. I also tried with refetch: true, with no luck...


@chrismccord
Owner

Whether to include the scope when performing syncs from the controller/model depends on what you have included in your views. Please open a new issue detailing the problem, including example sync calls from your views/controllers.

@blackxored

OK, but the actual Thin issue is still a valid one, and all I get from the Thin output is "!! Invalid request" or something like that.

@wursttheke
Contributor

@chrismccord I'm suddenly running into this as well. Sorry to attack you on so many threads :)

HTTP element QUERY_STRING is longer than the (1024 * 10) allowed

I was asking myself why it is complaining about the query string being too long. If anything, it should be complaining about the body content being too large. It seems to be coming from this line:

https://github.com/chrismccord/sync/blob/master/lib/sync/clients/faye.rb#L49

It looks like the message is sent as a query parameter instead of inside the request body, where it normally should be and where it can get as big as we want (even without compression). So I replaced query with body and it started working like a charm:

def self.batch_publish_asynchronous(messages)
  Sync.reactor.perform do
    # Send the batched messages in the POST body instead of the query string
    EM::HttpRequest.new(Sync.server).post(body: {
      message: batch_messages_query_hash(messages).to_json
    })
  end
end

I believe the synchronous calls would also have to be changed, because as far as I know Net::HTTP.post_form(...) only uses the query string to pass data.
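
For the synchronous path, something along these lines might work (untested sketch; batch_publish_synchronous is just the name I'd expect for the counterpart, and it reuses Sync.server and batch_messages_query_hash from above):

require "net/http"
require "uri"
require "json"

module Sync
  module Clients
    class Faye
      class Message
        def self.batch_publish_synchronous(messages)
          uri = URI.parse(Sync.server)
          request = Net::HTTP::Post.new(uri.request_uri)
          # Form-encode the JSON into the request body instead of the query string
          request.set_form_data(message: batch_messages_query_hash(messages).to_json)
          Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
        end
      end
    end
  end
end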

What do you think? Can you confirm this could be the solution? Then I would get a PR ready.

@chrismccord
Owner

I (and others) have run into this issue, and the only solution so far has been to sync less data. If the request-body approach works, then I owe you some serious thanks! I'm less concerned about making the synchronous approach work in this case since it should be async in prod anyway. Please submit a PR!

@wursttheke
Contributor

I just ran a test with 200kB of partial data without a problem. PR coming...

@chrismccord
Owner

This is fixed on master, thanks to @wursttheke.
