thread 'main' panicked at 'lambda runtime failed: Error("failed to convert header to a str", line: 0, column: 0)' #786
Comments
if you set
Actually, now that I look at it, it might be something caused by Lambda sending the runtime an unexpected internal value 🤔 I might take a look at surfacing all that information better.
I cannot find anything in the runtime that could have thrown that panic.
Thanks for your comments. So your suggestion would be to set
It just increases the log volume; there is no difference performance-wise. We don't print payloads by default because there might be sensitive information in them, and we don't want to be responsible for what your CloudWatch logs include.
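For reference, a minimal sketch of wiring up that kind of debug logging when writing a handler directly against the lambda_runtime crate (the function name, payload type, and filter directive here are illustrative; the thread's actual setup goes through the Lambda Web Adapter, so treat this only as an assumption about how the verbosity knob is usually exposed):

```rust
use lambda_runtime::{service_fn, Error, LambdaEvent};
use serde_json::Value;

async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
    // Only emitted when the filter allows debug level, e.g. RUST_LOG=debug.
    tracing::debug!(payload = %event.payload, "received event");
    Ok(event.payload)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Honor RUST_LOG so the runtime's internal tracing output shows up in
    // CloudWatch; requires tracing-subscriber's "env-filter" feature.
    tracing_subscriber::fmt()
        .with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
        .without_time() // CloudWatch adds its own timestamps
        .init();

    lambda_runtime::run(service_fn(handler)).await
}
```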
Had to wait for almost 12 hours until another event occurred. I exported the CloudWatch logs to a gist: https://gist.github.com/andreas-venturini/d68aa1266795b17f2ec623b16757e622 I anonymized the user IP and our AWS account ID and replaced the Lambda domain name w/ XXX; other than that the logs are unchanged. We have a CloudFront origin group w/ automatic failover to GCP Cloud Run (as we're still trialing Lambda for our use case), and Cloud Run was able to process the request without error (the only difference is that Cloud Run uses x86_64 architecture). So it seems to me the problem is caused either by the ARM Docker container service or by the Lambda Rust runtime/AWS Lambda Web Adapter?
I can get this to fail consistently w/ Lambda. Maybe the UTF-8 character in the following header trips up the Rust runtime @calavera
@andreas-venturini I will take a look. Could you share the version of Lambda Web Adapter you are using?
It might be related to this: #509. See this specific comment: #509 (comment)
@andreas-venturini I couldn't reproduce the error with that filename.
Could you share the original filename? It would be best to share the original file if possible.
@bnusunny here is a link to the source file (valid for 12 hours)
Got it. Thanks!
I reproduced the issue with
The issue happened at this line: let body = serde_json::to_vec(&body)?; The runtime uses serde_json to serialize the response. @calavera any suggestions on how to handle non-UTF-8 characters in the headers?
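For anyone following along, here is a small standalone sketch of how an error like the one in the title can surface from serde_json::to_vec when a header serializer converts values with to_str(). The Headers wrapper and the header value are illustrative, not the runtime's actual types:

```rust
use http::{HeaderMap, HeaderValue};
use serde::ser::{Error as _, Serialize, SerializeMap, Serializer};

struct Headers<'a>(&'a HeaderMap);

impl Serialize for Headers<'_> {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        let mut map = serializer.serialize_map(Some(self.0.keys_len()))?;
        for key in self.0.keys() {
            // to_str() only accepts visible ASCII, so a UTF-8 filename in
            // content-disposition turns into a serialization error here.
            let value = self.0[key].to_str().map_err(S::Error::custom)?;
            map.serialize_entry(key.as_str(), value)?;
        }
        map.end()
    }
}

fn main() {
    let mut headers = HeaderMap::new();
    headers.insert(
        "content-disposition",
        HeaderValue::from_bytes("attachment; filename=\"naïve.png\"".as_bytes()).unwrap(),
    );
    // Prints Err(Error("failed to convert header to a str", ...)),
    // i.e. the same message as the panic in the title.
    println!("{:?}", serde_json::to_vec(&Headers(&headers)));
}
```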
Nope. Serde just doesn't support it, as explained in #509 (comment)
@andreas-venturini Can you switch on response streaming mode? It should work. I need to figure out why.
Wait,
I think I found the root cause. This line here uses to_str(), which only succeeds for visible-ASCII header values:
let map_value = headers[key].to_str().map_err(S::Error::custom)?;
Changing it to decode the raw bytes as UTF-8 instead should fix it:
let map_value = String::from_utf8(headers[key].as_bytes().to_vec()).map_err(S::Error::custom)?;
I will send a PR soon.
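For context, a minimal sketch (the header value is illustrative) of why that change helps: HeaderValue::to_str() rejects any byte outside visible ASCII, while decoding the raw bytes as UTF-8 accepts a UTF-8 filename:

```rust
use http::HeaderValue;

fn main() {
    // Illustrative UTF-8 filename in a content-disposition value.
    let value =
        HeaderValue::from_bytes("attachment; filename=\"naïve.png\"".as_bytes()).unwrap();

    // to_str() only accepts visible ASCII bytes, so this fails.
    assert!(value.to_str().is_err());

    // The same bytes are valid UTF-8, so the proposed code path succeeds.
    let decoded = String::from_utf8(value.as_bytes().to_vec()).unwrap();
    assert_eq!(decoded, "attachment; filename=\"naïve.png\"");
}
```

Note that a header value whose bytes are not valid UTF-8 (e.g. Latin-1 encoded) would still fail with String::from_utf8.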
@bnusunny thanks for figuring this out 👍
I briefly tried that; however, in response streaming mode the Lambda metrics report numerous errors during function executions. What's strange: request processing does not seem to be affected in any way, nor are there any errors reported in CloudWatch or X-Ray. Are these internal Lambda errors? If so, is there a way to gain visibility into this? On the chart one can clearly see when buffered mode was changed to response streaming and back (in buffered mode, function execution errors are 0).
@andreas-venturini I didn't see such an error rate with response streaming. The error means either the Lambda function threw an exception or the Lambda service got errors (such as a function timeout or a wrong response data format). Could you open an issue in the Lambda Web Adapter repo for this response streaming issue? If you can provide the original files that cause this issue, it would be very helpful. Search through the CloudWatch Logs for
We use the AWS Lambda Web Adapter in combination with a Docker container image to resize images on the fly. This works great, but occasionally (though very rarely) we observe the following error in CloudWatch:
Any pointers on how we might debug this would be appreciated!