
[WARN] tls handshake with 127.0.0.1:XXXXX failed: received corrupt message of type InvalidContentType #4112

Closed
rwjack opened this issue Nov 28, 2023 · 8 comments · Fixed by #4143
Labels
  • bug: Something isn't working
  • good first issue: Good for newcomers
  • low priority: Won't fix anytime soon, but will accept PR if provided
  • troubleshooting: There might be a bug or it could be user error; more info needed

Comments

rwjack commented Nov 28, 2023

Subject of the issue

Container shows as unhealthy in Portainer, even though everything is working. I'm not sure why I'm getting these WARN logs; the curl healthcheck works within the container.

[2023-11-28 13:04:14.301][rocket_http::tls::listener][WARN] tls handshake with 127.0.0.1:39154 failed: received corrupt message of type InvalidContentType
[2023-11-28 13:05:14.348][rocket_http::tls::listener][WARN] tls handshake with 127.0.0.1:51004 failed: received corrupt message of type InvalidContentType
[2023-11-28 13:05:30.237][rocket_http::tls::listener][WARN] tls handshake with 127.0.0.1:34490 failed: cannot decrypt peer's message
[2023-11-28 13:06:14.389][rocket_http::tls::listener][WARN] tls handshake with 127.0.0.1:43980 failed: received corrupt message of type InvalidContentType
[2023-11-28 13:07:14.432][rocket_http::tls::listener][WARN] tls handshake with 127.0.0.1:51626 failed: received corrupt message of type InvalidContentType

Deployment environment

  • vaultwarden version:
    1.30.1
  • Install method:
    Docker

  • Clients used:
    Irrelevant

  • Reverse proxy and version:
    Irrelevant

  • Other relevant details:

version: "3"

volumes:
  data:

services:
  bitwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped

    container_name: bitwarden
    hostname: bitwarden

    environment:
      - TZ=[redacted]

    ports:
      - "[redacted]:80"
      #- "[redacted]:3012" WSS disabled due to bw extension data leakage via GET request

    volumes:
      - data:/data/
      - ./certs/:/etc/ssl/custom/
      - ./.env:/.env:ro

Relevant .env changes:

DOMAIN=https://[redacted]

## Rocket specific settings
## See https://rocket.rs/v0.4/guide/configuration/ for more details.
# ROCKET_ADDRESS=0.0.0.0
# ROCKET_PORT=80  # Defaults to 80 in the Docker images, or 8000 otherwise.
# ROCKET_WORKERS=10
ROCKET_TLS={certs="/etc/ssl/custom/[redacted].pem",key="/etc/ssl/custom/[redacted]-key.pem"}

Steps to reproduce

Expected behaviour

Container to show as healthy

Actual behaviour

Container shows as unhealthy, even though everything is working.

Troubleshooting data

root@bitwarden:/# curl --insecure --fail --silent --show-error https://localhost:80/alive || exit 1
"2023-11-28T12:06:47.277011Z"root@bitwarden:/# echo $?
0
BlackDex (Collaborator) commented Dec 1, 2023

It seems to work fine for me.
Try setting the LOG_LEVEL to debug and see if you get some more information.
I tested it with curl versions 7.68.0, 7.81.0, 7.88.1, and 8.4.0; they all worked fine.

I tested both the FQDN and localhost; nothing breaks.

It could be your host platform causing some (Open)SSL client settings to be different host-wide.
The certs might be unsupported in some way, or not have a full chain available?

Also, I see you use port 80, while it looks like you do a port proxy via Docker. I'm not sure if you use a reverse proxy in front of Vaultwarden; if so, it could also be your reverse proxy causing the issue.

I'm not able to reproduce this on my side.

BlackDex added the `troubleshooting` label on Dec 1, 2023
rwjack (Author) commented Dec 1, 2023

Try setting the LOG_LEVEL to debug and see if you get some more information.

Let me give it a shot.


It could be your host platform causing some (Open)SSL client settings to be different host-wide.

Debian 11 with the latest version of Docker, I doubt that's the issue.


The certs might be unsupported in some way, or not have a full chain available?

I don't think they have a full chain; it's just a regular certificate generated by mkcert.


Also, I see you use port 80, while it looks like you do a port proxy via Docker. I'm not sure if you use a reverse proxy in front of Vaultwarden; if so, it could also be your reverse proxy causing the issue.

Port 80 is the default for the container. I expose another port on the Debian host, which is open only to the reverse proxy, but I don't see how that's related: the healthcheck should curl localhost, since ROCKET_ADDRESS (by default, I hope) is configured to 0.0.0.0.

addr="${ROCKET_ADDRESS}"
if [ -z "${addr}" ] || [ "${addr}" = '0.0.0.0' ] || [ "${addr}" = '::' ]; then
    addr='localhost'
fi
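Why a ROCKET_TLS that the script can't see matters here: besides picking the address as in the snippet above, the healthcheck also has to pick a scheme. A minimal sketch of that decision (the scheme logic and the `/alive` path are assumptions about healthcheck.sh based on this thread, not its exact code):

```shell
#!/bin/sh
# Sketch of how the healthcheck derives its probe URL.
# Assumption: scheme selection keys off ROCKET_TLS, as discussed in this thread.

addr="${ROCKET_ADDRESS}"
if [ -z "${addr}" ] || [ "${addr}" = '0.0.0.0' ] || [ "${addr}" = '::' ]; then
    addr='localhost'
fi

# If ROCKET_TLS only lives in a .env file the script never reads, this test
# fails and the probe speaks plain http to a TLS listener, which matches the
# "Received HTTP/0.9" client error and the InvalidContentType server warnings.
if [ -n "${ROCKET_TLS}" ]; then
    scheme='https'
else
    scheme='http'
fi

url="${scheme}://${addr}:${ROCKET_PORT:-80}/alive"
echo "$url"
```

With ROCKET_TLS invisible to the script, the probe URL comes out as plain http against a port where Rocket is serving TLS, so each healthcheck run lines up with one handshake warning in the container log.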

rwjack (Author) commented Dec 1, 2023

@BlackDex I hadn't tried running the healthcheck manually until now, though.

This is the error:

root@bitwarden:/# ./healthcheck.sh
curl: (1) Received HTTP/0.9 when not allowed

And the timestamp matches a corresponding error in the container logs:

[2023-12-01 12:41:08.865][rocket_http::tls::listener][WARN] tls handshake with 127.0.0.1:34488 failed: received corrupt message of type InvalidContentType

Relevant issue: curl/curl#12183

BlackDex (Collaborator) commented Dec 1, 2023

Ah! Looks like ROCKET_TLS is not seen as an env variable by the healthcheck.sh script, probably because you are using a .env file there, which isn't read by the script.

It looks like that scenario isn't covered.

BlackDex added the `bug`, `low priority`, and `good first issue` labels on Dec 1, 2023
rwjack (Author) commented Dec 1, 2023

Okay, definitely a minor bug. I fixed it by adding this to my docker-compose:

    env_file:
      - .env

Not sure why I didn't set it like that in the first place; that's how I do it everywhere. Anyway, all good now.

This is probably the reason:

## By default, Vaultwarden expects for this file to be named ".env" and located
## in the current working directory.

BlackDex (Collaborator) commented Dec 1, 2023

That only applies when not running via Docker, but that could be cleared up. There is also an ENV variable which can relocate the .env file ;) which doesn't make it any clearer, haha.

rwjack (Author) commented Dec 1, 2023

Gotcha, thanks for the help!

Even though my issue is resolved, do you still want to keep this open as a reminder?

BlackDex (Collaborator) commented Dec 1, 2023

Yes please, this is something we need to fix, I think.

BlackDex added a commit to BlackDex/vaultwarden that referenced this issue Dec 6, 2023
If someone is using a `.env` file, or has configured the `ENV_FILE` variable
to point at one, those settings were missed by the healthcheck.

So, `DOMAIN` and `ROCKET_TLS` were not seen, and not used in these cases.

This commit fixes that by checking for the file and, if it exists, loading
those variables first.

Fixes dani-garcia#4112
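The fix described in the commit message can be sketched like this (the `.env` default and the `set -a` approach are assumptions for illustration; see the merged PR for the actual patch):

```shell
#!/bin/sh
# Sketch of the healthcheck fix: load an env file, if one exists, before
# reading DOMAIN/ROCKET_TLS. ENV_FILE defaulting to ".env" is an assumption.

env_file="${ENV_FILE:-.env}"
if [ -r "$env_file" ]; then
    set -a          # auto-export every assignment made while sourcing
    . "$env_file"
    set +a
fi

# From here on, variables defined only in the env file are visible, so the
# script can correctly choose https when ROCKET_TLS is configured there.
if [ -n "${ROCKET_TLS}" ]; then
    scheme='https'
else
    scheme='http'
fi
```

Sourcing with `set -a` exports every variable the file assigns, so the rest of the script (and any commands it spawns) sees the same configuration Vaultwarden itself loads from the file.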
dani-garcia pushed a commit that referenced this issue Dec 9, 2023
* Fix BWDC when re-run with cleared cache

Using the BWDC with a cleared cache caused invited users to be converted
to accepted users.

The problem was an incorrect check in the `restore` function.

Fixes #4114

* Remove useless variable

During some refactoring this seems to have been overlooked.
This variable gets filled but isn't used at all afterwards.

Fixes #4105

* Check some `.git` paths to force a rebuild

When a checked-out repo switches to a specific tag, and that tag does
not have anything else changed in the files except the tag, it could
happen that the build process doesn't see any changes, while it could be
that the version string needs to be different.

This commit ensures that if some specific paths are changed within the
.git directory, cargo will be triggered to rebuild.

Fixes #4087

* Do not delete dir on file delete

Previously, during a `delete_file` call we also tried to delete the
parent directory, ignoring all errors, such as the directory not being empty.

Since this function is called `delete_file` and does not mention
anything regarding a directory, I have removed that code; it will
now only delete the file and leave the rest as-is.

If this is somehow still needed or wanted, which I do not think we want,
then we should create a new function.

Fixes #4081

* Fix healthcheck when using an ENV file

If someone is using a `.env` file, or has configured the `ENV_FILE` variable
to point at one, those settings were missed by the healthcheck.

So, `DOMAIN` and `ROCKET_TLS` were not seen, and not used in these cases.

This commit fixes that by checking for the file and, if it exists, loading
those variables first.

Fixes #4112

* Add missing route

While there was a function and a derive, this endpoint wasn't part of
the routes. Since Bitwarden does have this endpoint, I'll add the route
instead of deleting it.

Fixes #4076
Fixes #4144

* Update crates to update the openssl crate

Because of a bug in the openssl-sys crate, we had pinned the version to an
older one. That issue has been fixed, and a new release came out two days ago.

This commit updates the openssl crates including others.
This should also fix the issues with building Vaultwarden using newer
versions of LibreSSL.

Fixes #4051