This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Synapse should not require TLS certificate files #4554

Closed
spantaleev opened this issue Feb 4, 2019 · 9 comments

@spantaleev
Contributor

Testing with 0.99rc4, it seems like:

  • regardless of the no_tls setting's value (False or True), Synapse still tries to load tls_certificate_path or tls_private_key_path (I'm not sure which one it is, but it does try to read the file and fails with an exception if it's missing or not a valid file)

  • the comment in homeserver.yaml about no_tls seems backward ("Setting no_tls to False will do so (and avoid the need to give synapse a TLS private key)."); I guess this should say True rather than False (see the snippet below)
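
For reference, a minimal sketch of the relevant homeserver.yaml options as I understand them (the paths and domain are illustrative, not the actual generated defaults):

```yaml
# Excerpt (sketch) of the homeserver.yaml options relevant to this issue
no_tls: True

# Even with no_tls set, Synapse still tries to load these files:
tls_certificate_path: "/data/matrix.example.com.tls.crt"
tls_private_key_path: "/data/matrix.example.com.tls.key"
```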


I wish to use Synapse with a reverse-proxy and this issue is a blocker for that. I'd rather not provide any certificate files to Synapse at all.

Now, looking at #2840, unless something changed, it seems like Synapse may really need to access the certificate file.

This leads to all sorts of pain, especially when automated renewal is involved.
Not only do we need to renew the certificate and reload the reverse-proxy (no downtime there), but we also need to restart Synapse at the same time (reloading doesn't seem possible as explained in #1180).
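
For context, the setup I'm aiming for is Synapse listening on plain HTTP behind the reverse proxy, with no TLS material handled by Synapse at all. A rough sketch of the listener configuration I have in mind (port and options are illustrative):

```yaml
# Sketch: plain-HTTP listener behind a TLS-terminating reverse proxy
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    resources:
      - names: [client, federation]
        compress: false
```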

@richvdh
Member

richvdh commented Feb 4, 2019

this is a thing we should fix before 1.0

@richvdh richvdh added the v1.0 label Feb 4, 2019
@spantaleev
Contributor Author

Is it just a problem of wishing to read some file, or does it actually use it for something?

If this is to be fixed for 1.0 (and not 0.99), how should we do the 0.99 transition? By supplying any certificate file (even a self-signed one) just to prevent config parsing from breaking, or by supplying a proper working certificate (the actual one used for the reverse proxy)?

@richvdh
Member

richvdh commented Feb 4, 2019

Is it just a problem of wishing to read some file, or does it actually use it for something?

It exposes it in a field in the s2s api which is no longer used.

If this is to be fixed for 1.0 (and not 0.99), how should we do the 0.99 transition?

Bear in mind that there is no need to change anything to use 0.99.

By supplying any certificate file (even a self-signed one) just to prevent config parsing from breaking, or by supplying a proper working certificate (the actual one used for the reverse proxy)?

any certificate should do

@spantaleev
Contributor Author

I'm trying to prepare matrix-docker-ansible-deploy for 0.99 (and 1.0).

Existing installations (that run from before v0.99) do have a self-signed certificate which can be left in place, so there's no need to do anything.

On the other hand, new installations don't have a self-signed certificate generated in /data, because 0.99's generate command no longer does that. So I need to do something sensible in that case: either make the playbook generate a self-signed certificate just to keep 0.99 happy (sketched below), or pass along the real certificate into the container.
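
If I go the self-signed route, the playbook would do something roughly like this (a hypothetical task sketch; the task name, paths and domain are made up):

```yaml
# Hypothetical Ansible task: generate a throwaway self-signed certificate
# purely so that Synapse 0.99's config parsing is satisfied.
- name: Generate self-signed TLS certificate for Synapse
  command: >
    openssl req -x509 -newkey rsa:2048 -nodes -days 3650
    -subj "/CN=matrix.example.com"
    -keyout /matrix/synapse/data/dummy.tls.key
    -out /matrix/synapse/data/dummy.tls.crt
  args:
    creates: /matrix/synapse/data/dummy.tls.crt
```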

Thank you for your answers!

I now know that both solutions are feasible.
They each have pros and cons, so I'll figure out which one to choose and then migrate people over to Synapse 0.99.

richvdh added a commit that referenced this issue Feb 5, 2019
@hawkowl
Contributor

hawkowl commented Feb 5, 2019

I think that this still needs to be exposed for pre-0.99 Synapses, which will verify the TLS fingerprint? But for 1.0, it can def. be removed.

@richvdh
Member

richvdh commented Feb 5, 2019

meh. pre-0.99 Synapses will only check it when fetching the server keys, which is normally delegated via matrix.org, which is on 0.99, so doesn't care. We might as well make it optional for now.

@spantaleev
Contributor Author

I was about to generate a temporary self-signed certificate to make Synapse v0.99 happy, but it looks like once #4566 lands in 0.99rc5 (or 0.99 final), I'll be able to keep things simpler and use tls_certificate_path: /conf/dummy.tls.crt without doing anything extra.

Thanks!

richvdh added a commit that referenced this issue Feb 5, 2019
Also:

* Fix wrapping in docker readme
* Clean up some docs on the docker image
* a workaround for #4554
@neilisfragile neilisfragile added v0.99.1 and removed v1.0 labels Feb 7, 2019
@hawkowl hawkowl self-assigned this Feb 7, 2019
@Joshndroid

This seems to be the issue I was having when trying to set up a fresh server. The server would not begin listening on its ports (python wasn't visible when running netstat); checking further, it seems it was unable to find the TLS certificate and would error out. I tried changing the true to a false, but that didn't seem to help. Luckily I had a backup of my Hyper-V instance that I could restore.

Then, when upgrading the previous instance, I saw that it tried to overwrite my homeserver.yaml with all the new TLS changes, etc. For now I kept mine, but I'm assuming these changes will eventually need to be merged into my local homeserver.yaml (i.e. for v1.0), once they no longer break the server?

richvdh added a commit that referenced this issue Feb 11, 2019
Rather than have to specify `no_tls` explicitly, infer whether we need to load
the TLS keys etc from whether we have any TLS-enabled listeners.

Fixes #4554
richvdh added a commit that referenced this issue Feb 11, 2019
If TLS is disabled, it should not be an error if no cert is given.

Fixes #4554.
richvdh added a commit that referenced this issue Feb 12, 2019
If TLS is disabled, it should not be an error if no cert is given.

Fixes #4554.
@richvdh
Member

richvdh commented Feb 12, 2019

fixed in #4618
