
uwsgi backtrace with cryptography-2.0 #3804

Closed
hwoarang opened this issue Jul 21, 2017 · 27 comments
@hwoarang

Hi,

We are seeing the following failure in the OpenStack CI ever since cryptography-2.0 was released and installed by pip. Reverting back to cryptography-1.9 fixes the issue.

*** Starting uWSGI 2.0.15 (64bit) on [Thu Jul 20 16:41:28 2017] ***
compiled with version: 4.8.5 on 20 July 2017 16:38:12
os: Linux-4.4.57-18.3-default #1 SMP Thu Mar 30 06:39:47 UTC 2017 (39c8557)
nodename: keystone1
machine: x86_64
clock source: unix
detected number of CPU cores: 2
current working directory: /
detected binary path: /openstack/venvs/keystone-testing/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
your processes number limit is 1850
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: enabled
uWSGI http bound on :37359 fd 3
uwsgi socket 0 bound to TCP address 127.0.0.1:5001 fd 6
Python version: 2.7.13 (default, Mar 22 2017, 12:31:17) [GCC]
Set PythonHome to /openstack/venvs/keystone-testing
Python main interpreter initialized at 0x1c47670
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 402621 bytes (393 KB) for 2 cores
*** Operational MODE: preforking ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 18180)
spawned uWSGI worker 1 (pid: 18183, cores: 1)
spawned uWSGI worker 2 (pid: 18184, cores: 1)
spawned uWSGI http 1 (pid: 18185)
!!! uWSGI process 18184 got Segmentation Fault !!!
*** backtrace of 18184 ***
/openstack/venvs/keystone-testing/bin/uwsgi(uwsgi_backtrace+0x2e) [0x46bd7e]
/openstack/venvs/keystone-testing/bin/uwsgi(uwsgi_segfault+0x21) [0x46c111]
/lib64/libc.so.6(+0x34950) [0x7f7e9d2f2950]
/lib64/libc.so.6(+0x90e5a) [0x7f7e9d34ee5a]
/lib64/libcrypto.so.1.0.0(+0x1201c9) [0x7f7e9e5621c9]
/lib64/libcrypto.so.1.0.0(lh_insert+0x42) [0x7f7e9e5624a2]
/lib64/libcrypto.so.1.0.0(OBJ_NAME_add+0x6f) [0x7f7e9e4b28ff]
/openstack/venvs/keystone-testing/lib/python2.7/site-packages/cryptography/hazmat/bindings/../../.libs/libssl-abb4988e.so.1.1(+0x2cbb5) [0x7f7e979debb5]
/lib64/libpthread.so.0(+0x6c13) [0x7f7e9f225c13]
/openstack/venvs/keystone-testing/lib/python2.7/site-packages/cryptography/hazmat/bindings/../../.libs/libcrypto-2ae7ec4c.so.1.1(CRYPTO_THREAD_run_once+0x9) [0x7f7e976c6399]
/openstack/venvs/keystone-testing/lib/python2.7/site-packages/cryptography/hazmat/bindings/../../.libs/libssl-abb4988e.so.1.1(OPENSSL_init_ssl+0x73) [0x7f7e979ded43]
/openstack/venvs/keystone-testing/lib/python2.7/site-packages/cryptography/hazmat/bindings/_openssl.so(+0x5a51d) [0x7f7e97c8151d]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5547) [0x7f7e9d98dac7]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x24c) [0x7f7e9d99330c]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x59d0) [0x7f7e9d98df50]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x24c) [0x7f7e9d99330c]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x59d0) [0x7f7e9d98df50]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x24c) [0x7f7e9d99330c]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCode+0x32) [0x7f7e9d9e8252]
/usr/lib64/libpython2.7.so.1.0(PyImport_ExecCodeModuleEx+0xb0) [0x7f7e9d9ec940]
/usr/lib64/libpython2.7.so.1.0(+0x150b5c) [0x7f7e9d9ecb5c]
/usr/lib64/libpython2.7.so.1.0(+0x10bdc4) [0x7f7e9d9a7dc4]
/usr/lib64/libpython2.7.so.1.0(PyImport_ImportModuleLevel+0x483) [0x7f7e9d9a84b3]
/usr/lib64/libpython2.7.so.1.0(+0xeb8cb) [0x7f7e9d9878cb]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x46) [0x7f7e9d91f986]
/usr/lib64/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x36) [0x7f7e9d988096]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3310) [0x7f7e9d98b890]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x24c) [0x7f7e9d99330c]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCode+0x32) [0x7f7e9d9e8252]
/usr/lib64/libpython2.7.so.1.0(PyImport_ExecCodeModuleEx+0xb0) [0x7f7e9d9ec940]
/usr/lib64/libpython2.7.so.1.0(+0x150b5c) [0x7f7e9d9ecb5c]
/usr/lib64/libpython2.7.so.1.0(+0x10bdc4) [0x7f7e9d9a7dc4]
/usr/lib64/libpython2.7.so.1.0(PyImport_ImportModuleLevel+0x3c7) [0x7f7e9d9a83f7]
/usr/lib64/libpython2.7.so.1.0(+0xeb8cb) [0x7f7e9d9878cb]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x46) [0x7f7e9d91f986]
/usr/lib64/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x36) [0x7f7e9d988096]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3310) [0x7f7e9d98b890]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x24c) [0x7f7e9d99330c]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCode+0x32) [0x7f7e9d9e8252]
/usr/lib64/libpython2.7.so.1.0(PyImport_ExecCodeModuleEx+0xb0) [0x7f7e9d9ec940]
/usr/lib64/libpython2.7.so.1.0(+0x150b5c) [0x7f7e9d9ecb5c]
/usr/lib64/libpython2.7.so.1.0(+0x1511d3) [0x7f7e9d9ed1d3]
/usr/lib64/libpython2.7.so.1.0(+0x10ba10) [0x7f7e9d9a7a10]
/usr/lib64/libpython2.7.so.1.0(PyImport_ImportModuleLevel+0x32e) [0x7f7e9d9a835e]
/usr/lib64/libpython2.7.so.1.0(+0xeb8cb) [0x7f7e9d9878cb]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x46) [0x7f7e9d91f986]
/usr/lib64/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x36) [0x7f7e9d988096]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3310) [0x7f7e9d98b890]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x24c) [0x7f7e9d99330c]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCode+0x32) [0x7f7e9d9e8252]
/usr/lib64/libpython2.7.so.1.0(PyImport_ExecCodeModuleEx+0xb0) [0x7f7e9d9ec940]
/usr/lib64/libpython2.7.so.1.0(+0x150b5c) [0x7f7e9d9ecb5c]
/usr/lib64/libpython2.7.so.1.0(+0x6b249) [0x7f7e9d907249]
/usr/lib64/libpython2.7.so.1.0(+0x1511d3) [0x7f7e9d9ed1d3]
/usr/lib64/libpython2.7.so.1.0(+0x10bc02) [0x7f7e9d9a7c02]
/usr/lib64/libpython2.7.so.1.0(PyImport_ImportModuleLevel+0x2b7) [0x7f7e9d9a82e7]
/usr/lib64/libpython2.7.so.1.0(+0xeb8cb) [0x7f7e9d9878cb]
/usr/lib64/libpython2.7.so.1.0(PyObject_Call+0x46) [0x7f7e9d91f986]
/usr/lib64/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x36) [0x7f7e9d988096]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x3310) [0x7f7e9d98b890]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x24c) [0x7f7e9d99330c]
/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCode+0x32) [0x7f7e9d9e8252]
/usr/lib64/libpython2.7.so.1.0(PyImport_ExecCodeModuleEx+0xb0) [0x7f7e9d9ec940]
/usr/lib64/libpython2.7.so.1.0(+0x150b5c) [0x7f7e9d9ecb5c]
*** end of backtrace ***

The downstream bug report is at https://bugs.launchpad.net/openstack-ansible/+bug/1705521

Let me know if you need me to provide more information

@alex
Member

alex commented Jul 21, 2017

Are there steps we could follow to be able to reproduce this ourselves?

@alex
Member

alex commented Jul 21, 2017

Also, are you installing from the wheels we provide on PyPI, or building cryptography yourself?

@alex
Member

alex commented Jul 21, 2017

One more follow-up: assuming you're using our wheels (it looks like you are), can you try installing with --no-binary cryptography so we can see whether the issue is in the code or in the wheels?

@hwoarang
Author

Hi @alex

I haven't found a simple reproducer yet. The one that triggers the problem reliably takes quite a bit of time, but here it is nevertheless:

git clone https://git.openstack.org/openstack/openstack-ansible-os_keystone
cd openstack-ansible-os_keystone
vagrant up opensuse422
[wait for it to fail]
vagrant ssh opensuse422
lxc-attach --name keystone1
[look at /var/log/keystone/ uwsgi logs for the backtraces]

There is a venv installed in /openstack/venvs/keystone-testing that uwsgi uses. uwsgi is installed at /openstack/venvs/keystone-testing/bin/uwsgi

I will see if I can create a simpler reproducer next week

Regarding the wheels question, I will have to bring that environment back up, so I will let you know early next week as well.

@dirkmueller

dirkmueller commented Jul 24, 2017

--no-binary solves the issue. The conflict appears to happen when two modules load two different versions of libssl.so into the same process context: the one from the manylinux1 cryptography wheel (OpenSSL 1.1 based) and the libssl.so from python-2.7 (OpenSSL 1.0 based in the case of openSUSE Leap 42.2/3).

I believe this should actually be reproducible on other distributions as well (though I haven't tried); it just isn't exposed in infra because for all other distributions a wheel mirror is provided that carries locally rebuilt versions of the wheels, avoiding the incompatibility.

@reaperhulk
Member

reaperhulk commented Jul 24, 2017 via email

@hwoarang
Author

I can also confirm that using --no-binary fixes the problem in my testcase as well

@njsmith
Contributor

njsmith commented Jul 24, 2017

Yeah, the wheels have countermeasures that are supposed to prevent this from happening, but clearly they aren't working somehow – we should not see libssl-abb4988e.so.1.1 calling into the system libcrypto. It's also weird that you're saying it's load dependent and hard to reproduce, given that the crash is happening in OPENSSL_init_ssl.

Can you try reproducing with the env vars LD_DEBUG=all LD_DEBUG_OUTPUT=/some/path and then upload /some/path.pid-of-crashing-process somewhere?

@njsmith
Contributor

njsmith commented Jul 24, 2017

Oh, maybe I misread and it's not load dependent, just slow to reproduce because you have to install opensuse?

@hwoarang
Author

Yeah, the current reproducer (at least for me; maybe @dirkmueller has a simpler one) requires using the Vagrantfile to bring up a bunch of LXC containers with openstack services and waiting for it to fail. I am not sure if opensuse is a hard requirement, or whether other distros with openssl-1.0 may suffer from the same issue.

@njsmith
Contributor

njsmith commented Jul 24, 2017

Oh here's a thought – is the main uwsgi binary linked to libcrypto? What does ldd uwsgi say?

@tiran
Contributor

tiran commented Jul 24, 2017

It's hard to see what is actually going on. The shared libraries in the wheel lack debug information. The small offset in the function call OPENSSL_init_ssl+0x73 suggests that it may fail in the path OPENSSL_init_crypto or RUN_ONCE(&ssl_base, ossl_init_ssl_base). The function ossl_init_ssl_base initializes all algorithms: ciphers, digests, etc. Ciphers are registered by calling the EVP_add_cipher() function, which starts with a call to OBJ_NAME_add(). This still does not explain why we see a call into /lib64/libcrypto.so.

OPENSSL_init_crypto may be an explanation. Depending on your installation and configuration, it may load additional shared libraries with dynamic engines for OpenSSL. These dynamic libraries are linked against your OS's libcrypto.so, which might explain why we see libcrypto.so here. If I'm right, then an OpenSSL build with OPENSSL_NO_DYNAMIC_ENGINE would solve the issue.

@hwoarang
Author

@njsmith yes it is:

readelf -d /openstack/venvs/keystone-testing/bin/uwsgi | grep NEEDED 
 0x0000000000000001 (NEEDED)             Shared library: [libpthread.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libm.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [libdl.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libz.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libssl.so.1.0.0]
 0x0000000000000001 (NEEDED)             Shared library: [libcrypto.so.1.0.0]
 0x0000000000000001 (NEEDED)             Shared library: [libxml2.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [liblzma.so.5]
 0x0000000000000001 (NEEDED)             Shared library: [libutil.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libpython2.7.so.1.0]
 0x0000000000000001 (NEEDED)             Shared library: [libcrypt.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]

uwsgi is installed from pip

~$ pip freeze | grep -i uWSGI
uWSGI==2.0.15

@robvdl

robvdl commented Jul 24, 2017

Also noticed something odd when cryptography 2.0 came out: our Debian packaging of a Python project failed to build (on Ubuntu Trusty), which might be related. I tried for hours to get it going but eventually reverted to cryptography 1.9.

@njsmith
Contributor

njsmith commented Jul 24, 2017

0x0000000000000001 (NEEDED) Shared library: [libcrypto.so.1.0.0]

Ugh, I was right. I hate being right.

So here's what's going on: because the top-level uwsgi executable is linked to libcrypto, it's effectively doing an LD_PRELOAD=/.../libcrypto.so.1.0.0 on every module that ever gets loaded into that uwsgi. The cryptography wheel is properly loading its vendored openssl libraries. Since it uses an explicit dlsym to find the first symbol, that goes to the right place; it then calls CRYPTO_THREAD_run_once, which is a new symbol that doesn't exist in the old system libcrypto, so it gets found in the vendored libcrypto like we want; and then it calls a function in the vendored libssl that it got passed by an explicit function pointer... but eventually we attempt to make a regular linker-resolved call to a symbol that's present in both the system libcrypto and the vendored libcrypto, and because the system libcrypto got quasi-LD_PRELOADed, it wins, and we lose.

This is really bad. Basically uwsgi can't work with any manylinux wheel that uses openssl (or libxml2 or liblzma for that matter).

I can see two possible solutions.

Option 1: right now, auditwheel tries to avoid this kind of problem by patching vendored libraries to have unique names. This works great until someone clobbers it by jamming stuff into the global symbol namespace like this. We could avoid this if auditwheel also patched libraries to give every vendored symbol a unique name. Upside: this would fix this class of problems for everyone, forever. Downside: auditwheel leans on patchelf to do the heavy lifting here, and patchelf does not have the ability to rename individual symbols. In fact, AFAICT there does not currently exist any code out there that can rename symbols in ELF files. I've actually implemented this for Mach-O (don't ask), and in principle it's doable for ELF, but ELF is a much more complicated format than Mach-O; for example, ELF symbols are stored in a hash table that would need to be rebuilt. It's possible patchelf already solves the hardest parts here – I'm genuinely not sure – but it's definitely not an off-the-shelf solution.

Option 2: convince uwsgi to stop linking stuff at the top level like this. I have no idea how amenable the authors would be to fixing this. I guess you could put the actual code in a libuwsgi.so and then have the main uwsgi binary just do handle = dlopen("libuwsgi.so", RTLD_LOCAL); real_main = dlsym(handle, "real_main"); real_main(argc, argv);? Anything that moves those DT_NEEDED entries off of the top-level executable. (Though as a nice wrinkle, you do have to link against libpython or load it with RTLD_GLOBAL, because python extension modules expect to find the python C API symbols magically injected into their namespace whether they link to libpython or not.)
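
To make that concrete, here is a minimal sketch of that loader-stub idea, under the assumptions in the paragraph above (libuwsgi.so and real_main are hypothetical names, not actual uwsgi symbols):

/* stub.c -- hypothetical thin launcher; build with: cc -o uwsgi stub.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* libpython has to be globally visible: extension modules expect the
       CPython API symbols to be injected into their namespace. */
    if (!dlopen("libpython2.7.so.1.0", RTLD_NOW | RTLD_GLOBAL)) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    /* Everything else, including whatever libssl/libcrypto libuwsgi.so is
       linked against, stays RTLD_LOCAL and out of the global scope. */
    void *handle = dlopen("libuwsgi.so", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    int (*real_main)(int, char **) =
        (int (*)(int, char **))dlsym(handle, "real_main");
    if (!real_main) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    return real_main(argc, argv);
}

The executable itself then carries DT_NEEDED entries only for libdl and libc, so nothing openssl-related lands in the global symbol namespace.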

@reaperhulk
Member

So there needs to be an issue filed with uwsgi to discuss whether they'd be interested in changing their shared-library loading behavior to accommodate how manylinux1 needs things to work. Should we also file an issue on the manylinux repo to discuss symbol name mangling (and see if anybody wants to heroically level up patchelf's capabilities)? Unfortunately, it appears the only action cryptography can take at this time is to add a FAQ entry discussing the problems with uwsgi and manylinux1.

@njsmith
Contributor

njsmith commented Jul 24, 2017

@robvdl: hard to know if that's related from your description – I'd suggest filing a separate bug with more details?

@reaperhulk: yeah, that all sounds right, though probably the auditwheel repo would be a better place than manylinux.

@reaperhulk
Member

@njsmith I went ahead and filed the auditwheel issue -- do you want me to try to summarize your findings and file with uwsgi, or would you rather do it yourself?

@alex
Member

alex commented Jul 24, 2017

I'm not a linker wizard, but is it really correct that the parent binary having already loaded a library with a given name "supersedes" the local names in the manylinux-produced .so?

Would this be solved by statically linking OpenSSL into our wheel, instead of RTLD_LOCAL-linking it?

openstack-gerrit pushed a commit to openstack/openstack-ansible-tests that referenced this issue Jul 24, 2017
cryptography may bundle openssl in the wheel and that causes symbol
conflicts if a different openssl is provided by the distribution.
As such, it's probably safer to re-build cryptography ourselves just
to be sure that the correct distro libraries are used. We may want to
revert that once we start building wheel packages for openSUSE and
distribute them in the OpenStack mirrors since every distribution
will then get a proper wheel file for its needs. See related review
https://review.openstack.org/#/c/486305/

Closes-Bug: 1705521
Link: pyca/cryptography#3804
Change-Id: I7e88935acda580d8522a1e6927ea498431d78bda
@reaperhulk
Member

It seems like statically linking might resolve it? That would be relatively easy to confirm as well assuming we can construct an environment that replicates this. I'll take a look when I'm back home in the next few days.

@njsmith
Contributor

njsmith commented Jul 24, 2017

@alex:

is it really correct that the parent binary having already loaded a library with a given name "supersedes" the local names in the manylinux-produced .so?

The canonical reference here is Drepper's DSO howto, but basically the ELF symbol resolution rules are:

  • When looking for a symbol there are exactly two relevant namespaces, global and local, no nested scopes.

  • global contains LD_PRELOAD symbols, symbols exported by the top-level binary (the thing passed to exec), the transitive closure of the libraries it's linked to (DT_NEEDED entries) in breadth-first order, and any .so loaded with RTLD_GLOBAL plus their transitive closures.

  • local only exists for libraries loaded with RTLD_LOCAL, and contains the symbols exported by the library being loaded, plus its transitive closure of DT_NEEDED libraries.

  • Search order is global before local.

Yes, this is bass-ackwards from everything else in computing. Yes, this is really confusing because things that seem like they shouldn't matter (linking libcrypto vs dlopening libcrypto, linking libcrypto from the binary versus linking from a dlopened library) actually matter a lot.

AFAICT the underlying principle here is that the ELF designers at each point asked "which of these options gives us more opportunities for monkey-patching shared libraries?" and then that's what they did. I guess this is what happens when your infrastructure is maintained by long-term support companies.
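
If it helps to see those rules bite in isolation, here is a self-contained toy (my own construction, not code from this thread) that reproduces the preemption, assuming default GCC semantic interposition (no -Bsymbolic or -fno-semantic-interposition):

/* system.c -- stands in for the distro libcrypto linked by the executable */
const char *who_am_i(void) { return "system copy"; }

/* vendored.c -- stands in for the wheel's vendored libcrypto */
const char *who_am_i(void) { return "vendored copy"; }
const char *vendored_entry(void) { return who_am_i(); } /* call goes via PLT */

/* main.c -- stands in for the process dlopening the extension module */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *h = dlopen("./libvendored.so", RTLD_NOW | RTLD_LOCAL);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    const char *(*entry)(void) =
        (const char *(*)(void))dlsym(h, "vendored_entry");
    if (!entry) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    /* The global scope (executable plus libsystem.so) is searched before
       libvendored.so's own local scope, so the vendored library's internal
       call to who_am_i() is preempted by the system copy. */
    printf("%s\n", entry()); /* prints "system copy" */
    return 0;
}

/* Build and run:
     cc -shared -fPIC -o libsystem.so system.c
     cc -shared -fPIC -o libvendored.so vendored.c
     cc -o demo main.c -L. -Wl,--no-as-needed -lsystem -ldl
     LD_LIBRARY_PATH=. ./demo
   --no-as-needed keeps libsystem.so in the executable's DT_NEEDED list even
   though main.c never references it, mirroring how uwsgi links libcrypto. */

Drop -lsystem from the final link and the demo prints "vendored copy" instead, which is the behavior the manylinux renaming countermeasures rely on.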

Would this be solved by statically linking OpenSSL into our wheel, instead of RTLD_LOCAL-linking it?

Yes, this would work, though obviously this would require manual hacks to your build system and would only fix this one case, rather than being a general solution.

@alex
Member

alex commented Jul 24, 2017

Ok, at least I understood correctly.

I am pro-us-using-static-linking since that will resolve this consistently.

@njsmith
Contributor

njsmith commented Jul 25, 2017

uwsgi bug: unbit/uwsgi#1590

@alex
Member

alex commented Jul 25, 2017

#3811 solves this for our next release.

@alex alex closed this as completed Jul 25, 2017
@alex alex added this to the Twenty first release milestone Jul 25, 2017
openstack-gerrit pushed a commit to openstack/openstack that referenced this issue Jul 26, 2017
Project: openstack/requirements  6ae571e013b6a802bd66c66bbcc7f4370a60b718

Blacklist cryptography 2.0

See pyca/cryptography#3804 --
cryptography introduced a manylinux1 wheel that is incompatible
with uwsgi.

Related-Bug: 1705521
Link: pyca/cryptography#3804

Change-Id: I9e0458742730743d1ba79349e9996beeed0be24f
openstack-gerrit pushed a commit to openstack/requirements that referenced this issue Jul 26, 2017
See pyca/cryptography#3804 --
cryptography introduced a manylinux1 wheel that is incompatible
with uwsgi.

Related-Bug: 1705521
Link: pyca/cryptography#3804

Change-Id: I9e0458742730743d1ba79349e9996beeed0be24f
@reaperhulk
Member

We've released 2.0.1; could you check whether this segfault still occurs with uwsgi now?

@dirkmueller

Yep, will do tomorrow

@njsmith
Contributor

njsmith commented Jul 27, 2017

From #3824 it sounds like 2.0.1 did not solve this, and in fact now we're confused all over again about why this is broken (#3824 (comment)). And in any case it sounds like a 2.0.2 is imminent.

openstack-gerrit pushed a commit to openstack/openstack-ansible that referenced this issue Jul 28, 2017
cryptography may bundle openssl in the wheel and that causes symbol
conflicts if a different openssl is provided by the distribution.
As such, it's probably safer to re-build cryptography ourselves just
to be sure that the correct distro libraries are used. This has been
addressed in openstack-ansible-tests/test-vars.yaml
(https://review.openstack.org/#/c/486580/) to fix the CI tests, but the
problem is also present on regular deployments, so we set it in the
group_variables for the repo_all group of hosts so that it's built from
source in the wheel repository.

Related-Bug: 1705521
Link: pyca/cryptography#3804
Change-Id: I54ba3c1fa48a2f4c633930bc7e8cc65397f86659
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 7, 2020