
OCSP stapling handling #107

Closed

HLeithner opened this issue Jul 23, 2016 · 9 comments

@HLeithner

What's the best way to use OCSP stapling?

The OCSP response expires regularly, so it has to be updated. Do I really have to do this in an external job?
What if the response expires and Hitch sends the expired OCSP response to the browser? Is this a good idea? Wouldn't that mean the browser stops showing the webpage?

IIRC, Apache's mod_ssl handles OCSP stapling completely by itself, including refreshing the response.

@daghf
Member

daghf commented Jul 27, 2016

Hi Harald

I've pushed various improvements to the current master branch, including automated refreshes of OCSP responses. Would you mind giving that a try?

In the updated version, you can let Hitch take care of the initial retrieval of the OCSP response as well. To turn it on, you just need to supply a directory via --ocsp-dir=mydir where the responses are cached, and Hitch will take care of fetching and refreshing automagically. Take a look at docs/configuration.md and please report back if you run into any issues.
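For example, a minimal setup might look like the following sketch (addresses, the pem path and the cache directory are placeholders; the directory must exist and be writable by the user Hitch runs as):

# minimal hitch.conf sketch -- paths and addresses are examples only
frontend = "[*]:443"
backend = "[127.0.0.1]:8080"
pem-file = "/etc/hitch/example.com.pem"

# cache directory for fetched OCSP responses;
# Hitch retrieves and refreshes the responses on its own
ocsp-dir = "/var/lib/hitch-ocsp"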

@daghf
Member

daghf commented Jul 27, 2016

Also I should mention that in the updated version, Hitch will no longer staple expired OCSP responses, although other than wasting bits on the wire it doesn't make much of a difference to browsers.

@HLeithner changed the title from "OCSP stapling hanlding" to "OCSP stapling handling" on Aug 1, 2016
@HLeithner
Author

HLeithner commented Aug 1, 2016

Hi,

I tested it today with commit 31e4eaf. After restarting Hitch I get an endless segfault loop.

With ocsp-dir disabled it works fine.

Config:

ocsp-dir = "/var/lib/hitch-ocsp"
ocsp-connect-tmo = 5
ocsp-resp-tmo = 10
ocsp-verify-staple = on

The directory exists, is owned by the hitch user and group, and has permissions drwxr-xr-x.

Aug 01 11:24:17 front01 hitch[2587]: {core} hitch 1.3.0-beta3 starting
Aug 01 11:24:17 front01 hitch[2587]: {core} Loading certificate pem files (1)
Aug 01 11:24:17 front01 hitch[2587]: {core} Daemonized as pid 2588.
Aug 01 11:24:17 front01 hitch[2588]: {core} hitch 1.3.0-beta3 initialization complete
Aug 01 11:24:17 front01 hitch[2590]: {core} Process 1 online
Aug 01 11:24:17 front01 hitch[2588]: {core} Child 2590 was terminated by signal 11.
Aug 01 11:24:17 front01 hitch[2592]: {core} Process 1 online
Aug 01 11:24:17 front01 hitch[2588]: {core} Child 2592 was terminated by signal 11.
Aug 01 11:24:17 front01 hitch[2589]: {core} Process 0 online
Aug 01 11:24:17 front01 hitch[2588]: {core} Child 2591 was terminated by signal 11.
Aug 01 11:24:17 front01 hitch[2588]: {core} Child 2589 was terminated by signal 11.
Aug 01 11:24:17 front01 hitch[2595]: {core} Process 0 online
Aug 01 11:24:17 front01 hitch[2588]: {core} Child 2594 was terminated by signal 11.
Aug 01 11:24:17 front01 kernel: hitch[2590]: segfault at 40 ip 00000000004123c2 sp 00007ffd1599cbb0 error 4 in hitch[400000+25000]
Aug 01 11:24:17 front01 kernel: hitch[2592]: segfault at 40 ip 00000000004123c2 sp 00007ffd1599cba0 error 4 in hitch[400000+25000]
Aug 01 11:24:17 front01 kernel: hitch[2591]: segfault at 20 ip 000000000040da08 sp 00007ffd1599cce0 error 4 in hitch[400000+25000]
Aug 01 11:24:17 front01 kernel: hitch[2589]: segfault at 40 ip 00000000004123c2 sp 00007ffd1599cbb0 error 4 in hitch[400000+25000]
Aug 01 11:24:17 front01 kernel: hitch[2594]: segfault at 20 ip 000000000040da08 sp 00007ffd1599cce0 error 4 in hitch[400000+25000]
Aug 01 11:24:17 front01 kernel: hitch[2595]: segfault at 40 ip 00000000004123c2 sp 00007ffd1599cba0 error 4 in hitch[400000+25000]
Aug 01 11:24:17 front01 kernel: hitch[2596]: segfault at 20 ip 000000000040da08 sp 00007ffd1599cce0 error 4 in hitch[400000+25000]
Aug 01 11:24:17 front01 hitch[2588]: {core} Child 2595 was terminated by signal 11.
Aug 01 11:24:17 front01 hitch[2597]: {core} Process 0 online
Aug 01 11:24:17 front01 hitch[2588]: {core} Child 2597 was terminated by signal 11.
Aug 01 11:24:17 front01 hitch[2593]: {core} Process 1 online
Aug 01 11:24:17 front01 hitch[2588]: {core} Child 2596 was terminated by signal 11.

@daghf
Member

daghf commented Aug 1, 2016

Ouch - that's certainly not supposed to happen. I've tested with a similar config, and I'm not able to reproduce the crash here.

Would it be possible to get a backtrace?

Could you please try the following:

# ulimit -c unlimited

then start hitch and wait for it to crash, followed by

# gdb path/to/hitch-binary corefile

and then issue bt all at the gdb prompt.
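Put together, the whole sequence might look roughly like this (the binary and core file paths are placeholders, and where the core file ends up depends on the system's core_pattern setting):

# enable core dumps in the shell that starts hitch
ulimit -c unlimited
/usr/sbin/hitch --config=/etc/hitch/hitch.conf
# ... wait for the segfault ...
gdb /usr/sbin/hitch /path/to/corefile
(gdb) bt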

Thanks a lot for testing BTW, this is very valuable.

@daghf self-assigned this on Aug 1, 2016
@daghf
Member

daghf commented Aug 15, 2016

Hi @HLeithner

Have you been able to get a coredump from this?

@HLeithner
Author

@daghf Sorry, I missed this.
Here's the backtrace:

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00000000004123c2 in handle_connections (mgt_fd=5) at hitch.c:2998
2998                    if (default_ctx->ev_staple)
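For what it's worth, the faulting address together with the line above suggests default_ctx is NULL at that point. Purely as an illustration (this is not the actual fix that went in with 6f6e67d), a guard of the following kind would avoid the NULL dereference:

/* hypothetical sketch, not the committed fix: only dereference
 * default_ctx once it has actually been set up */
if (default_ctx != NULL && default_ctx->ev_staple) {
	/* ... existing stapling handling ... */
}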

@daghf closed this as completed in 6f6e67d on Aug 15, 2016
@daghf
Member

daghf commented Aug 15, 2016

Thanks a lot!

Pushed a fix now - would you mind giving the latest HEAD a try?
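For reference, rebuilding from the latest HEAD typically follows the usual autotools flow for the Hitch repository (the clone URL and script name below are assumptions; adjust to your environment and packaging):

git clone https://github.com/varnish/hitch.git   # or git pull in an existing checkout
cd hitch
./bootstrap        # generate ./configure (needs autoconf, automake, libtool)
./configure
make
sudo make install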

@HLeithner
Author

Sorry, I couldn't test earlier. It seems to work; I only have a problem with a Let's Encrypt certificate.
Error message:
{ocsp} OCSP_sendreq_nbio failed for ocsp.int-x3.letsencrypt.org:80.

If it's not related, I'll create a new issue.

@lkarsten
Contributor

@HLeithner I'm also seeing this when running with a Let's Encrypt certificate. Not sure if it's a problem with their OCSP server or with Hitch, but I filed another issue to track it: #113

I believe the current issue is resolved and should be kept closed.
