get rid of some panics (#1526)
* client: simplify (*HostClient).do()

Remove an allocation in favour of deferring a call to release the
response.

* client: remove panic in dialAddr

Return an error instead of panicking if the user supplied a nonsensical
DialFunc.
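
For illustration, a minimal sketch (assumed usage, not part of this commit) of a broken DialFunc that returns (nil, nil); the client now surfaces an error instead of panicking inside dialAddr:

package main

import (
	"fmt"
	"net"

	"github.com/valyala/fasthttp"
)

func main() {
	c := &fasthttp.HostClient{
		Addr: "example.com:80",
		// Nonsensical DialFunc: neither a connection nor an error.
		Dial: func(addr string) (net.Conn, error) { return nil, nil },
	}

	req := fasthttp.AcquireRequest()
	resp := fasthttp.AcquireResponse()
	defer fasthttp.ReleaseRequest(req)
	defer fasthttp.ReleaseResponse(resp)

	req.SetRequestURI("http://example.com/")
	if err := c.Do(req, resp); err != nil {
		fmt.Println("error instead of panic:", err)
	}
}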

* compression: remove panic on invalid compression level

If a compression level outside gzip's valid range is provided, fasthttp
will panic. It is better to handle this for the user by limiting the
level to the minimum or maximum value, depending on the direction in
which the limit was exceeded.

Clamp the gzip level so it always stays between gzip.HuffmanOnly and
gzip.BestCompression.
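
For illustration, a minimal usage sketch (not part of this commit; the handler body, port and level are made up) of the behaviour callers now see when a gzip-accepting client triggers compression:

package main

import (
	"log"

	"github.com/valyala/fasthttp"
)

func main() {
	// Level 99 is far above gzip.BestCompression (9). With this change it is
	// clamped inside acquireRealGzipWriter instead of panicking at runtime.
	h := fasthttp.CompressHandlerLevel(func(ctx *fasthttp.RequestCtx) {
		ctx.SetBodyString("hello, world")
	}, 99)

	log.Fatal(fasthttp.ListenAndServe(":8080", h))
}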

* peripconn: remove panic on negative connection count

When the count goes negative while unregistering a connection, a panic
is raised even though data integrity is not at risk.

Replace the panic() with a simple clamp so the value never falls below
its expected lower bound.

References: #1504

* compress: remove error on failed nonblocking writes

Since there is no way of handling or even logging non-critical errors in
stateless non-blocking write calls, just drop them and hope the user
notices and tries again.

* workerPool: remove panic on redundant Start and Stop calls

Instead of panicking for invalid behaviour, it's preferable to just turn
the function into a noop.

* http: remove panic on invalid form boundary
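
For illustration, a minimal sketch (assumed usage, not part of this commit) showing that an empty boundary now surfaces as an error from WriteMultipartForm rather than a panic:

package main

import (
	"bytes"
	"fmt"
	"mime/multipart"

	"github.com/valyala/fasthttp"
)

func main() {
	var buf bytes.Buffer
	form := &multipart.Form{Value: map[string][]string{"key": {"value"}}}

	// Empty boundary: previously a panic, now a regular error.
	if err := fasthttp.WriteMultipartForm(&buf, form, ""); err != nil {
		fmt.Println("error instead of panic:", err)
	}
}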

* http: remove panic on negative reads

Since bufio already panics on negative reads, it is not necessary to do
so here as well. If the read length is zero and, for some reason, no
error is returned, readBodyIdentity and appendBodyFixedSize now return
an error in these cases.

Link: https://github.com/golang/go/blob/851f6fd61425c810959c7ab51e6dc86f8a63c970/src/bufio/bufio.go#L246
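
For illustration, a minimal sketch (not part of this commit; zeroReader is made up) of the case being guarded against: bufio.Reader can itself return (0, nil) when the underlying reader does.

package main

import (
	"bufio"
	"fmt"
)

// zeroReader models a misbehaving io.Reader that keeps returning (0, nil).
type zeroReader struct{}

func (zeroReader) Read(p []byte) (int, error) { return 0, nil }

func main() {
	r := bufio.NewReader(zeroReader{})
	buf := make([]byte, 8)
	n, err := r.Read(buf)
	fmt.Println(n, err) // prints "0 <nil>", the case that now returns an error
}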

* fs: remove panic on negative reader count

When the count goes negative while unregistering a reader, a panic is
thrown even though data integrity is not at risk.

Replace the panic() with a simple clamp so the value never falls below
its expected lower bound.

* server: remove panic in favour of a segfault

Panicking with "BUG: " obscures the error. As the segfault causes a
panic anyway, just let the chaos unfold.

* server: remove panic in favour of returning an error

Writing to a timed-out response does not endanger data integrity; the
write now simply fails with an error.

* chore: add comments to all panics

* chore: fix minor typo
mpldr authored Mar 30, 2023
1 parent 5209cc3 commit d0f2727
Showing 13 changed files with 60 additions and 63 deletions.
5 changes: 1 addition & 4 deletions brotli.go
@@ -138,10 +138,7 @@ func nonblockingWriteBrotli(ctxv interface{}) {
ctx := ctxv.(*compressCtx)
zw := acquireRealBrotliWriter(ctx.w, ctx.level)

_, err := zw.Write(ctx.p)
if err != nil {
panic(fmt.Sprintf("BUG: brotli.Writer.Write for len(p)=%d returned unexpected error: %v", len(ctx.p), err))
}
zw.Write(ctx.p) //nolint:errcheck // no way to handle this error anyway

releaseRealBrotliWriter(zw, ctx.level)
}
3 changes: 3 additions & 0 deletions bytesconv.go
@@ -79,6 +79,7 @@ func ParseIPv4(dst net.IP, ipStr []byte) (net.IP, error) {
copy(dst, net.IPv4zero)
dst = dst.To4()
if dst == nil {
// developer sanity-check
panic("BUG: dst must not be nil")
}

@@ -126,6 +127,7 @@ func ParseHTTPDate(date []byte) (time.Time, error) {
// AppendUint appends n to dst and returns the extended dst.
func AppendUint(dst []byte, n int) []byte {
if n < 0 {
// developer sanity-check
panic("BUG: int must be positive")
}

@@ -281,6 +283,7 @@ var hexIntBufPool sync.Pool

func writeHexInt(w *bufio.Writer, n int) error {
if n < 0 {
// developer sanity-check
panic("BUG: int must be positive")
}

11 changes: 4 additions & 7 deletions client.go
@@ -1294,26 +1294,23 @@ func isIdempotent(req *Request) bool {
}

func (c *HostClient) do(req *Request, resp *Response) (bool, error) {
nilResp := false
if resp == nil {
nilResp = true
resp = AcquireResponse()
defer ReleaseResponse(resp)

regorov (Apr 18, 2023):

As far as I know, defer adds overhead! I would revert the changes in this function.

erikdubbelboer (Collaborator, Apr 19, 2023):

No, defer doesn't have any overhead. It used to in the past, but hasn't for many versions.

regorov (Apr 19, 2023):

I would argue the point a bit here. Please take a look at the generated code:
https://gist.github.com/regorov/168cdc4bf820dc8b57c00f810004886c

This article may also be useful: https://medium.com/i0exception/runtime-overhead-of-using-defer-in-go-7140d5c40e32

erikdubbelboer (Collaborator, Apr 19, 2023):

That article is from before all the performance improvements to defer, such as https://go-review.googlesource.com/c/go/+/202340

Of course defer is going to result in slightly more instructions, but the overhead is so small that you won't be able to see the impact here at all. It's not like this function is going to be called millions of times per second.

regorov (Apr 19, 2023):

I agree that today, on amd64, the overhead between "with" and "without" defer is smaller than it was a while ago. But what about ARM? Was it improved as well?

erikdubbelboer (Collaborator, Apr 19, 2023):

Probably, since the optimizations weren't about the instructions generated but about the way defer works. They made the compiler smarter so that static defers are handled at compile time instead of at runtime.
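
For reference, a rough micro-benchmark sketch (illustrative only, not part of this thread; the pool and names are made up) that could be dropped into a _test.go file and run with go test -bench=. to compare the two shapes being discussed:

package main

import (
	"sync"
	"testing"
)

type obj struct{ n int }

var pool = sync.Pool{New: func() interface{} { return new(obj) }}

// withDefer mirrors the new shape: the release call is deferred.
func withDefer() {
	o := pool.Get().(*obj)
	defer pool.Put(o)
	o.n++
}

// withoutDefer mirrors the old shape: the release call is explicit.
func withoutDefer() {
	o := pool.Get().(*obj)
	o.n++
	pool.Put(o)
}

func BenchmarkWithDefer(b *testing.B) {
	for i := 0; i < b.N; i++ {
		withDefer()
	}
}

func BenchmarkWithoutDefer(b *testing.B) {
	for i := 0; i < b.N; i++ {
		withoutDefer()
	}
}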

}

ok, err := c.doNonNilReqResp(req, resp)

if nilResp {
ReleaseResponse(resp)
}

return ok, err
}

func (c *HostClient) doNonNilReqResp(req *Request, resp *Response) (bool, error) {
if req == nil {
// for debugging purposes
panic("BUG: req cannot be nil")
}
if resp == nil {
// for debugging purposes
panic("BUG: resp cannot be nil")
}

@@ -1994,7 +1991,7 @@ func dialAddr(addr string, dial DialFunc, dialDualStack, isTLS bool, tlsConfig *
return nil, err
}
if conn == nil {
panic("BUG: DialFunc returned (nil, nil)")
return nil, errors.New("dialling unsuccessful. Please report this bug!")
}

// We assume that any conn that has the Handshake() method is a TLS conn already.
29 changes: 19 additions & 10 deletions compress.go
@@ -66,6 +66,7 @@ func releaseFlateReader(zr io.ReadCloser) {
func resetFlateReader(zr io.ReadCloser, r io.Reader) error {
zrr, ok := zr.(zlib.Resetter)
if !ok {
// sanity check. should only be called with a zlib.Reader
panic("BUG: zlib.Reader doesn't implement zlib.Resetter???")
}
return zrr.Reset(r, nil)
@@ -101,7 +102,14 @@ func acquireRealGzipWriter(w io.Writer, level int) *gzip.Writer {
if v == nil {
zw, err := gzip.NewWriterLevel(w, level)
if err != nil {
panic(fmt.Sprintf("BUG: unexpected error from gzip.NewWriterLevel(%d): %v", level, err))
// gzip.NewWriterLevel only errors for invalid
// compression levels. Clamp it to be min or max.
if level < gzip.HuffmanOnly {
level = gzip.HuffmanOnly
} else {
level = gzip.BestCompression
}
zw, _ = gzip.NewWriterLevel(w, level)
}
return zw
}
@@ -175,10 +183,7 @@ func nonblockingWriteGzip(ctxv interface{}) {
ctx := ctxv.(*compressCtx)
zw := acquireRealGzipWriter(ctx.w, ctx.level)

_, err := zw.Write(ctx.p)
if err != nil {
panic(fmt.Sprintf("BUG: gzip.Writer.Write for len(p)=%d returned unexpected error: %v", len(ctx.p), err))
}
zw.Write(ctx.p) //nolint:errcheck // no way to handle this error anyway

releaseRealGzipWriter(zw, ctx.level)
}
@@ -271,10 +276,7 @@ func nonblockingWriteDeflate(ctxv interface{}) {
ctx := ctxv.(*compressCtx)
zw := acquireRealDeflateWriter(ctx.w, ctx.level)

_, err := zw.Write(ctx.p)
if err != nil {
panic(fmt.Sprintf("BUG: zlib.Writer.Write for len(p)=%d returned unexpected error: %v", len(ctx.p), err))
}
zw.Write(ctx.p) //nolint:errcheck // no way to handle this error anyway

releaseRealDeflateWriter(zw, ctx.level)
}
@@ -379,7 +381,14 @@ func acquireRealDeflateWriter(w io.Writer, level int) *zlib.Writer {
if v == nil {
zw, err := zlib.NewWriterLevel(w, level)
if err != nil {
panic(fmt.Sprintf("BUG: unexpected error from zlib.NewWriterLevel(%d): %v", level, err))
// zlib.NewWriterLevel only errors for invalid
// compression levels. Clamp it to be min or max.
if level < zlib.HuffmanOnly {
level = zlib.HuffmanOnly
} else {
level = zlib.BestCompression
}
zw, _ = zlib.NewWriterLevel(w, level)
}
return zw
}
3 changes: 2 additions & 1 deletion fs.go
@@ -602,7 +602,7 @@ func (ff *fsFile) decReadersCount() {
ff.h.cacheLock.Lock()
ff.readersCount--
if ff.readersCount < 0 {
panic("BUG: negative fsFile.readersCount!")
ff.readersCount = 0
}
ff.h.cacheLock.Unlock()
}
@@ -1395,6 +1395,7 @@ func readFileHeader(f *os.File, compressed bool, fileEncoding string) ([]byte, e
func stripLeadingSlashes(path []byte, stripSlashes int) []byte {
for stripSlashes > 0 && len(path) > 0 {
if path[0] != '/' {
// developer sanity-check
panic("BUG: path must start with slash")
}
n := bytes.IndexByte(path[1:], '/')
26 changes: 15 additions & 11 deletions http.go
@@ -963,7 +963,7 @@ func WriteMultipartForm(w io.Writer, f *multipart.Form, boundary string) error {
// Do not care about memory allocations here, since multipart
// form processing is slow.
if len(boundary) == 0 {
panic("BUG: form boundary cannot be empty")
return errors.New("form boundary cannot be empty")
}

mw := multipart.NewWriter(w)
@@ -2134,13 +2134,14 @@ func readBodyIdentity(r *bufio.Reader, maxBodySize int, dst []byte) ([]byte, err
for {
nn, err := r.Read(dst[offset:])
if nn <= 0 {
if err != nil {
if err == io.EOF {
return dst[:offset], nil
}
switch {
case errors.Is(err, io.EOF):
return dst[:offset], nil
case err != nil:
return dst[:offset], err
default:
return dst[:offset], fmt.Errorf("bufio.Read() returned (%d, nil)", nn)
}
panic(fmt.Sprintf("BUG: bufio.Read() returned (%d, nil)", nn))
}
offset += nn
if maxBodySize > 0 && offset > maxBodySize {
@@ -2175,13 +2176,14 @@ func appendBodyFixedSize(r *bufio.Reader, dst []byte, n int) ([]byte, error) {
for {
nn, err := r.Read(dst[offset:])
if nn <= 0 {
if err != nil {
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
switch {
case errors.Is(err, io.EOF):

regorov (Apr 19, 2023):

https://cs.opensource.google/go/go/+/refs/tags/go1.20.3:src/errors/wrap.go;l=43

Is() is less efficient than a direct comparison. I would revert this change.

erikdubbelboer (Collaborator, Apr 19, 2023):

But errors.Is is more correct here, as the error could potentially be wrapped by the reader. The overhead is, again, not big enough to justify not doing this correctly.
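
For illustration, a minimal sketch (not part of this thread) of the difference: a direct comparison misses a wrapped io.EOF, while errors.Is still matches it.

package main

import (
	"errors"
	"fmt"
	"io"
)

func main() {
	// A reader may hand back io.EOF wrapped in additional context.
	wrapped := fmt.Errorf("read body: %w", io.EOF)

	fmt.Println(wrapped == io.EOF)          // false: direct comparison misses it
	fmt.Println(errors.Is(wrapped, io.EOF)) // true: errors.Is unwraps
}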

return dst[:offset], io.ErrUnexpectedEOF
case err != nil:
return dst[:offset], err
default:
return dst[:offset], fmt.Errorf("bufio.Read() returned (%d, nil)", nn)
}
panic(fmt.Sprintf("BUG: bufio.Read() returned (%d, nil)", nn))
}
offset += nn
if offset == dstLen {
@@ -2197,6 +2199,8 @@ type ErrBrokenChunk struct {

func readBodyChunked(r *bufio.Reader, maxBodySize int, dst []byte) ([]byte, error) {
if len(dst) > 0 {
// data integrity might be in danger. No idea what we received,
// but nothing we should write to.
panic("BUG: expected zero-length buffer")
}

1 change: 1 addition & 0 deletions lbclient.go
@@ -84,6 +84,7 @@ func (cc *LBClient) init() {
cc.mu.Lock()
defer cc.mu.Unlock()
if len(cc.Clients) == 0 {
// developer sanity-check
panic("BUG: LBClient.Clients cannot be empty")
}
for _, c := range cc.Clients {
8 changes: 3 additions & 5 deletions peripconn.go
@@ -1,7 +1,6 @@
package fasthttp

import (
"fmt"
"net"
"sync"
)
@@ -25,17 +24,16 @@ func (cc *perIPConnCounter) Register(ip uint32) int {

func (cc *perIPConnCounter) Unregister(ip uint32) {
cc.lock.Lock()
defer cc.lock.Unlock()
if cc.m == nil {
cc.lock.Unlock()
// developer safeguard
panic("BUG: perIPConnCounter.Register() wasn't called")
}
n := cc.m[ip] - 1
if n < 0 {
cc.lock.Unlock()
panic(fmt.Sprintf("BUG: negative per-ip counter=%d for ip=%d", n, ip))
n = 0
}
cc.m[ip] = n
cc.lock.Unlock()
}

type perIPConn struct {
14 changes: 0 additions & 14 deletions peripconn_test.go
@@ -25,8 +25,6 @@ func TestPerIPConnCounter(t *testing.T) {

var cc perIPConnCounter

expectPanic(t, func() { cc.Unregister(123) })

for i := 1; i < 100; i++ {
if n := cc.Register(123); n != i {
t.Fatalf("Unexpected counter value=%d. Expected %d", n, i)
@@ -43,21 +41,9 @@
}
cc.Unregister(456)

expectPanic(t, func() { cc.Unregister(123) })
expectPanic(t, func() { cc.Unregister(456) })

n = cc.Register(123)
if n != 1 {
t.Fatalf("Unexpected counter value=%d. Expected 1", n)
}
cc.Unregister(123)
}

func expectPanic(t *testing.T, f func()) {
defer func() {
if r := recover(); r == nil {
t.Fatalf("Expecting panic")
}
}()
f()
}
17 changes: 8 additions & 9 deletions server.go
@@ -1929,9 +1929,6 @@ func acceptConn(s *Server, ln net.Listener, lastPerIPErrorTime *time.Time) (net.
for {
c, err := ln.Accept()
if err != nil {
if c != nil {
panic("BUG: net.Listener returned non-nil conn and non-nil error")
}
if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
s.logger().Printf("Timeout error when accepting new connections: %v", netErr)
time.Sleep(time.Second)
@@ -1943,9 +1940,6 @@ func acceptConn(s *Server, ln net.Listener, lastPerIPErrorTime *time.Time) (net.
}
return nil, io.EOF
}
if c == nil {
panic("BUG: net.Listener returned (nil, nil)")
}

if tc, ok := c.(*net.TCPConn); ok && s.TCPKeepalive {
if err := tc.SetKeepAlive(s.TCPKeepalive); err != nil {
@@ -2578,7 +2572,7 @@ func (ctx *RequestCtx) LastTimeoutErrorResponse() *Response {

func writeResponse(ctx *RequestCtx, w *bufio.Writer) error {
if ctx.timeoutResponse != nil {
panic("BUG: cannot write timed out response")
return errors.New("cannot write timed out response")
}
err := ctx.Response.Write(w)

@@ -2596,8 +2590,8 @@ func acquireByteReader(ctxP **RequestCtx) (*bufio.Reader, error) {
c := ctx.c
s.releaseCtx(ctx)

// Make GC happy, so it could garbage collect ctx
// while we waiting for the next request.
// Make GC happy, so it could garbage collect ctx while we wait for the
// next request.
ctx = nil
*ctxP = nil

@@ -2612,6 +2606,7 @@ func acquireByteReader(ctxP **RequestCtx) (*bufio.Reader, error) {
return nil, io.EOF
}
if n != 1 {
// developer sanity-check
panic("BUG: Reader must return at least one byte")
}

@@ -2787,19 +2782,23 @@ func (fa *fakeAddrer) LocalAddr() net.Addr {
}

func (fa *fakeAddrer) Read(p []byte) (int, error) {
// developer sanity-check
panic("BUG: unexpected Read call")
}

func (fa *fakeAddrer) Write(p []byte) (int, error) {
// developer sanity-check
panic("BUG: unexpected Write call")
}

func (fa *fakeAddrer) Close() error {
// developer sanity-check
panic("BUG: unexpected Close call")
}

func (s *Server) releaseCtx(ctx *RequestCtx) {
if ctx.timeoutResponse != nil {
// developer sanity-check
panic("BUG: cannot release timed out RequestCtx")
}

1 change: 1 addition & 0 deletions stackless/func.go
@@ -20,6 +20,7 @@ import (
// at the moment due to high load.
func NewFunc(f func(ctx interface{})) func(ctx interface{}) bool {
if f == nil {
// developer sanity-check
panic("BUG: f cannot be nil")
}

1 change: 1 addition & 0 deletions timer.go
@@ -10,6 +10,7 @@ func initTimer(t *time.Timer, timeout time.Duration) *time.Timer {
return time.NewTimer(timeout)
}
if t.Reset(timeout) {
// developer sanity-check
panic("BUG: active timer trapped into initTimer()")
}
return t
4 changes: 2 additions & 2 deletions workerpool.go
@@ -47,7 +47,7 @@ type workerChan struct {

func (wp *workerPool) Start() {
if wp.stopCh != nil {
panic("BUG: workerPool already started")
return

regorov (Apr 19, 2023):

I would say the panic() here helps the developer find the place where Start() is called a second time. I would prefer to get a panic rather than swallow the mistake and leave a mess in the code.
}
wp.stopCh = make(chan struct{})
stopCh := wp.stopCh
@@ -72,7 +72,7 @@ func (wp *workerPool) Start() {

func (wp *workerPool) Stop() {
if wp.stopCh == nil {
panic("BUG: workerPool wasn't started")
return
}
close(wp.stopCh)
wp.stopCh = nil
