
websocket rebuild 2019 #1178

Closed
jjhesk opened this issue Jan 29, 2019 · 24 comments
@jjhesk commented Jan 29, 2019

I am getting malformed JSON from the websocket output. I have found that it is caused by a data race in the data transmission path; it is the same issue caused by unsafe access to the ByteBuffer.
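
A minimal sketch of this class of bug (hypothetical code, not the iris source): two goroutines write to one pooled buffer without synchronization, so the serialized frames interleave and the client receives malformed JSON. Running it with go run -race flags it immediately.

package main

import (
	"fmt"

	"github.com/valyala/bytebufferpool" // the pool the iris serializer uses
)

func main() {
	b := bytebufferpool.Get()
	done := make(chan struct{})

	go func() {
		for i := 0; i < 1000; i++ {
			b.WriteString(`{"a":1}`) // writer #1, no lock
		}
		close(done)
	}()

	for i := 0; i < 1000; i++ {
		b.WriteString(`{"b":2}`) // writer #2 races with writer #1
	}
	<-done

	// The buffer now holds interleaved, possibly corrupt JSON fragments.
	fmt.Println(len(b.B))
	bytebufferpool.Put(b)
}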

@kataras (Owner) commented Feb 2, 2019

Hmm, this is an easy fix; I will delay the next v11.2 for it. I have a question though: do you think this issue is related to the data race you experienced before with ws?

@jjhesk (Author) commented Feb 2, 2019

Yes, there are a few race errors around each of these events:

  1. stress test with over 300 connections
  2. all disconnected at once
  3. all connecting at once
  4. looping over the connections via sync.Map or a hashmap

Without the fix, the maximum is only around 190-233 connections.

These extreme conditions force out the race errors, and mutexes and sync.Map both fail to protect against them.

With my patch it can take 500 or more connections.

I have fixed the ByteBuffer and connection-management issues.

The code should look like this. Per the bytebufferpool contract, Put releases a byte buffer obtained via Get back to the pool, and the buffer must not be accessed after it has been returned. In message.go:


// websocketMessageSerialize serializes a custom websocket message from the
// websocketServer to be delivered to the client.
// It returns the serialized bytes of the message.
// Supported data types are: string, int, bool, bytes and JSON.
func (ms *messageSerializer) serialize(event string, data interface{}) ([]byte, error) {
	ms.Lock()
	defer ms.Unlock()

	b := ms.buf.Get()
	// Release the buffer back to the pool on return; its contents are
	// copied out below, so no caller ever touches a reused buffer.
	defer ms.buf.Put(b)

	b.Write(ms.prefix)
	b.WriteString(event)
	b.WriteByte(messageSeparatorByte)

	switch v := data.(type) {
	case string:
		b.WriteString(messageTypeString.String())
		b.WriteByte(messageSeparatorByte)
		b.WriteString(v)
	case int:
		b.WriteString(messageTypeInt.String())
		b.WriteByte(messageSeparatorByte)
		// binary.Write does not accept a plain int; convert to a fixed-size type.
		binary.Write(b, binary.LittleEndian, int64(v))
	case bool:
		b.WriteString(messageTypeBool.String())
		b.WriteByte(messageSeparatorByte)
		if v {
			b.Write(boolTrueB)
		} else {
			b.Write(boolFalseB)
		}
	case []byte:
		b.WriteString(messageTypeBytes.String())
		b.WriteByte(messageSeparatorByte)
		b.Write(v)
	default:
		// We assume it is JSON.
		res, err := json.Marshal(data)
		if err != nil {
			// The deferred Unlock and Put keep the serializer usable after
			// an error; the original early return here held the lock forever.
			return nil, err
		}
		b.WriteString(messageTypeJSON.String())
		b.WriteByte(messageSeparatorByte)
		b.Write(res)
	}

	// Copy the bytes out of the pooled buffer before it is released.
	message := b.Bytes()
	s := make([]byte, len(message))
	copy(s, message)

	return s, nil
}

@jjhesk (Author) commented Feb 2, 2019

Also, I have custom-built a channel-based sync map for connection management, specifically to take care of the connection race issues under resource-limited conditions.


// ConnectionMap is a thread-safe map (type: `map[string]*connection`).
// It uses a channel and a single owner goroutine, not a mutex.
type ConnectionMap interface {
	// Store sets the given value under the specified key.
	Store(k string, v *connection)

	// Load retrieves an item from the map under the given key.
	Load(k string) (*connection, bool)

	// Delete removes an item from the map.
	Delete(k string)

	// Count returns the number of items within the map.
	Count() int

	Map() map[string]*connection

	Range(func(connectID string, uDB *connection) bool)

	Clear()
}

// Compile-time check that *connmap implements ConnectionMap.
var _ ConnectionMap = (*connmap)(nil)

type connmap struct {
	m map[string]*connection
	c chan command
}

type command struct {
	action    int
	key       string
	value     *connection
	rangeloop func(connectID string, uDB *connection) bool
	result    chan<- interface{}
}

const (
	set = iota
	get
	remove
	count
	show
	clear
	list
	ranged
)

func (sm *connmap) Range(ug func(connectID string, uDB *connection) bool) {
	callback := make(chan interface{})
	sm.c <- command{action: ranged, rangeloop: ug, result: callback}
	<-callback
}

func (sm *connmap) Map() map[string]*connection {
	callback := make(chan interface{})
	sm.c <- command{action: list, result: callback}
	return (<-callback).(map[string]*connection)
}

// Store sets the given value under the specified key.
// It does not wait for the owner goroutine to apply the write.
func (sm *connmap) Store(k string, v *connection) {
	sm.c <- command{action: set, key: k, value: v}
}

// Load retrieves an item from the map under the given key.
func (sm *connmap) Load(k string) (*connection, bool) {
	callback := make(chan interface{})
	sm.c <- command{action: get, key: k, result: callback}
	result := (<-callback).([2]interface{})
	return result[0].(*connection), result[1].(bool)
}

// Delete removes an item from the map.
func (sm *connmap) Delete(k string) {
	sm.c <- command{action: remove, key: k}
}

// Count returns the number of items within the map.
func (sm *connmap) Count() int {
	callback := make(chan interface{})
	sm.c <- command{action: count, result: callback}
	return (<-callback).(int)
}

func (sm *connmap) Clear() {
	callback := make(chan interface{})
	sm.c <- command{action: clear, result: callback}
	<-callback
}

// run is the single owner goroutine: all map access happens here,
// serialized by the command channel.
func (sm *connmap) run() {
	for {
		cmd := <-sm.c
		switch cmd.action {
		case set:
			sm.m[cmd.key] = cmd.value
		case get:
			v, ok := sm.m[cmd.key]
			cmd.result <- [2]interface{}{v, ok}
		case remove:
			delete(sm.m, cmd.key)
		case clear:
			sm.m = map[string]*connection{}
			cmd.result <- sm.m
		case count:
			cmd.result <- len(sm.m)
		case show:
			cmd.result <- fmt.Sprint(sm.m)
		case ranged:
			exec := cmd.rangeloop
			for k, v := range sm.m {
				if !exec(k, v) {
					break
				}
			}
			cmd.result <- true
		case list:
			cmd.result <- sm.m
		}
	}
}

// NewConnectionMap creates a new shared map.
func NewConnectionMap() ConnectionMap {
	sm := &connmap{
		m: make(map[string]*connection),
		c: make(chan command),
	}
	go sm.run()
	return sm
}

// NewConnectionMapC creates a new shared map with an initial capacity.
func NewConnectionMapC(cap int) ConnectionMap {
	sm := &connmap{
		m: make(map[string]*connection, cap),
		c: make(chan command),
	}
	go sm.run()
	return sm
}

// String is the default print method.
func (sm *connmap) String() string {
	callback := make(chan interface{})
	sm.c <- command{action: show, result: callback}
	return (<-callback).(string)
}
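
For reference, a minimal usage sketch of this map (it would live in the same package, since connection is unexported; the code is hypothetical):

func exampleConnectionMapUsage() {
	cm := NewConnectionMap()

	cm.Store("conn-1", &connection{})

	if c, ok := cm.Load("conn-1"); ok {
		_ = c // use the connection
	}

	cm.Range(func(connectID string, uDB *connection) bool {
		return true // return false to stop iterating
	})

	fmt.Println(cm.Count()) // 1
	cm.Delete("conn-1")
	cm.Clear()
}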

@kataras (Owner) commented Feb 3, 2019

Hey, that is fairly good code you provided us. Did you run any kind of benchmark to see the downsides in terms of performance?

@kataras (Owner) commented Feb 7, 2019

@jjhesk I asked for the test code snippet you used to test those; you may not want to share it, so I made one that simulates your case:

  1. 600+600 clients connecting at once
  2. loop through all available connections every 2 seconds, async
  3. the first group of 600 connections all disconnect after 5 seconds, at once
  4. the second group of 600 connections all disconnect after 3 seconds, at once

And, as expected, I don't have any issues running those on the same machine with limited resources. Here it is:

$ cd ./server && go run -race main.go

server/main.go

package main

import (
	"fmt"
	"sync/atomic"
	"time"

	"github.com/kataras/iris"
	"github.com/kataras/iris/websocket"
)

func main() {
	app := iris.New()
	ws := websocket.New(websocket.Config{})
	ws.OnConnection(handleConnection)
	app.Get("/socket", ws.Handler())
	go func() {
		t := time.NewTicker(2 * time.Second)
		for {
			<-t.C

			conns := ws.GetConnections()
			for _, conn := range conns {
				fmt.Println(conn.ID())
			}

			if atomic.LoadUint64(&count) == 1200 {
				fmt.Println("ALL CLIENTS DISCONNECTED")
				t.Stop()
				return
			}
		}
	}()

	app.Run(iris.Addr(":8080"))
}

func handleConnection(c websocket.Connection) {
	c.OnDisconnect(func() { handleDisconnect(c) })
}

var count uint64

func handleDisconnect(c websocket.Connection) {
	atomic.AddUint64(&count, 1)
	fmt.Println("client disconnected!")
}
$ cd ./client && go run -race main.go

client/main.go

package main

import (
	"fmt"
	"sync"
	"time"

	xwebsocket "golang.org/x/net/websocket"
)

var (
	origin = "http://localhost/"
	url    = "ws://localhost:8080/socket"
)

func main() {
	wg := new(sync.WaitGroup)
	for i := 0; i < 600; i++ {
		wg.Add(1)
		go connect(wg, 5*time.Second)
	}

	for i := 0; i < 600; i++ {
		wg.Add(1)
		go connect(wg, 3*time.Second)
	}

	wg.Wait()
	fmt.Println("ALL OK")
	time.Sleep(5 * time.Second)
}

func connect(wg *sync.WaitGroup, alive time.Duration) {
	conn, err := xwebsocket.Dial(url, "", origin)
	if err != nil {
		panic(err)
	}

	go func() {
		time.Sleep(alive)
		if err := conn.Close(); err != nil {
			panic(err)
		}

		wg.Done()
	}()
}

Please give me example code to reproduce your case, so I can really help you; it may not be an iris issue at all. Thank you a lot!

@kataras (Owner) commented Feb 14, 2019

Hello @jjhesk, I am working on a websocket client for Go client apps as well; see the whole progress at: #1175. Based on these I made a better example which covers your test case 100% (plus emitting data and pings, 600 random disconnections, and 600 same-time disconnections). It had tricky parts, but they are all fixed. You can verify that yourself; take a look below:

server/main.go

package main

import (
	"fmt"
	"os"
	"sync/atomic"
	"time"

	"github.com/kataras/iris"
	"github.com/kataras/iris/websocket"
)

const totalClients = 1200

func main() {
	app := iris.New()

	// websocket.Config{PingPeriod: ((60 * time.Second) * 9) / 10}
	ws := websocket.New(websocket.Config{})
	ws.OnConnection(handleConnection)
	app.Get("/socket", ws.Handler())

	go func() {
		t := time.NewTicker(2 * time.Second)
		for {
			<-t.C

			conns := ws.GetConnections()
			for _, conn := range conns {
				// fmt.Println(conn.ID())
				// Do nothing.
				_ = conn
			}

			if atomic.LoadUint64(&count) == totalClients {
				fmt.Println("ALL CLIENTS DISCONNECTED SUCCESSFULLY.")
				t.Stop()
				os.Exit(0)
				return
			}
		}
	}()

	app.Run(iris.Addr(":8080"))
}

func handleConnection(c websocket.Connection) {
	c.OnError(func(err error) { handleErr(c, err) })
	c.OnDisconnect(func() { handleDisconnect(c) })
	c.On("chat", func(message string) {
		c.To(websocket.Broadcast).Emit("chat", c.ID()+": "+message)
	})
}

var count uint64

func handleDisconnect(c websocket.Connection) {
	atomic.AddUint64(&count, 1)
	fmt.Printf("client [%s] disconnected!\n", c.ID())
}

func handleErr(c websocket.Connection, err error) {
	fmt.Printf("client [%s] errored: %v\n", c.ID(), err)
}

client/main.go

package main

import (
	"bufio"
	"fmt"
	"math/rand"
	"os"
	"sync"
	"time"

	"github.com/kataras/iris/websocket"
)

var (
	url    = "ws://localhost:8080/socket"
	f      *os.File
)

const totalClients = 1200

func main() {
	var err error
	f, err = os.Open("./test.data")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	wg := new(sync.WaitGroup)
	for i := 0; i < totalClients/2; i++ {
		wg.Add(1)
		go connect(wg, 5*time.Second)
	}

	for i := 0; i < totalClients/2; i++ {
		wg.Add(1)
		waitTime := time.Duration(rand.Intn(10)) * time.Millisecond
		time.Sleep(waitTime)
		go connect(wg, 10*time.Second+waitTime)
	}

	wg.Wait()
	fmt.Println("ALL OK.")
	time.Sleep(5 * time.Second)
}

func connect(wg *sync.WaitGroup, alive time.Duration) {

	c, err := websocket.Dial(url, websocket.ConnectionConfig{})
	if err != nil {
		panic(err)
	}

	c.OnError(func(err error) {
		fmt.Printf("error: %v", err)
	})

	disconnected := false
	c.OnDisconnect(func() {
		fmt.Printf("I am disconnected after [%s].\n", alive)
		disconnected = true
	})

	c.On("chat", func(message string) {
		fmt.Printf("\n%s\n", message)
	})

	go func() {
		time.Sleep(alive)
		if err := c.Disconnect(); err != nil {
			panic(err)
		}

		wg.Done()
	}()

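	// Note: the scanner below reads from the single *os.File shared by
	// every connect goroutine, so these reads race with one another; this
	// is rough load generation, not safe concurrent file access.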
	scanner := bufio.NewScanner(f)
	for !disconnected {
		if !scanner.Scan() || scanner.Err() != nil {
			break
		}

		c.Emit("chat", scanner.Text())
	}
}

where test.data is a file of random text lines, long and short

@jjhesk (Author) commented Feb 14, 2019

@kataras hi there, I used Node.js for the test case. I will write up some test cases, but they only work against my further-developed application APIs. I will get you the starter test cases as well.

@jjhesk (Author) commented Feb 16, 2019

@kataras I have the latest update of the whole websocket engine; there are still a lot of bugs when using it. I think we should take a look at another team's results from a recent conference. This repository demonstrates how a very high number of websocket connections can be maintained efficiently on Linux:

https://github.com/eranyanay/1m-go-websockets

I think I will switch to this, but I want to keep the same format as the current tags and related items.

@jjhesk (Author) commented Feb 16, 2019

I think we have to follow their findings and integrate them into iris-go. Check it out: https://medium.freecodecamp.org/million-websockets-and-go-cc58418460bb
https://speakerdeck.com/eranyanay/going-infinite-handling-1m-websockets-connections-in-go

From what I understand now, there are 2 major caveats:

  1. goroutines - reduce the number of goroutines spawned per new connection
  2. buffer allocations - keep a reference to the underlying buffers returned by Hijack() (sketched at the end of this comment)

Yeah, I have been learning a lot about stability and optimization. There are a lot of articles to read:
https://faceair.me/
https://raft.github.io/
https://github.com/lni/dragonboat
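
On caveat 2, a hypothetical sketch of what keeping the Hijack() buffers means (plain net/http, not iris code): Hijack() hands back the raw net.Conn together with the *bufio.ReadWriter that net/http had already allocated for the request, and reusing or pooling that buffer pair instead of allocating fresh readers/writers is where the per-connection memory savings come from.

package main

import "net/http"

func upgradeHandler(w http.ResponseWriter, r *http.Request) {
	hj, ok := w.(http.Hijacker)
	if !ok {
		http.Error(w, "hijacking unsupported", http.StatusInternalServerError)
		return
	}

	conn, buf, err := hj.Hijack()
	if err != nil {
		return
	}

	// From here the server no longer manages conn; the websocket handshake
	// and framing would run over buf.Reader and buf.Writer directly.
	_ = conn
	_ = buf
}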

@jjhesk (Author) commented Feb 16, 2019

Additionally, add an epoller object. Here is more tooling: https://github.com/mailru/easygo/tree/master/netpoll

package main

import (
	"golang.org/x/sys/unix"
	"log"
	"net"
	"reflect"
	"sync"
	"syscall"
)

type epoll struct {
	fd          int
	connections map[int]net.Conn
	lock        *sync.RWMutex
}

func MkEpoll() (*epoll, error) {
	fd, err := unix.EpollCreate1(0)
	if err != nil {
		return nil, err
	}
	return &epoll{
		fd:          fd,
		lock:        &sync.RWMutex{},
		connections: make(map[int]net.Conn),
	}, nil
}

func (e *epoll) Add(conn net.Conn) error {
	// Extract file descriptor associated with the connection
	fd := websocketFD(conn)
	err := unix.EpollCtl(e.fd, syscall.EPOLL_CTL_ADD, fd, &unix.EpollEvent{Events: unix.POLLIN | unix.POLLHUP, Fd: int32(fd)})
	if err != nil {
		return err
	}
	e.lock.Lock()
	defer e.lock.Unlock()
	e.connections[fd] = conn
	if len(e.connections)%100 == 0 {
		log.Printf("Total number of connections: %v", len(e.connections))
	}
	return nil
}

func (e *epoll) Remove(conn net.Conn) error {
	fd := websocketFD(conn)
	err := unix.EpollCtl(e.fd, syscall.EPOLL_CTL_DEL, fd, nil)
	if err != nil {
		return err
	}
	e.lock.Lock()
	defer e.lock.Unlock()
	delete(e.connections, fd)
	if len(e.connections)%100 == 0 {
		log.Printf("Total number of connections: %v", len(e.connections))
	}
	return nil
}

func (e *epoll) Wait() ([]net.Conn, error) {
	events := make([]unix.EpollEvent, 100)
	n, err := unix.EpollWait(e.fd, events, 100)
	if err != nil {
		return nil, err
	}
	e.lock.RLock()
	defer e.lock.RUnlock()
	var connections []net.Conn
	for i := 0; i < n; i++ {
		conn := e.connections[int(events[i].Fd)]
		connections = append(connections, conn)
	}
	return connections, nil
}

func websocketFD(conn net.Conn) int {
	//tls := reflect.TypeOf(conn.UnderlyingConn()) == reflect.TypeOf(&tls.Conn{})
	// Extract the file descriptor associated with the connection
	//connVal := reflect.Indirect(reflect.ValueOf(conn)).FieldByName("conn").Elem()
	tcpConn := reflect.Indirect(reflect.ValueOf(conn)).FieldByName("conn")
	//if tls {
	//	tcpConn = reflect.Indirect(tcpConn.Elem())
	//}
	fdVal := tcpConn.FieldByName("fd")
	pfdVal := reflect.Indirect(fdVal).FieldByName("pfd")

	return int(pfdVal.FieldByName("Sysfd").Int())
}
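
For context, a sketch of the read loop that would drive this epoller, in the spirit of the 1m-go-websockets server. It would sit in the same file as the epoll code above (wsutil is github.com/gobwas/ws/wsutil; the error handling is illustrative):

func runReadLoop(epoller *epoll) {
	for {
		// Block until some of the registered fds become readable.
		connections, err := epoller.Wait()
		if err != nil {
			log.Printf("epoll wait failed: %v", err)
			continue
		}
		for _, conn := range connections {
			if conn == nil {
				break
			}
			// One read per ready connection, with no goroutine parked per
			// socket; that is the whole point of the epoll approach.
			if _, _, err := wsutil.ReadClientData(conn); err != nil {
				// The client went away: stop polling its fd and close it.
				if err := epoller.Remove(conn); err != nil {
					log.Printf("failed to remove: %v", err)
				}
				conn.Close()
			}
		}
	}
}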

@jjhesk (Author) commented Feb 16, 2019

optimisation 2
Use ScheduleTimeout, as in https://github.com/faceair/fastsocket/blob/master/pool.go

  • Try to accept each incoming connection inside a free pool worker.
  • If there are no free workers for 1ms, do not accept anything and try again later.
  • This helps prevent self-DDoS and out-of-resource-limit cases.
// Adapted from the article's accept loop; pool, poller and NewChannel are
// the article's own helpers (a minimal pool sketch follows below).
import (
	"net"
	"time"

	"github.com/gobwas/ws"
)

ln, _ := net.Listen("tcp", ":8080")

for {
	// Try to accept an incoming connection inside a free pool worker.
	// If there are no free workers for 1ms, do not accept anything and try later.
	// This helps prevent self-ddos and out-of-resource-limit cases.
	err := pool.ScheduleTimeout(time.Millisecond, func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		if _, err = ws.Upgrade(conn); err != nil {
			conn.Close()
			return
		}

		// Wrap the WebSocket connection with the article's Channel struct,
		// which handles/sends the app's packets.
		ch := NewChannel(conn)

		// Wait for incoming bytes from the connection.
		poller.Start(conn, netpoll.EventRead, func() {
			// Do not cross the resource limits.
			pool.Schedule(func() {
				// Read and handle incoming packet(s).
				ch.Receive()
			})
		})
	})
	if err != nil {
		time.Sleep(time.Millisecond)
	}
}
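
The pool referenced above is not in the snippet; here is a minimal sketch of it, modeled on the gopool package from the gobwas/ws-examples article (names are illustrative, not a published API):

package gopool

import (
	"errors"
	"time"
)

var ErrScheduleTimeout = errors.New("schedule error: timed out")

// Pool caps concurrency at a fixed number of reusable worker goroutines.
type Pool struct {
	work chan func()
}

func NewPool(workers int) *Pool {
	p := &Pool{work: make(chan func())}
	for i := 0; i < workers; i++ {
		go p.worker()
	}
	return p
}

func (p *Pool) worker() {
	for task := range p.work {
		task()
	}
}

// Schedule blocks until a free worker picks up the task.
func (p *Pool) Schedule(task func()) {
	p.work <- task
}

// ScheduleTimeout gives up if no worker frees up within the timeout; this
// is what lets the accept loop shed load instead of piling up goroutines.
func (p *Pool) ScheduleTimeout(timeout time.Duration, task func()) error {
	select {
	case p.work <- task:
		return nil
	case <-time.After(timeout):
		return ErrScheduleTimeout
	}
}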

@kataras (Owner) commented Feb 16, 2019

If you want help putting iris features into your ongoing transition to that library, I would love to participate, so that I am better prepared to offer something like this to all iris users too. You can contact me at [email protected] and invite me to a private repo or something like that (or a public one, if you are allowed to).

epoll is already used by the Go runtime's network poller when the host OS is Linux: https://golang.org/src/runtime/netpoll_epoll.go

kataras added a commit that referenced this issue Feb 17, 2019
…. It implements the gobwas/ws library (it works but need fixes on determinate closing connections) as suggested at: #1178
@kataras (Owner) commented Feb 17, 2019

@jjhesk I couldn't wait until tomorrow, nor sleep, without acting on your comments. So I stayed awake and coded a temporary websocket2 package which is based on gobwas/ws as you suggested, without any serious changes to the end-dev API. I need to fix some more things to close connections more deterministically, but I'll do that tomorrow. Please check https://github.com/kataras/iris/tree/v11.2.0/websocket2 and the example https://github.com/kataras/iris/tree/v11.2.0/_examples/websocket/go-client-stress-test (lower the total connections constant on both client and server if it throws errors about closed conns <- that is the remaining issue I was unable to fix today; it is 4:43 in the morning now).

@jjhesk (Author) commented Feb 17, 2019

Right: open a new package, put everything into it, and test it out!
Once the coding is done, the next step is to QA it and make sure all the code runs solid.

The coverage testing setup should be something like this...

  1. given
    setup plan A: 32 cores, 32GB RAM, Intel, Ubuntu 18.04.1; raise the open-file and process limits to 65540.
    setup plan B: 16 cores, 16GB RAM, Intel, Ubuntu 18.04.1; same limits.
    setup plan C: 8 cores, 8GB RAM, Intel, Ubuntu 18.04.1; same limits.
    setup plan D: 4 cores, 8GB RAM, Intel, Ubuntu 18.04.1; same limits.

rundown test

  1. Stay connected for 10 hrs with some operations running (maybe 1% of the operations); different clients connected, averaging 1-2k clients, ideally from different IPs.

  2. Among all the connections, 5% rapidly disconnect after the login mechanism or after some sort of awaited computation.

  3. Disconnect everything at once every 5 hrs, reconnect them all, and repeat that loop.

  4. Health-check for these events:

  • broken pipe
  • refused connections
  • unable to reconnect, or connections simply stop: status 1005, 1006
  • data race detection
  • malformed or broken JSON serialisations
  • goroutine overruns causing deadlocks

  5. Let it dry-run for 2 weeks and review the results.

@jjhesk (Author) commented Feb 17, 2019

My work-in-progress repo: https://github.com/GoLandr/iris
Branch: https://github.com/GoLandr/iris/tree/ws1m
I have named the package ws1m.

For the websocket upgrade work I would suggest opening a branch or a dedicated pull request just for sorting out these problems.

jjhesk changed the title from "melformed json" to "websocket rebuild 2019" on Feb 17, 2019
@jjhesk (Author) commented Feb 17, 2019

@kataras (Owner) commented Feb 17, 2019

Hello @jjhesk, I can't follow the changes on the PR you opened; it contains all of the #1175 changes I've made for v11.2 as well. Can you please close that PR and fetch from and push to #1175 instead? That way we can all see the progress and work together; currently I can't do that on #1195.

@jjhesk (Author) commented Feb 17, 2019

Yes, noticed and commented on the PR.

@jjhesk (Author) commented Feb 17, 2019

@kataras the websocket2 package is working correctly, so I'm not going to modify it. I am running a series of tests against different cases for this package. On the other hand, ws1m from my PR is still being worked on; I am trying to build logic that matches the optimisation rules, and I will need some help to complete that.

@jjhesk (Author) commented Feb 17, 2019

From websocket2 alone, so far I got this... it runs pretty well under 50 connections with some operations.

==================
WARNING: DATA RACE
Write at 0x00c0004cf0b0 by goroutine 168:
  runtime.mapdelete_faststr()
      /usr/local/Cellar/go/1.11.4/libexec/src/runtime/map_faststr.go:281 +0x0
  _/Users/_local_user/Documents/bbsrun/backendc/main/temp.(*Server).Disconnect()
      /Users/_local_user/Documents/bbsrun/backendc/main/temp/server.go:399 +0x1d9
  _/Users/_local_user/Documents/bbsrun/backendc/main/temp.(*connection).Disconnect()
      /Users/_local_user/Documents/bbsrun/backendc/main/temp/connection.go:658 +0x1f5
  _/Users/_local_user/Documents/bbsrun/backendc/main/temp.(*connection).startReader()
      /Users/_local_user/Documents/bbsrun/backendc/main/temp/connection.go:454 +0x23f
  _/Users/_local_user/Documents/bbsrun/backendc/main/temp.(*connection).Wait()
      /Users/_local_user/Documents/bbsrun/backendc/main/temp/connection.go:645 +0x8f
  _/Users/_local_user/Documents/bbsrun/backendc/main/temp.(*Server).Handler.func1()
      /Users/_local_user/Documents/bbsrun/backendc/main/temp/server.go:101 +0x155
  github.com/kataras/iris/context.DefaultNext()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:1208 +0x134
  github.com/kataras/iris/context.(*context).Next()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:1217 +0x5b
  _/Users/_local_user/Documents/bbsrun/backendc/main/core_x.SetupWebCombineClient.func2()
      /Users/_local_user/Documents/bbsrun/backendc/main/core_x/webhost.go:31 +0x117
  github.com/kataras/iris/context.Do()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:922 +0xa5
  github.com/kataras/iris/context.(*context).Do()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:1094 +0x62
  github.com/kataras/iris/core/router.(*routerHandler).HandleRequest()
      /Users/_local_user/go/src/github.com/kataras/iris/core/router/handler.go:227 +0x751
  github.com/kataras/iris/core/router.(*Router).BuildRouter.func1()
      /Users/_local_user/go/src/github.com/kataras/iris/core/router/router.go:84 +0xc4
  github.com/kataras/iris/core/router.(*Router).ServeHTTP()
      /Users/_local_user/go/src/github.com/kataras/iris/core/router/router.go:161 +0x69
  net/http.serverHandler.ServeHTTP()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2741 +0xc4
  net/http.(*conn).serve()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1847 +0x80a

Previous read at 0x00c0004cf0b0 by goroutine 11:
  runtime.mapaccess2_faststr()
      /usr/local/Cellar/go/1.11.4/libexec/src/runtime/map_faststr.go:101 +0x0
  _/Users/_local_user/Documents/bbsrun/backendc/main/temp.(*Server).emitMessage()
      /Users/_local_user/Documents/bbsrun/backendc/main/temp/server.go:135 +0x1c5
  _/Users/_local_user/Documents/bbsrun/backendc/main/temp.(*emitter).EmitMessage()
      /Users/_local_user/Documents/bbsrun/backendc/main/temp/emitter.go:32 +0x127
  _/Users/_local_user/Documents/bbsrun/backendc/main/temp.(*emitter).Emit()
      /Users/_local_user/Documents/bbsrun/backendc/main/temp/emitter.go:41 +0x10e
  _/Users/_local_user/Documents/bbsrun/backendc/main/temp.(*connection).Emit()
      /Users/_local_user/Documents/bbsrun/backendc/main/temp/connection.go:581 +0x1c6
  _/Users/_local_user/Documents/bbsrun/backendc/main/core_x.EmitToAllSubbed()
      /Users/_local_user/Documents/bbsrun/backendc/main/core_x/ws_core.go:405 +0x23d
  _/Users/_local_user/Documents/bbsrun/backendc/main/core_x.loopMonitorBB.func3.1()
      /Users/_local_user/Documents/bbsrun/backendc/main/core_x/bd_bigbang.go:125 +0xa2
  _/Users/_local_user/Documents/bbsrun/backendc/main/core_x.ExecSyncRountine()
      /Users/_local_user/Documents/bbsrun/backendc/main/core_x/connectionLock.go:60 +0x74

Goroutine 168 (running) created at:
  net/http.(*Server).Serve()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2851 +0x4c5
  github.com/kataras/iris/core/host.(*Supervisor).Serve.func1()
      /Users/_local_user/go/src/github.com/kataras/iris/core/host/supervisor.go:220 +0x83
  github.com/kataras/iris/core/host.(*Supervisor).supervise()
      /Users/_local_user/go/src/github.com/kataras/iris/core/host/supervisor.go:192 +0x4b
  github.com/kataras/iris/core/host.(*Supervisor).Serve()
      /Users/_local_user/go/src/github.com/kataras/iris/core/host/supervisor.go:220 +0x78
  github.com/kataras/iris/core/host.(*Supervisor).ListenAndServe()
      /Users/_local_user/go/src/github.com/kataras/iris/core/host/supervisor.go:232 +0x86
  github.com/kataras/iris.Addr.func1()
      /Users/_local_user/go/src/github.com/kataras/iris/iris.go:666 +0x124
  github.com/kataras/iris.(*Application).Run()
      /Users/_local_user/go/src/github.com/kataras/iris/iris.go:821 +0x177
  _/Users/_local_user/Documents/bbsrun/backendc/main/core_x.SetupWebCombineClient()
      /Users/_local_user/Documents/bbsrun/backendc/main/core_x/webhost.go:63 +0xab6
  main.startProc()
      /Users/_local_user/Documents/bbsrun/backendc/main/main.go:57 +0x103
  main.main()
      /Users/_local_user/Documents/bbsrun/backendc/main/main.go:36 +0x57

Goroutine 11 (running) created at:
  _/Users/_local_user/Documents/bbsrun/backendc/main/core_x.loopMonitorBB.func3()
      /Users/_local_user/Documents/bbsrun/backendc/main/core_x/bd_bigbang.go:124 +0x20d
  _/Users/_local_user/Documents/bbsrun/backendc/main/core_x.CoreLoopV3()
      /Users/_local_user/Documents/bbsrun/backendc/main/core_x/bd_core.go:92 +0x1dc
  _/Users/_local_user/Documents/bbsrun/backendc/main/core_x.CoreLoopEngineV2()
      /Users/_local_user/Documents/bbsrun/backendc/main/core_x/bd_core.go:26 +0x1e3
  _/Users/_local_user/Documents/bbsrun/backendc/main/core_x.loopMonitorBB()
      /Users/_local_user/Documents/bbsrun/backendc/main/core_x/bd_bigbang.go:12 +0xe0
==================

==================
WARNING: DATA RACE
Write at 0x00c000f1e380 by goroutine 694:
  runtime.slicecopy()
      /usr/local/Cellar/go/1.11.4/libexec/src/runtime/slice.go:221 +0x0
  _/Users/_local_user/Documents/project/backendc/main/temp.(*messageSerializer).serialize()
      /Users/_local_user/go/src/github.com/valyala/bytebufferpool/bytebuffer.go:73 +0x17b
  _/Users/_local_user/Documents/project/backendc/main/temp.(*emitter).Emit()
      /Users/_local_user/Documents/project/backendc/main/temp/emitter.go:37 +0xa0
  _/Users/_local_user/Documents/project/backendc/main/temp.(*connection).Emit()
      /Users/_local_user/Documents/project/backendc/main/temp/connection.go:581 +0x1c6
  _/Users/_local_user/Documents/project/backendc/main/core_x.EmitToAllSubbed()
      /Users/_local_user/Documents/project/backendc/main/core_x/ws_core.go:405 +0x23d
  _/Users/_local_user/Documents/project/backendc/main/core_x.loopMonitorBB.func3.1()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_bigbang.go:125 +0xa2
  _/Users/_local_user/Documents/project/backendc/main/core_x.ExecSyncRountine()
      /Users/_local_user/Documents/project/backendc/main/core_x/connectionLock.go:60 +0x74

Previous read at 0x00c000f1e380 by goroutine 116:
  internal/race.ReadRange()
      /usr/local/Cellar/go/1.11.4/libexec/src/internal/race/race.go:45 +0x42
  syscall.Write()
      /usr/local/Cellar/go/1.11.4/libexec/src/syscall/syscall_unix.go:193 +0xaa
  internal/poll.(*FD).Write()
      /usr/local/Cellar/go/1.11.4/libexec/src/internal/poll/fd_unix.go:268 +0x1d8
  net.(*netFD).Write()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/fd_unix.go:220 +0x65
  net.(*conn).Write()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/net.go:189 +0xa0
  net.(*TCPConn).Write()
      <autogenerated>:1 +0x69
  github.com/gobwas/ws.WriteFrame()
      /Users/_local_user/go/src/github.com/gobwas/ws/write.go:111 +0xd7
  github.com/gobwas/ws/wsutil.writeFrame()
      /Users/_local_user/go/src/github.com/gobwas/ws/wsutil/writer.go:449 +0x1ca
  github.com/gobwas/ws/wsutil.WriteMessage()
      /Users/_local_user/go/src/github.com/gobwas/ws/wsutil/helper.go:161 +0x81
  _/Users/_local_user/Documents/project/backendc/main/temp.(*connection).Write()
      /Users/_local_user/Documents/project/backendc/main/temp/connection.go:359 +0x142
  _/Users/_local_user/Documents/project/backendc/main/temp.(*connection).writeDefault()
      /Users/_local_user/Documents/project/backendc/main/temp/connection.go:371 +0x76
  _/Users/_local_user/Documents/project/backendc/main/temp.(*Server).emitMessage()
      /Users/_local_user/Documents/project/backendc/main/temp/server.go:349 +0x2d2
  _/Users/_local_user/Documents/project/backendc/main/temp.(*emitter).EmitMessage()
      /Users/_local_user/Documents/project/backendc/main/temp/emitter.go:32 +0x127
  _/Users/_local_user/Documents/project/backendc/main/temp.(*emitter).Emit()
      /Users/_local_user/Documents/project/backendc/main/temp/emitter.go:41 +0x10e
  _/Users/_local_user/Documents/project/backendc/main/temp.(*connection).Emit()
      /Users/_local_user/Documents/project/backendc/main/temp/connection.go:581 +0x1c6
  _/Users/_local_user/Documents/project/backendc/main/core_x.StartWebsocket.func1.12.1()
      /Users/_local_user/Documents/project/backendc/main/core_x/ws_core.go:371 +0x73
  _/Users/_local_user/Documents/project/backendc/main/core_x.ExecSyncRountine()
      /Users/_local_user/Documents/project/backendc/main/core_x/connectionLock.go:60 +0x74
  _/Users/_local_user/Documents/project/backendc/main/core_x.StartWebsocket.func1.12()
      /Users/_local_user/Documents/project/backendc/main/core_x/ws_core.go:370 +0x73

Goroutine 694 (running) created at:
  _/Users/_local_user/Documents/project/backendc/main/core_x.loopMonitorBB.func3()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_bigbang.go:124 +0x20d
  _/Users/_local_user/Documents/project/backendc/main/core_x.CoreLoopV3()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_core.go:92 +0x1dc
  _/Users/_local_user/Documents/project/backendc/main/core_x.CoreLoopEngineV2()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_core.go:26 +0x1e3
  _/Users/_local_user/Documents/project/backendc/main/core_x.loopMonitorBB()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_bigbang.go:12 +0xe0

Goroutine 116 (running) created at:
  _/Users/_local_user/Documents/project/backendc/main/core_x.StartWebsocket.func1()
      /Users/_local_user/Documents/project/backendc/main/core_x/ws_core.go:367 +0xce1
  _/Users/_local_user/Documents/project/backendc/main/temp.(*Server).Handler.func1()
      /Users/_local_user/Documents/project/backendc/main/temp/server.go:97 +0xf4
  github.com/kataras/iris/context.DefaultNext()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:1208 +0x134
  github.com/kataras/iris/context.(*context).Next()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:1217 +0x5b
  _/Users/_local_user/Documents/project/backendc/main/core_x.SetupWebCombineClient.func2()
      /Users/_local_user/Documents/project/backendc/main/core_x/webhost.go:31 +0x117
  github.com/kataras/iris/context.Do()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:922 +0xa5
  github.com/kataras/iris/context.(*context).Do()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:1094 +0x62
  github.com/kataras/iris/core/router.(*routerHandler).HandleRequest()
      /Users/_local_user/go/src/github.com/kataras/iris/core/router/handler.go:227 +0x751
  github.com/kataras/iris/core/router.(*Router).BuildRouter.func1()
      /Users/_local_user/go/src/github.com/kataras/iris/core/router/router.go:84 +0xc4
  github.com/kataras/iris/core/router.(*Router).ServeHTTP()
      /Users/_local_user/go/src/github.com/kataras/iris/core/router/router.go:161 +0x69
  net/http.serverHandler.ServeHTTP()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2741 +0xc4
  net/http.(*conn).serve()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1847 +0x80a
==================
==================
WARNING: DATA RACE
Write at 0x00c000f1e3a1 by goroutine 694:
  runtime.slicestringcopy()
      /usr/local/Cellar/go/1.11.4/libexec/src/runtime/slice.go:256 +0x0
  _/Users/_local_user/Documents/project/backendc/main/temp.(*messageSerializer).serialize()
      /Users/_local_user/go/src/github.com/valyala/bytebufferpool/bytebuffer.go:89 +0x1b6f
  _/Users/_local_user/Documents/project/backendc/main/temp.(*emitter).Emit()
      /Users/_local_user/Documents/project/backendc/main/temp/emitter.go:37 +0xa0
  _/Users/_local_user/Documents/project/backendc/main/temp.(*connection).Emit()
      /Users/_local_user/Documents/project/backendc/main/temp/connection.go:581 +0x1c6
  _/Users/_local_user/Documents/project/backendc/main/core_x.EmitToAllSubbed()
      /Users/_local_user/Documents/project/backendc/main/core_x/ws_core.go:405 +0x23d
  _/Users/_local_user/Documents/project/backendc/main/core_x.loopMonitorBB.func3.1()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_bigbang.go:125 +0xa2
  _/Users/_local_user/Documents/project/backendc/main/core_x.ExecSyncRountine()
      /Users/_local_user/Documents/project/backendc/main/core_x/connectionLock.go:60 +0x74

Previous read at 0x00c000f1e3a1 by goroutine 116:
  internal/race.ReadRange()
      /usr/local/Cellar/go/1.11.4/libexec/src/internal/race/race.go:45 +0x42
  syscall.Write()
      /usr/local/Cellar/go/1.11.4/libexec/src/syscall/syscall_unix.go:193 +0xaa
  internal/poll.(*FD).Write()
      /usr/local/Cellar/go/1.11.4/libexec/src/internal/poll/fd_unix.go:268 +0x1d8
  net.(*netFD).Write()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/fd_unix.go:220 +0x65
  net.(*conn).Write()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/net.go:189 +0xa0
  net.(*TCPConn).Write()
      <autogenerated>:1 +0x69
  github.com/gobwas/ws.WriteFrame()
      /Users/_local_user/go/src/github.com/gobwas/ws/write.go:111 +0xd7
  github.com/gobwas/ws/wsutil.writeFrame()
      /Users/_local_user/go/src/github.com/gobwas/ws/wsutil/writer.go:449 +0x1ca
  github.com/gobwas/ws/wsutil.WriteMessage()
      /Users/_local_user/go/src/github.com/gobwas/ws/wsutil/helper.go:161 +0x81
  _/Users/_local_user/Documents/project/backendc/main/temp.(*connection).Write()
      /Users/_local_user/Documents/project/backendc/main/temp/connection.go:359 +0x142
  _/Users/_local_user/Documents/project/backendc/main/temp.(*connection).writeDefault()
      /Users/_local_user/Documents/project/backendc/main/temp/connection.go:371 +0x76
  _/Users/_local_user/Documents/project/backendc/main/temp.(*Server).emitMessage()
      /Users/_local_user/Documents/project/backendc/main/temp/server.go:349 +0x2d2
  _/Users/_local_user/Documents/project/backendc/main/temp.(*emitter).EmitMessage()
      /Users/_local_user/Documents/project/backendc/main/temp/emitter.go:32 +0x127
  _/Users/_local_user/Documents/project/backendc/main/temp.(*emitter).Emit()
      /Users/_local_user/Documents/project/backendc/main/temp/emitter.go:41 +0x10e
  _/Users/_local_user/Documents/project/backendc/main/temp.(*connection).Emit()
      /Users/_local_user/Documents/project/backendc/main/temp/connection.go:581 +0x1c6
  _/Users/_local_user/Documents/project/backendc/main/core_x.StartWebsocket.func1.12.1()
      /Users/_local_user/Documents/project/backendc/main/core_x/ws_core.go:371 +0x73
  _/Users/_local_user/Documents/project/backendc/main/core_x.ExecSyncRountine()
      /Users/_local_user/Documents/project/backendc/main/core_x/connectionLock.go:60 +0x74
  _/Users/_local_user/Documents/project/backendc/main/core_x.StartWebsocket.func1.12()
      /Users/_local_user/Documents/project/backendc/main/core_x/ws_core.go:370 +0x73

Goroutine 694 (running) created at:
  _/Users/_local_user/Documents/project/backendc/main/core_x.loopMonitorBB.func3()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_bigbang.go:124 +0x20d
  _/Users/_local_user/Documents/project/backendc/main/core_x.CoreLoopV3()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_core.go:92 +0x1dc
  _/Users/_local_user/Documents/project/backendc/main/core_x.CoreLoopEngineV2()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_core.go:26 +0x1e3
  _/Users/_local_user/Documents/project/backendc/main/core_x.loopMonitorBB()
      /Users/_local_user/Documents/project/backendc/main/core_x/bd_bigbang.go:12 +0xe0

Goroutine 116 (running) created at:
  _/Users/_local_user/Documents/project/backendc/main/core_x.StartWebsocket.func1()
      /Users/_local_user/Documents/project/backendc/main/core_x/ws_core.go:367 +0xce1
  _/Users/_local_user/Documents/project/backendc/main/temp.(*Server).Handler.func1()
      /Users/_local_user/Documents/project/backendc/main/temp/server.go:97 +0xf4
  github.com/kataras/iris/context.DefaultNext()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:1208 +0x134
  github.com/kataras/iris/context.(*context).Next()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:1217 +0x5b
  _/Users/_local_user/Documents/project/backendc/main/core_x.SetupWebCombineClient.func2()
      /Users/_local_user/Documents/project/backendc/main/core_x/webhost.go:31 +0x117
  github.com/kataras/iris/context.Do()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:922 +0xa5
  github.com/kataras/iris/context.(*context).Do()
      /Users/_local_user/go/src/github.com/kataras/iris/context/context.go:1094 +0x62
  github.com/kataras/iris/core/router.(*routerHandler).HandleRequest()
      /Users/_local_user/go/src/github.com/kataras/iris/core/router/handler.go:227 +0x751
  github.com/kataras/iris/core/router.(*Router).BuildRouter.func1()
      /Users/_local_user/go/src/github.com/kataras/iris/core/router/router.go:84 +0xc4
  github.com/kataras/iris/core/router.(*Router).ServeHTTP()
      /Users/_local_user/go/src/github.com/kataras/iris/core/router/router.go:161 +0x69
  net/http.serverHandler.ServeHTTP()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:2741 +0xc4
  net/http.(*conn).serve()
      /usr/local/Cellar/go/1.11.4/libexec/src/net/http/server.go:1847 +0x80a
==================

12 hours of testing, with 3 data races detected.

@jjhesk (Author) commented Feb 17, 2019

@kataras does this part of the code from websocket2 comply with the rule of using ScheduleTimeout with the connection pool?

func (s *Server) Handler() context.Handler {
	return func(ctx context.Context) {
		c := s.Upgrade(ctx)
		if c.Err() != nil {
			return
		}

		// NOTE TO ME: fire these first BEFORE startReader and startPinger
		// in order to set the events and any messages to send
		// the startPinger will send the OK to the client and only
		// then the client is able to send and receive from Server
		// when all things are ready and only then. DO NOT change this order.

		// fire the on connection event callbacks, if any
		for i := range s.onConnectionListeners {
			s.onConnectionListeners[i](c)
		}

		// start the ping and the messages reader
		c.Wait()
	}
}
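
For comparison, applying that rule would mean dispatching the post-upgrade work through a bounded pool rather than blocking the HTTP handler goroutine; a hypothetical sketch (using a pool like the gopool sketch earlier, not the actual websocket2 code):

func (s *Server) pooledHandler(pool *gopool.Pool) context.Handler {
	return func(ctx context.Context) {
		c := s.Upgrade(ctx)
		if c.Err() != nil {
			return
		}

		// Hand the connection to a free worker; if none frees up within
		// 1ms, shed the connection instead of spawning more goroutines.
		err := pool.ScheduleTimeout(time.Millisecond, func() {
			for i := range s.onConnectionListeners {
				s.onConnectionListeners[i](c)
			}
			c.Wait()
		})
		if err != nil {
			c.Disconnect()
		}
	}
}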

kataras added a commit that referenced this issue Feb 18, 2019
…ng a bit lower level of the new ws lib api and restore the previous sync.Map for server's live connections, relative: #1178
@jjhesk (Author) commented Feb 18, 2019

From the above testing result: e5d0702

@kataras I updated your part and ran another test; I got the detection below:

==================
WARNING: DATA RACE
Write at 0x00c0003a2480 by goroutine 134:
  runtime.slicecopy()
      /root/.go/src/runtime/slice.go:221 +0x0
  _/root/compiledxx/backendc/main/temp.(*messageSerializer).serialize()
      /root/go/src/github.com/valyala/bytebufferpool/bytebuffer.go:73 +0x17b
  _/root/compiledxx/backendc/main/temp.(*emitter).Emit()
      /root/compiledxx/backendc/main/temp/emitter.go:37 +0xa0
  _/root/compiledxx/backendc/main/core_x.AnnounceProfileUpdate.func1()
      /root/compiledxx/backendc/main/core_x/ws_core.go:516 +0x191
  _/root/compiledxx/backendc/main/core_x.ExecSyncRountine()
      /root/compiledxx/backendc/main/core_x/connectionLock.go:60 +0x74

Previous read at 0x00c0003a2480 by goroutine 24:
  internal/race.ReadRange()
      /root/.go/src/internal/race/race.go:45 +0x42
  syscall.Write()
      /root/.go/src/syscall/syscall_unix.go:193 +0xaa
  internal/poll.(*FD).Write()
      /root/.go/src/internal/poll/fd_unix.go:268 +0x1d8
  net.(*netFD).Write()
      /root/.go/src/net/fd_unix.go:220 +0x65
  net.(*conn).Write()
      /root/.go/src/net/net.go:189 +0xa0
  net.(*TCPConn).Write()
      <autogenerated>:1 +0x69
  github.com/gobwas/ws.WriteFrame()
      /root/go/src/github.com/gobwas/ws/write.go:111 +0xd7
  github.com/gobwas/ws/wsutil.writeFrame()
      /root/go/src/github.com/gobwas/ws/wsutil/writer.go:449 +0x1ca
  github.com/gobwas/ws/wsutil.WriteMessage()
      /root/go/src/github.com/gobwas/ws/wsutil/helper.go:161 +0x81
  _/root/compiledxx/backendc/main/temp.(*connection).Write()
      /root/compiledxx/backendc/main/temp/connection.go:377 +0x166
  _/root/compiledxx/backendc/main/temp.(*connection).writeDefault()
      /root/compiledxx/backendc/main/temp/connection.go:389 +0x76
  _/root/compiledxx/backendc/main/temp.(*Server).emitMessage()
      /root/compiledxx/backendc/main/temp/server.go:366 +0x25d
  _/root/compiledxx/backendc/main/temp.(*emitter).EmitMessage()
      /root/compiledxx/backendc/main/temp/emitter.go:32 +0x127
  _/root/compiledxx/backendc/main/temp.(*emitter).Emit()
      /root/compiledxx/backendc/main/temp/emitter.go:41 +0x10e
  _/root/compiledxx/backendc/main/temp.(*connection).Emit()
      /root/compiledxx/backendc/main/temp/connection.go:695 +0x1c6
  _/root/compiledxx/backendc/main/core_x.EmitToAllSubbed()
      /root/compiledxx/backendc/main/core_x/ws_core.go:405 +0x23d
  _/root/compiledxx/backendc/main/core_x.loopMonitorBB.func1()
      /root/compiledxx/backendc/main/core_x/bd_bigbang.go:39 +0x3ed
  _/root/compiledxx/backendc/main/core_x.CoreLoopEngineV2()
      /root/compiledxx/backendc/main/core_x/bd_core.go:31 +0x4fe
  _/root/compiledxx/backendc/main/core_x.loopMonitorBB()
      /root/compiledxx/backendc/main/core_x/bd_bigbang.go:12 +0xe0

Goroutine 134 (running) created at:
  _/root/compiledxx/backendc/main/core_x.AnnounceProfileUpdate()
      /root/compiledxx/backendc/main/core_x/ws_core.go:502 +0x105
  _/root/compiledxx/backendc/main/core_x.loopMonitorBB.func1.2()
      /root/compiledxx/backendc/main/core_x/bd_bigbang.go:36 +0x1ad
  _/root/compiledxx/backendc/main/core_x.(*BBGame).ReconsultAfterExplode.func1()
      /root/compiledxx/backendc/main/core_x/bd_bgb.go:467 +0x4c0
  sync.(*Map).Range()
      /root/.go/src/sync/map.go:337 +0x13c
  _/root/compiledxx/backendc/main/core_x.(*BBGame).ReconsultAfterExplode()
      /root/compiledxx/backendc/main/core_x/bd_bgb.go:429 +0xa5
  _/root/compiledxx/backendc/main/core_x.loopMonitorBB.func1()
      /root/compiledxx/backendc/main/core_x/bd_bigbang.go:34 +0x394
  _/root/compiledxx/backendc/main/core_x.CoreLoopEngineV2()
      /root/compiledxx/backendc/main/core_x/bd_core.go:31 +0x4fe
  _/root/compiledxx/backendc/main/core_x.loopMonitorBB()
      /root/compiledxx/backendc/main/core_x/bd_bigbang.go:12 +0xe0

Goroutine 24 (running) created at:
  _/root/compiledxx/backendc/main/core_x.StartWebsocket()
      /root/compiledxx/backendc/main/core_x/ws_core.go:394 +0x336
  _/root/compiledxx/backendc/main/core_x.SetupWebCombineClient()
      /root/compiledxx/backendc/main/core_x/webhost.go:58 +0x788
  main.startProc()
      /root/compiledxx/backendc/main/main.go:57 +0x103
  main.main()
      /root/compiledxx/backendc/main/main.go:36 +0x57
==================

Connection limit: 54. After switching to sync.Map, performance is slower and there is a cap on max connections, which is 54 in my test case. Once it reaches that number, the server simply times out all later connections.

19:09:21.188 [BOT] success on connection # 54
...

@jjhesk (Author) commented Feb 20, 2019

@kataras are there any ways to protect against connection-reset-by-peer attacks from the clients?

@jjhesk (Author) commented Feb 21, 2019

I will update you with the latest fix on the websocket2 package.

kataras closed this as completed on Jul 23, 2019
github-actions bot pushed a commit to goproxies/github.aaakk.us.kg-kataras-iris that referenced this issue Jul 27, 2020
… commit fixes the kataras#1178 and kataras#1173)

Former-commit-id: 74ccd8f4bf60a71f1eb0e34149a6f19de95a9148
github-actions bot pushed a commit to goproxies/github.aaakk.us.kg-kataras-iris that referenced this issue Jul 27, 2020
…. It implements the gobwas/ws library (it works but need fixes on determinate closing connections) as suggested at: kataras#1178

Former-commit-id: be5ee623b7d030bd9e03a1a2f320ead975ef2ba8
github-actions bot pushed a commit to goproxies/github.aaakk.us.kg-kataras-iris that referenced this issue Jul 27, 2020
…ng a bit lower level of the new ws lib api and restore the previous sync.Map for server's live connections, relative: kataras#1178

Former-commit-id: 40da148afb66a42d47285efce324269d66ed3b0e