Segmentation fault at 1200 paho client connections after updating to the latest version of paho #268

Closed
kushal4 opened this issue May 2, 2017 · 15 comments

Comments

@kushal4

kushal4 commented May 2, 2017

I am getting a segmentation fault when running 1200 connections.
The backtrace in gdb is:
(gdb) bt
#0 0x00007ffff4ca5027 in Socket_getReadySocket () from /usr/local/lib/libpaho-mqtt3a.so
#1 0x00007ffff4c9e350 in MQTTAsync_cycle () from /usr/local/lib/libpaho-mqtt3a.so
#2 0x00007ffff4c9b712 in MQTTAsync_receiveThread () from /usr/local/lib/libpaho-mqtt3a.so
#3 0x00007ffff79c0184 in start_thread (arg=0x7fffeffff700) at pthread_create.c:312
#4 0x00007ffff679a37d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

@kushal4
Author

kushal4 commented May 2, 2017

I have updated my client to the paho develop branch's code.

@kushal4
Author

kushal4 commented May 2, 2017

I also sometimes get a segmentation fault at 1100 connections, with this gdb backtrace:
(gdb) bt
#0 0x00007ffff4c9e352 in MQTTAsync_cycle () from /usr/local/lib/libpaho-mqtt3a.so
Cannot access memory at address 0x6e7fbde8

@icraggs
Contributor

icraggs commented May 2, 2017

1200 simultaneous connections? The maximum is about 1024.

@kushal4
Author

kushal4 commented May 2, 2017

Yes.

@kushal4
Author

kushal4 commented May 2, 2017

I open 1200 simultaneous connections here.

@icraggs
Contributor

icraggs commented May 2, 2017

Well, I think 1200 won't work - the maximum for select() is 1024 (FD_SETSIZE).
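
For illustration, a minimal C sketch of that limit; on Linux with glibc FD_SETSIZE is fixed at 1024, and calling FD_SET() on a descriptor at or above it writes past the end of the fd_set, which would be consistent with the crash reported above:

    #include <stdio.h>
    #include <sys/select.h>

    int main(void)
    {
        /* FD_SETSIZE bounds the descriptors an fd_set can hold; on Linux
           with glibc it is 1024. FD_SET() on a descriptor >= FD_SETSIZE
           is undefined behaviour, a plausible cause of the segfault. */
        printf("FD_SETSIZE = %d\n", FD_SETSIZE);
        return 0;
    }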

@kushal4
Author

kushal4 commented May 2, 2017

OK, thanks.

@kushal4
Author

kushal4 commented May 2, 2017

Hi,
But the library should deliver an error message when more than 1024 sockets are opened simultaneously. I think that is not happening in this case.

@icraggs
Contributor

icraggs commented May 3, 2017

I tried this on my machine, and the first client that failed caused the connect failure callback to be called. In my case, tcpd denied any connections beyond 1018.

@icraggs
Contributor

icraggs commented May 3, 2017

The connect failure callback is how the library indicates that the connect failed.
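
A minimal sketch of registering that callback with the paho MQTT C async API; the broker URI and client ID are placeholders:

    #include <stdio.h>
    #include "MQTTAsync.h"

    void onConnectFailure(void* context, MQTTAsync_failureData* response)
    {
        /* Called when the connect fails, e.g. when the socket limit is hit. */
        printf("Connect failed, rc %d\n", response ? response->code : -1);
    }

    void onConnectSuccess(void* context, MQTTAsync_successData* response)
    {
        printf("Connected\n");
    }

    int main(void)
    {
        MQTTAsync client;
        MQTTAsync_connectOptions conn_opts = MQTTAsync_connectOptions_initializer;

        MQTTAsync_create(&client, "tcp://localhost:1883", "client-0",
                         MQTTCLIENT_PERSISTENCE_NONE, NULL);
        conn_opts.onSuccess = onConnectSuccess;
        conn_opts.onFailure = onConnectFailure;  /* failures are reported here */
        conn_opts.context = client;

        int rc = MQTTAsync_connect(client, &conn_opts);
        if (rc != MQTTASYNC_SUCCESS)
            printf("MQTTAsync_connect could not start, rc %d\n", rc);

        getchar();  /* keep the process alive while the callbacks fire */
        MQTTAsync_destroy(&client);
        return 0;
    }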

@kushal4
Author

kushal4 commented May 3, 2017

OK, thanks, I will give it a try then :)

@kushal4 kushal4 closed this as completed May 3, 2017
@kushal4 kushal4 reopened this May 4, 2017
@kushal4
Author

kushal4 commented May 4, 2017

Hi,
I have increased the ulimit to 650000. Is there any provision by which I can change FD_SETSIZE to that amount? I can now easily open that many connections.

@kushal4
Author

kushal4 commented May 4, 2017

I have set the connect failure callback too, but I am still getting the segfault in this case, with the same gdb backtrace.

@icraggs
Contributor

icraggs commented May 4, 2017

Well, it depends on what you're doing after the connect fails. The test works for me.

I wouldn't change FD_SETSIZE if I were you; that's just unmanageable, and each process will then slow down due to the number of clients it is handling. I won't fix problems that occur specifically as a result, because this client library is not aimed at performance testing.

I would use separate processes that each contain, say, 500 clients, and ramp up step by step to see how far you can get without reaching 100% CPU, as in the sketch below.

If you are maxing out the CPU at any point in all of this then the behaviour/timing is likely to be unpredictable.
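
A minimal sketch of that multi-process approach; run_clients() is a hypothetical stub standing in for the actual paho client setup, and the process count is a placeholder:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define PROCS            3    /* worker processes (placeholder) */
    #define CLIENTS_PER_PROC 500  /* clients per process, per the advice above */

    /* Hypothetical helper: would create and connect `count` MQTTAsync
       clients. Stubbed here; not part of the paho library. */
    static void run_clients(int count)
    {
        printf("pid %d: would run %d clients\n", (int)getpid(), count);
        sleep(1);
    }

    int main(void)
    {
        for (int i = 0; i < PROCS; i++)
        {
            pid_t pid = fork();
            if (pid == 0)             /* child: runs its own batch of clients */
            {
                run_clients(CLIENTS_PER_PROC);
                _exit(0);
            }
            else if (pid < 0)
            {
                perror("fork");
                exit(EXIT_FAILURE);
            }
        }
        while (wait(NULL) > 0)        /* parent: wait for all workers */
            ;
        return 0;
    }

Keeping each process at 500 clients stays well under the 1024-descriptor select() limit per process.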

@kushal4
Author

kushal4 commented May 4, 2017

OK, I am doing it that way :)

@kushal4 kushal4 closed this as completed May 4, 2017