
Limited download speed on MacOS Mojave #428

Closed
jeduardo opened this issue Feb 25, 2019 · 81 comments · Fixed by #820

Comments

@jeduardo

When I download files from a server behind the VPN using openfortivpn, my download speed is capped at around 2.5 MB/s. When I perform the same download with the official Fortinet client, I can max out my connection speed (around 10 MB/s).

Any ideas what might be happening there? The problem is reproducible on different MacOS laptops when downloading the same files.

@jeduardo jeduardo changed the title Limited downloads on MacOS Mojave Limited download speed on MacOS Mojave Feb 25, 2019
@DimitriPapadopoulos
Collaborator

Does the native Fortinet client use IPsec or SSL VPN?

@jeduardo
Author

Hey @DimitriPapadopoulos. Both openfortivpn and the native Fortinet client are using SSL VPN for these connections.

@DimitriPapadopoulos
Collaborator

DimitriPapadopoulos commented Feb 26, 2019

Ah, right. I came across mentions of IPsec VPN with the native Mac OS client, but that probably refers to the default VPN client built into Mac OS, not the Fortinet client.

@mrbaseman
Collaborator

Maybe it's better optimization, or use of hardware-supported encryption/decryption? If that's the explanation, a follow-up question would be how we can improve the compiler flags that autoconf comes up with. Maybe we can suggest a few options that it would try if they are supported...

Thread parallelization could be an issue, but openfortivpn is already multithreaded. Doing more than one read thread and one write thread would make things very complicated, especially since one would have to know what the remote side does ;) Maybe locking is a problem. There are a few hacks around semaphores in the Mac OS X code; maybe those are not quite optimal.

Maybe we have packet fragmentation. Does the native client on macOS use a different MTU?
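A rough sketch of how one could compare the two clients' MTUs and check for fragmentation; the interface name ppp0 and the macOS ping -D flag are assumptions here, not something either client documents:

```shell
# MTU comparison sketch. While each client's tunnel is up, note its MTU
# ("ppp0" is assumed to be the tunnel interface):
#   ifconfig ppp0 | grep -o 'mtu [0-9]*'

# The largest ICMP payload that fits into one packet is the MTU minus
# the 20-byte IP header and the 8-byte ICMP header:
mtu=1354
payload=$((mtu - 20 - 8))
echo "max unfragmented ICMP payload for MTU $mtu: $payload bytes"

# On macOS, ping -D sets the don't-fragment bit, so a payload one byte
# larger should fail if the MTU really is $mtu:
#   ping -D -s "$payload" <host_behind_the_vpn>
```

If the native client negotiates a larger MTU, that alone could explain part of the throughput gap.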

@DimitriPapadopoulos
Collaborator

@mrbaseman Wouldn't hardware-supported encryption/decryption depend on the underlying SSL library?

About MTU, see for example:

@mrbaseman
Collaborator

@DimitriPapadopoulos Oh, yes, you are right. If it's an optimization issue of that kind, one would have to put some effort into the SSL library. But I would be astonished if the openssl from MacPorts or Homebrew didn't use hardware acceleration when it's available.

@DimitriPapadopoulos
Collaborator

How to test if OpenSSL is hardware-accelerated on macOS:
Is there any part of OSX that gets a significant speed boost from Intel AES instructions?
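A quick way to test this is to benchmark the same cipher twice with openssl speed, once normally and once with the AES-NI capability bit masked out via the OPENSSL_ia32cap environment variable (an x86-only knob documented by OpenSSL); the mask value below is the commonly cited one and is an assumption about the installed build:

```shell
# Compare the EVP path (hardware-accelerated when available) against
# the same cipher with AES-NI masked out. A large gap (often 4-10x)
# means the hardware instructions are in use.
evp=$(openssl speed -seconds 1 -evp aes-128-cbc 2>/dev/null | tail -n 1)
sw=$(OPENSSL_ia32cap="~0x200000200000000" \
     openssl speed -seconds 1 -evp aes-128-cbc 2>/dev/null | tail -n 1)
echo "with AES-NI (if present): $evp"
echo "software only:            $sw"
```

The same comparison could be run against both the Homebrew and the system library to see which one openfortivpn actually benefits from.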

@JPlanche

JPlanche commented Mar 24, 2019

I also have this problem. There isn't any CPU spike, so could it be a lack of hardware support?
I don't know how it could affect performance, but I saw that the routing table with openfortivpn doesn't look like the one I have with forticlientsslvpn.

@jeduardo Out of curiosity, which version of the Fortinet agent are you using, and do you have similar routing tables with both clients?
I use a little app called forticlientsslvpn, version 4, with copyright 2006-2014.

edit :

My ratio is poorer:
I get 1.7 Mbps vs. 11 Mbps, downloading an 8 MB file.
I made captures and I see a few TCP spurious retransmissions and full TCP windows that I don't see with the official client.
But I don't want to hijack jeduardo's bug report.

@mrbaseman
Collaborator

I've tried to check it on my MacBook but Forticlient doesn't want to connect :-/ So I cannot compare the MTU and MRU settings.
Anyhow, for the connection made with openfortivpn I have noticed that networksetup -getMTU ... only works on physical devices, not on the ppp0 device for the tunnel.

But maybe I have found another hint: I stumbled over the speed value that we pass to pppd. It is set to 38400, but maybe 115200 could give better performance? I'm not sure to what extent this baud rate setting of the serial device has a real influence, and whether pppd and the server side finally agree on that value. But if so, the ratio of the two standard values promises a factor of 3, which would at least roughly match the observed difference in performance.

The --pppd-call command line parameter can be used to pass a script that contains the desired settings. This can be used for testing different pppd parameters without having to re-compile each time.
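As a sketch of what such a file could look like: pppd looks up call files under /etc/ppp/peers/, and the option list below is modeled on the defaults openfortivpn passes to pppd on macOS (see the tunnel.c excerpt quoted later in this thread); only the bare number on the first line, the speed, would be varied between tests. We write to the current directory for illustration; the real file needs root and must live in /etc/ppp/peers/:

```shell
# Hypothetical pppd "call" file for trying other speeds without
# recompiling openfortivpn. A bare decimal number is taken by pppd
# as the desired baud rate.
cat > openfortivpn-test <<'EOF'
115200
:192.0.2.1
noipdefault
noaccomp
noauth
default-asyncmap
nopcomp
receive-all
nodefaultroute
nodetach
lcp-max-configure 40
mru 1354
EOF
wc -l < openfortivpn-test

# Then, once the file has been moved to /etc/ppp/peers/:
#   openfortivpn <host>:<port> --pppd-call=openfortivpn-test
```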

@mrbaseman
Collaborator

I have merged #444 on the current master. Maybe this helps?

@jeduardo
Author

Sorry guys, I'm unable to test it anymore as I no longer have access to a Fortinet VPN.

@JPlanche

The --pppd-call command line parameter can be used to pass a script that contains the desired settings. This can be used for testing different pppd parameters without having to re-compile each time.

@mrbaseman Could you give me some advice on what I could put in this script?
I tried with speed 115200 but it gives me:

ERROR: pppd: An error was detected in processing the options given, such as two mutually exclusive options being used.

I also tried openfortivpn --pppd-ipparam='speed 115200' but I saw no difference in speed.

I wanted to avoid compilation. :-)

@mrbaseman
Collaborator

@JPlanche now that I have merged the pull request you can simply download the current master and compile that one.

The ipparam is something that pppd takes and hands over to the ip-up/ip-down scripts, which are executed when the tunnel interface is brought up or down.

With --pppd-call you can pass an options file (if your pppd implementation supports that), from which the calling options are read. speed 115200 would go in there, but also all the other options needed to bring pppd up.

But as mentioned before, comparing openfortivpn <= 1.9.0 with the current master should already do the job (if this change has any effect at all).

@JPlanche

JPlanche commented Jun 15, 2019

@JPlanche now that I have merged the pull request you can simply download the current master and compile that one.

@mrbaseman I have these brew formulae installed: automake autoconf [email protected] pkg-config.
I exported the LDFLAGS and CPPFLAGS variables.
I have openssl in the PATH:
$ openssl version
OpenSSL 1.0.2s  28 May 2019
But configure fails on:
checking for libssl >= 0.9.8 libcrypto >= 0.9.8... no
configure: error: Cannot find OpenSSL 0.9.8 or higher.

Sorry, I'm not very used to compilation. :-) I did a search but didn't find any clue...

@mrbaseman
Collaborator

Adrien tagged the 1.10.0 release over the weekend and it has already been picked up by Homebrew.
So a simple brew update should install the new release.

About the configure error that you see: the configure script uses pkg-config to check whether openssl is installed in a reasonably recent version (some enterprise distributions still backport fixes for 0.9.8). But somehow pkg-config fails to find the openssl package. You may need to set
export PKG_CONFIG_PATH="/usr/local/opt/openssl/lib/pkgconfig:$PKG_CONFIG_PATH"
that's at least what my Homebrew recommends when I install openssl, but for some reason I didn't have to do that on my system.
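The full sequence might look like the sketch below; /usr/local/opt/openssl as the Homebrew prefix is an assumption (it is the usual location on Intel Macs):

```shell
# Point pkg-config at Homebrew's OpenSSL before running ./configure.
export PKG_CONFIG_PATH="/usr/local/opt/openssl/lib/pkgconfig:$PKG_CONFIG_PATH"
echo "$PKG_CONFIG_PATH"

# Verify pkg-config now sees the modules configure looks for, then build:
#   pkg-config --modversion libssl libcrypto
#   ./configure && make
```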

@JPlanche

JPlanche commented Jun 17, 2019 via email

@mrbaseman
Collaborator

Hmm... this is still a factor of more than ten slower with openfortivpn, whereas Forticlient nearly reaches the speed that you see without VPN. So we still have an issue here. Unfortunately, my Forticlient doesn't want to connect for some reason, so I can't compare, but I can see the limit of about 2.5 MB/s that was reported by the original author of this issue.

@mrbaseman
Collaborator

On my Linux laptop I can reach around 40 MB/s, so it's either related to old hardware (which doesn't yet have encryption support in the CPU) or to the OS type. BTW, I have tested with scp of a 130 MB file over Gigabit Ethernet. Without VPN (but routed through a small Fortigate) the file is transferred in 1 second, so a speed of 130 MB/s has to be taken with a grain of salt. Anyhow, it all performs much better than on the old Mac that I have.

@mrbaseman
Collaborator

We have gathered some experience with download speed through an SSL VPN connection over the last few days here at work. We have a Fortigate 90D, and there we see around 2.5 MB/s for scp through SSL VPN on Linux. So we have studied data sheets and forum posts and had a look at the configuration.
Two things that are often mentioned as limiting the speed are:

  • a software switch on the Fortigate linked to the interface on which the SSL VPN is connecting (this probably was an issue in older FortiOS versions, but I couldn't reproduce it)
  • DTLS not working - well... DTLS is "TLS over UDP on the corresponding UDP port", but it has to be enabled explicitly in Forticlient and in the configuration of the SSL VPN, and it only works with the commercial client. We haven't implemented it in openfortivpn.

In our speed tests I have seen a high CPU load on the Fortigate every time I started a transfer through the SSL VPN.
IPsec, by the nature of the protocol, offers better performance. With SSL VPN we send TCP packets encapsulated into an encrypted data stream that again goes over a TCP connection, and a lot of weird things can happen.
Well, but IPsec is more complicated to configure, especially when you have customers who are supposed to configure their own client. In that case it should be as easy as possible, and provisioning the configuration to the client also only works with the commercial client, and only when they connect in a different manner, e.g. directly to a wifi that belongs to the "security fabric".
So the only option we have seen is throwing more compute power at the problem, and we have tested a newer model, and a much larger one. Either a similarly sized model of a newer series, or, within a series, a larger model with more cores and perhaps additional ASICs - both can help to address the problem of limited speed.
Comparing openfortivpn against Forticlient, DTLS might have an impact. Maybe Forticlient opens several threads for the download (openfortivpn has one thread for each communication direction and some others for receiving the configuration etc.). Something that may also impact the speed is an antivirus scan on the Fortigate, which may limit the speed of the first download; when the same file is accessed another time, the URL is already cached for some time as harmless content.

@earltedly

I'm experiencing this too, and by a similar factor of slowdown. Let me know if there's any info I can collect to help.

~80 KB/s download with openfortivpn, ~1.8 MB/s with the commercial client

@mrbaseman
Collaborator

Hi @earltedly
As a first step you could double-check the MTU setting on the ppp interface and see if the commercial client uses the same value (unfortunately it refuses to install on my old MacBook).

Another topic that might impact the performance could be the proper choice of the ciphers. Currently, we use the following default:

cipher-list = HIGH:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4

The insecure ones which we exclude probably provide better performance, but that would be a bad choice. Maybe we should exclude some others for performance reasons. OpenSSL has different default settings depending on the version of the library, e.g. for 1.0.2 it is

ALL:!EXPORT:!LOW:!aNULL:!eNULL:!SSLv2

or maybe replacing HIGH by MEDIUM in our default setting, namely:

MEDIUM:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4

could be a good choice. The bottleneck can be on either side. It may be the Mac (where we could test with openssl speed), but it can just as well be the Fortigate - if the client suggests secure ciphers and FortiOS chooses one that is not supported in hardware, then we have a situation where the client settings have a large impact on the system load on the remote side.

So, some performance numbers for different ciphers would be good input.

openssl ciphers DEFAULT gives a long list of known ciphers. Well, one has to find the common set of client and server, but I would bet that this is still a lengthy list.
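As a sketch of how to enumerate the candidates, using the default cipher-list quoted above (the single cipher named in the last comment is only an example, not a recommendation):

```shell
# Expand the current default cipher-list to see what the client offers:
openssl ciphers 'HIGH:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4' | tr ':' '\n' | head

# Count the candidates; the set shared with the Fortigate is a subset:
openssl ciphers 'HIGH:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4' | tr ':' '\n' | wc -l

# A single candidate can then be forced for a timed download, e.g.:
#   openfortivpn <host>:<port> --cipher-list=ECDHE-RSA-AES256-GCM-SHA384
```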

@mrbaseman
Collaborator

mrbaseman commented Sep 26, 2019

I have done some testing myself. On Linux (when connecting to a big Fortigate) I can reach good throughput rates (measured with scp), and indeed I see a dependence on the ciphers:

DHE-RSA-CAMELLIA256-SHA 12.4MB/s
CAMELLIA256-SHA 17.4MB/s
AES256-GCM-SHA384 19.3MB/s
AES256-SHA 19.3MB/s
AES256-SHA256 19.3MB/s
DHE-RSA-AES256-SHA 19.3MB/s
ECDHE-RSA-AES256-GCM-SHA384 19.3MB/s
ECDHE-RSA-AES256-SHA 19.3MB/s
DHE-RSA-AES256-GCM-SHA384 24.9MB/s
DHE-RSA-AES256-SHA256 24.9MB/s
ECDHE-RSA-AES256-SHA384 29.0MB/s

On OS X El Capitan I must admit that I see very poor rates too: by default 2.2 MB/s,
and I have noticed that when openfortivpn is called without the -v option, the terminal is much less busy and the throughput more than doubles, to 4.6 MB/s.
I haven't seen much performance improvement when I specify one of the ciphers that look promising on Linux; some of them don't even work. Well, it's a slightly different openssl version, probably configured differently, and also older hardware...

@zez3

zez3 commented Oct 1, 2019

I just needed to say that SHA1, CAMELLIA and RSA should not be used, because they are not safe anymore.
I am glad that openfortivpn works with higher SHA integrity, because at the moment (6.2.1) neither the Windows nor the macOS official FortiClient is able to negotiate that. Shame on them.
Worse, there is no DTLS support for the macOS version. 😠
A DTLS implementation in openfortivpn would be a very desirable addition, and I, for one, on behalf of the institution that I represent, would/could partially finance that.

@mrbaseman
Collaborator

Well, OpenSSL classifies the ciphers, and some really bad ones are not activated anymore at compile time. The ones mentioned above are currently in MEDIUM, I think (but it also depends on the version you use; the 1.0.2 LTS release is approaching EOL now).

Thanks for your offer of financial support for the DTLS implementation. Unfortunately, we are a very small team of volunteers here on the project, so time for looking into new topics is a rather scarce resource. Anyhow, if any volunteer comes up with a pull request, we are happy to review and test it.

And well... maybe it's not even that much work to implement, because as mentioned here OpenSSL already supports DTLS, so it may just be a question of finding out whether the server supports it too, and switching it on if possible.

@DimitriPapadopoulos
Collaborator

DimitriPapadopoulos commented Oct 3, 2019

Hopefully this can help anyone interested in DTLS:

@javerous

javerous commented Dec 3, 2019

Hi! Just a quick note (I have the same problem).

I replaced the OpenSSL code with Apple's macOS Security framework (Secure Transport: SSLCreateContext etc.): I get the exact same result, so the slowness doesn't seem to be related to SSL…

I also played a bit with pppd settings, but I don't know it well, and I always get the same results.

@DimitriPapadopoulos
Collaborator

I have noticed this part of the code:

openfortivpn/src/io.c

Lines 613 to 627 in cfcc420

/* I noticed that using TCP_NODELAY (i.e. disabling Nagle's algorithm)
 * gives much better performance. Probably because setting up the VPN
 * is sending and receiving many small packets.
 * A small benchmark gave these results:
 * - with TCP_NODELAY: ~ 4000 kbit/s
 * - without TCP_NODELAY: ~ 1200 kbit/s
 * - forticlientsslvpn from Fortinet: ~ 3700 kbit/s
 * - openfortivpn, Python version: ~ 2000 kbit/s
 *   (with or without TCP_NODELAY)
 */
if (setsockopt(tunnel->ssl_socket, IPPROTO_TCP, TCP_NODELAY,
               (const char *) &tcp_nodelay_flag, sizeof(int))) {
	log_error("setsockopt: %s\n", strerror(errno));
	goto err_sockopt;
}

Please bear in mind I know close to nothing about network performance; I've just read a couple of online articles:

TCP_NODELAY is supposed to be efficient especially for large downloads. Yet it might be worth investigating performance with/without TCP_NODELAY or TCP_NOPUSH on macOS.

@DimitriPapadopoulos
Collaborator

I have also read this online article:

I know openfortivpn links with the OpenSSL library from Homebrew, not the LibreSSL library from Apple. I also understand that the above article refers to the LibreSSL library from Apple. Therefore the speed bump between macOS 10.14.4 and 10.14.5 is probably not relevant in our case. Nevertheless it would be worth:

  • reporting the exact version of macOS on machines with download speed issues,
  • benchmarking openssl on machines with download speed issues - both the Homebrew and Apple versions, after making sure the cipher is identical between FortiClient and openfortivpn, as suggested by @zez3.

@zez3

zez3 commented May 5, 2020

It took me some time to gather this, but here it is:
I did a lot more tests comparing openfortivpn with the new official Linux FortiClient EMS with SSL VPN implementation (there is no free version yet; you need a TAC account).
All tests have been done with an FGT 3960E in production. One slight improvement that I saw came after I had to restart the sslvpnd daemon, because httpsd had also crashed with a segfault.

I always used iperf with different run times (10, 20, 40 seconds) and chose the max speed. I performed the tests a few times, at different hours, over the course of 3 days.

On Linux I monitored the /etc/ppp directory to see who is using it, and it seems that the FCT does not use pppd. Most probably they wrote their own pppd/HDLC flavor. The same goes for the Mac.

On Linux they use libgcrypt:

$ldd /opt/forticlient/fortivpn
	linux-vdso.so.1 =>  (0x00007fff37dd1000)
	libsecret-1.so.0 => /usr/lib/x86_64-linux-gnu/libsecret-1.so.0 (0x00007fa0dfb06000)
	libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007fa0df7f5000)
	libanl.so.1 => /lib/x86_64-linux-gnu/libanl.so.1 (0x00007fa0df5f1000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fa0df3ed000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fa0df1d0000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa0dee06000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fa0dfd55000)
	libgcrypt.so.20 => /lib/x86_64-linux-gnu/libgcrypt.so.20 (0x00007fa0deb25000)
	libgio-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0 (0x00007fa0de79d000)
	libgobject-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0 (0x00007fa0de54a000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fa0de2da000)
	libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007fa0de0c6000)
	libgmodule-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgmodule-2.0.so.0 (0x00007fa0ddec2000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fa0ddca8000)
	libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007fa0dda86000)
	libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007fa0dd86b000)
	libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x00007fa0dd663000)
$ readelf -d /opt/forticlient/fortivpn | grep 'NEEDED'
 0x0000000000000001 (NEEDED)             Shared library: [libsecret-1.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libglib-2.0.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libanl.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libdl.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libpthread.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [ld-linux-x86-64.so.2]

Speed tests:
on EOL Ubuntu16.04.6
with official Linux FCT 6.2.4 I always got
Negotiated Cipher Suite: TLS_AES_256_GCM_SHA384 (0x1302)
libgcrypt20 Version 1.6.5-2ubuntu0.6
myuser@UbuVirt:~$ iperf -c myserver -P 10 -m | tail -n 2
[ 9] MSS size 1348 bytes (MTU 1388 bytes, unknown interface)
[SUM] 0.0-10.1 sec 371 MBytes 360 Mbits/sec

vpn Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:172.x.x.x P-t-P:x.x.x.x Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1400 Metric:1
RX packets:88030 errors:0 dropped:0 overruns:0 frame:0
TX packets:251558 errors:0 dropped:573 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:4672048 (4.6 MB) TX bytes:351917510 (351.9 MB)

with openfortivpn 1.3.0
Negotiated Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028)
OpenSSL 1.0.2g 1 Mar 2016 (I tried to force negotiation with --cipher-list=TLS_AES_256_GCM_SHA384, but with the default old openssl version this is not possible)
ppp0 Link encap:Point-to-Point Protocol
inet addr:172.x.x.x P-t-P:1.1.1.1 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1354 Metric:1
RX packets:211482 errors:0 dropped:0 overruns:0 frame:0
TX packets:451684 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:3
RX bytes:11069791 (11.0 MB) TX bytes:611136178 (611.1 MB)
myuser@UbuVirt:~$ iperf -c myserver -P 10 -m | tail -n 2
[ 8] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[SUM] 0.0-10.3 sec 158 MBytes 172 Mbits/sec

So, about half the speed with the old openssl and old openfortivpn.

Next, on Debian Kali-rolling with openfortivpn 1.13.3:
Negotiated Cipher Suite: TLS_AES_256_GCM_SHA384 (0x1302)
OpenSSL 1.1.1g 21 Apr 2020 (I tried to force a downgrade using --cipher-list=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, but this openssl version has dropped CBC, which is no longer considered secure)

ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1354
inet 172.x.x.x netmask 255.255.255.255 destination 192.0.2.1
ppp txqueuelen 3 (Point-to-Point Protocol)
RX packets 180797 bytes 12531750 (11.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 593697 bytes 801637662 (764.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

root@kali:~# iperf -c myserv -P 10 -m | tail -n 2
[ 7] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[SUM] 0.0-10.4 sec 364 MBytes 293 Mbits/sec

with official Linux FCT 6.2.4 I got again
Negotiated Cipher Suite: TLS_AES_256_GCM_SHA384 (0x1302)
libgcrypt20:amd64 1.8.5-5

vpn: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1400
inet 172.x.x.x netmask 255.255.255.255 destination 172.x.x.x
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 318104 bytes 19857725 (18.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1339379 bytes 1872842555 (1.7 GiB)
TX errors 0 dropped 6751 overruns 0 carrier 0 collisions 0

root@kali:~# iperf -c myserv -P 10 -m | tail -n 2
[ 4] MSS size 1348 bytes (MTU 1388 bytes, unknown interface)
[SUM] 0.0-10.1 sec 578 MBytes 478 Mbits/sec
and best case I got ~500Mbps

I also did some tests on a Scientific Linux release 7.8 (Red Hat based distro):
libgcrypt.x86_64 1.5.3-14.el7
OpenSSL 1.0.2k-fips 26 Jan 2017
forticlient-6.2.6.0356-1.el7.centos.x86_64 (the server/CLI version, because the GUI version did not work)
openfortivpn-1.13.3-1.el7.x86_64
with pretty much the same values: ~280 Mbits/sec, and with the FCT ~500 Mbps.

All my test VMs had 4 CPUs assigned and the NIC in bridge mode, running on the same host machine.
Without any VPN connected, iperf was close to my 1 Gbps NIC speed.
I know I am not comparing apples with apples here, these being different libraries (openssl vs. libgcrypt) and different versions, but I guess old library versions do have some impact on speed.
I still have to do some app profiling on Linux, but KCachegrind/QCachegrind+valgrind looks a bit non-intuitive compared to the Xcode Instruments thing. Or I just need more time to learn how to use it.

On my physical iMac (2013 model) I get pretty much the same speed with both VPN clients.
MacOS Catalina 10.15.4 (19E287)

FCT 6.2.6.737
Interestingly enough, on the Mac they chose to use OpenSSL:

$ otool -L /Library/Application\ Support/Fortinet/FortiClient/bin/sslvpnd 
/Library/Application Support/Fortinet/FortiClient/bin/sslvpnd:
	/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 800.7.0)
	/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit (compatibility version 1.0.0, current version 275.0.0)
	/usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
	/usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.11)
	/Library/Application Support/Fortinet/FortiClient/bin/libcrypto.1.1.dylib (compatibility version 1.1.0, current version 1.1.0)
	/Library/Application Support/Fortinet/FortiClient/bin/libssl.1.1.dylib (compatibility version 1.1.0, current version 1.1.0)
	/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1673.126.0)
	/System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration (compatibility version 1.0.0, current version 1061.40.2)
	/System/Library/Frameworks/Carbon.framework/Versions/A/Carbon (compatibility version 2.0.0, current version 162.0.0)
	/System/Library/Frameworks/Security.framework/Versions/A/Security (compatibility version 1.0.0, current version 59306.41.2)
	/System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa (compatibility version 1.0.0, current version 23.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1281.0.0)
	/System/Library/Frameworks/CFNetwork.framework/Versions/A/CFNetwork (compatibility version 1.0.0, current version 0.0.0)
	/System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices (compatibility version 1.0.0, current version 1069.11.0)
	/System/Library/Frameworks/Foundation.framework/Versions/C/Foundation (compatibility version 300.0.0, current version 1673.126.0)
	/usr/lib/libobjc.A.dylib (compatibility version 1.0.0, current version 228.0.0)

Let's try to see the version:

mac:~# strings /Library/Application\ Support/Fortinet/FortiClient/bin/libssl.1.1.dylib | grep 1.1
...
OpenSSL 1.1.1b  26 Feb 2019
mac:~# strings /Library/Application\ Support/Fortinet/FortiClient/bin/libcrypto.1.1.dylib |  grep "1.1"
...
OpenSSL 1.1.1b  26 Feb 2019

ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 14
eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
inet 172.x.x.x --> 169.254.38.179 netmask 0xffff0000
state availability: 0 (true)
scheduler: FQ_CODEL
link rate: 230.40 Kbps
qosmarking enabled: no mode: none
low power mode: disabled
multi layer packet logging (mpklog): disabled

imac:$ iperf -c myserv -P 10 -m -t 20 | tail -n 2
[ 12] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[SUM] 0.0-20.1 sec 792 MBytes 369 Mbits/sec

FCT_xcode_Instruments.txt

openfortivpn 1.13.2 using openssl 1.0.1t 3 May 2016
imac$ ifconfig -v ppp0
ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 14
eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
inet 172.x.x.x --> 192.0.2.1 netmask 0xffff0000
state availability: 0 (true)
scheduler: FQ_CODEL
link rate: 115.20 Kbps
qosmarking enabled: no mode: none
low power mode: disabled
multi layer packet logging (mpklog): disabled

$ iperf -c 130.92.9.40 -P 10 -t 40 -m | tail -n 2
[ 12] MSS size 1302 bytes (MTU 1342 bytes, unknown interface)
[SUM] 0.0-40.0 sec 1.61 GBytes 346 Mbits/sec


openfortivpn_xcode_instruments.txt

From my observations there is not a big difference here on my Catalina. Perhaps it was the case with older versions, as @DimitriPapadopoulos said here: #428 (comment)
Both VPN clients use HDLC framing, which I don't even know is needed.
Can we not drop this L2 encapsulation altogether and point the VPN routes behind a different loopback-like/virtual Ethernet interface?
Like so:
https://stackoverflow.com/questions/87442/virtual-network-interface-in-mac-os-x

@Haarolean

This issue affects me and a bunch of my colleagues who use openfortivpn. Speeds are extremely slow, like 200 kb/s.

@zez3

zez3 commented Dec 5, 2020

This issue affects me and a bunch of my colleagues who use openfortivpn. Speeds are extremely slow, like 200 kb/s.

What you could try is to run some tests directly from the FGT.
See:
https://weberblog.net/iperf3-on-a-fortigate/

@Haarolean

This issue affects me and a bunch of my colleagues who use openfortivpn. Speeds are extremely slow, like 200 kb/s.

What you could try is to run some tests directly from the FGT.
See:
https://weberblog.net/iperf3-on-a-fortigate/

That's a difficult task, since I'm just a user and have no access to Forti hardware at all. Getting approval to run iperf there would be nontrivial, and it wouldn't solve anything anyway, just prove the issue. There are some serious issues with openfortivpn that are not present in the official client.

@DimitriPapadopoulos
Collaborator

DimitriPapadopoulos commented Dec 7, 2020

Indeed, these are known issues. We just need someone with a Mac to address them - or at least find a possible explanation for these speed issues.

@wiremangr

Please check the file tunnel.c, lines 233 to 247.
I have changed line 235, which defines the ppp speed, to "20000000" instead of "115200", and recompiled.
The difference is noticeable in the VPN during RDP sessions, as the system responds much faster.
Maybe it has something to do with the internal workings of macOS packet queuing.

I have noticed that the official Forticlient, while connected on macOS Catalina,
with ifconfig -v ppp0 gives a link rate of 230.40 Kbps.

After the change I get a rate of 20.00 Mbps with ifconfig, and the response of the remote system is much better.
Below is the code segment I tried, with the change in speed, in the file tunnel.c:

static const char *const v[] = {
	ppp_path,
	//"115200", // speed
	"20000000",
	":192.0.2.1", // <local_IP_address>:<remote_IP_address>
	"noipdefault",
	"noaccomp",
	"noauth",
	"default-asyncmap",
	"nopcomp",
	"receive-all",
	"nodefaultroute",
	"nodetach",
	"lcp-max-configure", "40",
	"mru", "1354"
};

Testing was done with macOS Catalina 10.15.7 on a 50 Mbps DSL line.

@wiremangr

An update to the previous post, with results from a download of a 600 MB file from a remote system with sftp via openfortivpn:

Using speed 115200, the sftp download rate is around 642.2 KB/s.
Using speed 20000000, the sftp download maxed out my DSL line at 5.2 MB/s.

Testing was done between the same local and remote systems and the same VPN gateway.

@DimitriPapadopoulos
Collaborator

DimitriPapadopoulos commented Dec 18, 2020

Strange. I'm certain @mrbaseman had already tried changing the speed option in the past, without effect.

By the way, we cannot change this speed option in the same way on Linux because, as written in the man page:

An option that is a decimal number is taken as the desired baud rate for the serial device. On systems such as 4.4BSD and NetBSD, any speed can be specified. Other systems (e.g. Linux, SunOS) only support the commonly-used baud rates.

Therefore, we will guard this change with #ifdef. But then, what happens if you remove the speed option altogether on macOS?

@wiremangr

If I remove the speed option, the tunnel cannot be established:
pppd exits with unrecognized option '' and openfortivpn prints WARN: read returned 0 until it is stopped.
It seems that defining speed is mandatory.

I have run some more tests today with different speeds.
It seems that setting the speed to 230400 or higher greatly improves RDP responsiveness and download speeds.
It may help to raise the default setting from 115200 to 230400 or 460800, if that is also compatible with other systems.

Today's test results with different speeds:

ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 13
eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
inet 10.x.x.x --> 192.0.2.1 netmask 0xffffff00
state availability: 0 (true)
scheduler: FQ_CODEL
link rate: 3.07 Mbps
qosmarking enabled: no mode: none
low power mode: disabled
multi layer packet logging (mpklog): disabled

openfortivpn 3072000 -> 5.1MB/s to 5.5MB/s sftp download

ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 13
eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
inet 10.x.x.x --> 192.0.2.1 netmask 0xffffff00
state availability: 0 (true)
scheduler: FQ_CODEL
link rate: 460.80 Kbps
qosmarking enabled: no mode: none
low power mode: disabled
multi layer packet logging (mpklog): disabled

openfortivpn 460800 -> 5.1MB/s to 5.5MB/s sftp download

ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 13
eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
inet 10.x.x.x --> 192.0.2.1 netmask 0xffffff00
state availability: 0 (true)
scheduler: FQ_CODEL
link rate: 230.40 Kbps
qosmarking enabled: no mode: none
low power mode: disabled
multi layer packet logging (mpklog): disabled

openfortivpn 230400 -> 5.1MB/s to 5.5MB/s sftp download

ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1354 index 13
eflags=1002080<TXSTART,NOAUTOIPV6LL,ECN_ENABLE>
inet 10.x.x.x --> 192.0.2.1 netmask 0xffffff00
state availability: 0 (true)
scheduler: FQ_CODEL
link rate: 115.20 Kbps
qosmarking enabled: no mode: none
low power mode: disabled
multi layer packet logging (mpklog): disabled

openfortivpn 115200 -> 622.5KB/s to 640KB/s sftp download

@DimitriPapadopoulos
Copy link
Collaborator

DimitriPapadopoulos commented Dec 18, 2020

OK, I do seem to recall the speed option isn't really optional. It has to be there.

On Linux I thought the highest available baud rate for consoles is 115200, but higher baud rates are defined for other serial devices, listed in <asm-generic/termbits.h>. For example on CentOS 6:

[...]
#define  B50    0000001
#define  B75    0000002
#define  B110   0000003
#define  B134   0000004
#define  B150   0000005
#define  B200   0000006
#define  B300   0000007
#define  B600   0000010
#define  B1200  0000011
#define  B1800  0000012
#define  B2400  0000013
#define  B4800  0000014
#define  B9600  0000015
#define  B19200 0000016
#define  B38400 0000017
[...]
#define    B57600 0010001
#define   B115200 0010002
#define   B230400 0010003
#define   B460800 0010004
#define   B500000 0010005
#define   B576000 0010006
#define   B921600 0010007
#define  B1000000 0010010
#define  B1152000 0010011
#define  B1500000 0010012
#define  B2000000 0010013
#define  B2500000 0010014
#define  B3000000 0010015
#define  B3500000 0010016
#define  B4000000 0010017
[...]

The baud rate passed to pppd does not seem to be taken into account on Linux, or at least it does not limit the speed. We could perhaps use 20000000, except I believe pppd expects only predefined values such as 9600, 19200, 38400, 57600, 115200, 230400, 460800. I'll double-check what's acceptable on Linux.

Depending on acceptable values of speed on Linux, we could use 20000000 or even higher values on macOS.

@DimitriPapadopoulos
Copy link
Collaborator

DimitriPapadopoulos commented Dec 18, 2020

It looks like not only 4000000 works, but so does 20000000! At least that's the case with recent Linux distributions. Even 2147483647 or 9223372036854775807 work. I find this disturbing because I do see code that checks valid speeds in pppd:
https://github.com/paulusmack/ppp/blob/ad3937a/pppd/sys-linux.c#L796-L943

But then, if it works, who cares? Perhaps the above code is not in the execution path in the absence of a real serial port.

@DimitriPapadopoulos
Copy link
Collaborator

DimitriPapadopoulos commented Dec 21, 2020

I suggest we use 2147483647 (the value of INT_MAX on 32-bit systems) as pppd does not seem to be enforcing baud rates in this use-case. Indeed, specific baud rates are only enforced in set_up_tty(), which I suspect is not called in this use-case as we shouldn't need to "set up the serial port".

@rkirkpat
Copy link

Was testing out openfortivpn today on my macOS 10.14.6 (Mojave) system and encountered this issue with v1.15.0 installed via Brew. With a 40 Mbps (down) DSL connection, I was only getting about 600 KB/s on an scp over the VPN, compared to nearly 4 MB/s with Fortinet's client. I cloned the git repo for openfortivpn, switched to the v1.15.0 tag, applied the fix proposed above to src/tunnel.c, rebuilt, and re-tested. I was now indeed getting 4 MB/s over the openfortivpn session! So yes, the fix works!

Note, used the v1.15.0 tag as building master resulted in a getsockopt error. I will open another issue about that shortly.

@DimitriPapadopoulos
Copy link
Collaborator

@zez3 Does patch #820 help?

@Haarolean
Copy link

Thank you guys for fixing this! Much appreciated. May I ask, when the next release will be published containing this fix?

@DimitriPapadopoulos
Copy link
Collaborator

@Haarolean Not certain yet about the release. I'd like to see #826 fixed first, which will probably require a few days of work.

Have you been able to test this change (you need to build openfortivpn for that)? If so would you be able to test whether baud rates of 230400, 576000 and 2147483647 result in different speeds?

@Meroje
Copy link

Meroje commented Jan 14, 2021

Hi, I ran tests on macOS 10.15.7; not too sure about the FortiGate model, but it is on a 10G pipe.

baseline

[screenshot: Cloudflare speed test result]

A little lower than expected, I can get up to 890/660 if testing across the city

openfortivpn 1.15.0

[screenshot: Cloudflare speed test result]

openfortivpn HEAD-b123e99

[screenshot: Cloudflare speed test result]

ipsec (using the builtin macos client)

[screenshot: Cloudflare speed test result]

Changing baud rates didn't yield any change to the results.

@Haarolean
Copy link

Haarolean commented Jan 14, 2021

@DimitriPapadopoulos I haven't tested the changes because I wanted to get them from brew first. Being not very familiar with C/C++, I'm not quite sure how to compile it so that it ends up only in the brew directory.
The readme states that I should run ./configure --prefix=/usr/local --sysconfdir=/etc; do I have to replace both options with brew subdirectories, given that brew binaries are installed in /opt/brew instead of /usr/local?

P.S. I could just wait for the release if Meroje's tests are enough for you.

@DimitriPapadopoulos
Copy link
Collaborator

@Meroje Thank you so much for all these tests. Since changing baud rates doesn't yield any change to the results, and since 230400 has been reported elsewhere to be the value used by FortiClient itself, I believe we have the proper fix.

Would you be able to test VPN SSL in addition to VPN IPsec with FortiClient? It has been reported elsewhere that openfortivpn has been on par with FortiClient in VPN SSL mode after this change. Hopefully that will be the case for you too.

@Meroje
Copy link

Meroje commented Jan 14, 2021

It's been a year or two since FortiClient was last usable, which is why we transitioned to openfortivpn before ultimately using IPsec.

[screenshot]

@zez3
Copy link

zez3 commented Feb 28, 2021

https://docs.fortinet.com/document/fortigate/6.0.0/hardware-acceleration/177344/np6-np6xlite-and-np6lite-acceleration

https://docs.fortinet.com/document/fortigate/6.0.0/hardware-acceleration/149012/np6-session-fast-path-requirements

"The traffic that can be offloaded, maximum throughput, and number of network interfaces supported by each varies by processor model:

NP7 supports offloading of most IPv4 and IPv6 traffic, IPsec VPN encryption (including Suite B), SSL VPN encryption, GTP traffic, CAPWAP traffic, VXLAN traffic, and multicast traffic. The NP7 has a maximum throughput of 200 Gbps using 2 x 100 Gbps interfaces. For details about the NP7 processor, see NP7 acceleration and for information about FortiGate models with NP7 processors, see FortiGate NP7 architectures.
NP6 supports offloading of most IPv4 and IPv6 traffic, IPsec VPN encryption, CAPWAP traffic, and multicast traffic."

If your FGT has NP7 it should be able to offload SSL
if you have NP6 then you are stuck with IPSEC offloading only.

This could be the reason behind the speed problem.
Reproduced and confirmed by my test.
Also some smaller models do not have NPUs(ASICs)
https://docs.fortinet.com/document/fortigate/6.2.0/cookbook/661836/vpn-and-asic-offload

IPsec traffic might be processed by the CPU for the following reasons:

Some low end models do not have NPUs.
NPU offloading and CP IPsec traffic processing manually disabled.

Or could have it disabled for whatever reason.

As it seems, SSL VPN traffic is never offloaded or accelerated. It all goes through the CPU and, depending on the crypto suites used in the VPN configuration, it may or may not be accelerated by the CPU's AES-NI instructions.

@DimitriPapadopoulos
Copy link
Collaborator

DimitriPapadopoulos commented Mar 1, 2021

@zez3 Thank you for the above thorough research, next time a user complains I will ask the FortiGate model and the FortiOS version.

That said, the question remains why only macOS users complained, not Linux users. Of course Linux users only have VPN SSL, whether they use openfortivpn or FortiClient, while macOS users have both VPN SSL and IPsec at their disposal. But then the initial report by @jeduardo states that both openfortivpn and FortiClient were using SSL VPN in his case. Perhaps some macOS users mistakenly believed that both openfortivpn and FortiClient were using VPN SSL in their case?

@Haarolean
Copy link

The last version has fixed the issue for me; there's now a stable 10 MB/s connection via HTTP and 5 MB/s via SSH.
Thank you folks very much for this!
