|Deepan Chakravarthy||Apr 7, 2009 7:44 am|
|Anoop Alias||Apr 7, 2009 7:59 am|
|Jérôme Loyet||Apr 7, 2009 8:07 am|
|Kon Wilms||Apr 7, 2009 9:06 am|
|Artis Caune||Apr 8, 2009 12:46 am|
|Igor Sysoev||Apr 8, 2009 1:09 am|
|Anıl Çetin||Apr 8, 2009 7:56 am|
|Maxim Dounin||Apr 8, 2009 8:40 am|
|Anıl Çetin||Apr 8, 2009 9:51 am|
|Peter Langhans||Apr 8, 2009 10:10 am|
|Anıl Çetin||Apr 9, 2009 2:16 am|
|Subject:||Re: lots of connections on TIME_WAIT state|
|From:||Peter Langhans (pete...@incipience.co.uk)|
|Date:||Apr 8, 2009 10:10:02 am|
On Wed, Apr 8, 2009 at 5:51 PM, Anıl Çetin <an...@saog.net> wrote:
Thanks for the answer Maxim, but it really is "out of sockets": OpenVZ has a resource limit for open TCP sockets (numtcpsock), and when the limit is reached there are thousands (roughly ten thousand) of open connections while there are only 2000-3000 clients. Apache keepalive is off as well. I have read about the TIME_WAIT state, and I know the connection is reusable, but why are so many sockets open? As far as I know, since nginx is a proxy it opens 2 connections per client (one to the client, one to Apache), doesn't it? So there should be 4000-6000 TCP sockets, not 10000-15000.
Isn't "/proc/sys/net/ipv4/tcp_tw_recycle" turned on by default in Linux? This could be my problem; I will check and try it. Maybe it isn't reusing sockets in the TIME_WAIT state, and the timeout for connections in that state is very long, so it keeps opening new connections again and again without releasing any of the previously opened sockets.
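[Editor's note: a quick way to verify claims like the socket counts above is to tally the state column of netstat output. A minimal sketch; the sample lines below are made up stand-ins, and on a live box you would pipe `netstat -ant` (or `ss -ant`) into the awk command instead:]

```shell
# Tally sockets by TCP state (6th column of `netstat -ant` output).
# The sample data here is hypothetical; replace the printf with the
# real command on an actual server.
sample='tcp 0 0 127.0.0.1:9000 127.0.0.1:45603 TIME_WAIT
tcp 0 0 127.0.0.1:9000 127.0.0.1:45601 TIME_WAIT
tcp 0 0 127.0.0.1:80 127.0.0.1:45599 ESTABLISHED'

printf '%s\n' "$sample" |
    awk '{ count[$6]++ } END { for (s in count) print s, count[s] }' |
    sort
# prints:
# ESTABLISHED 1
# TIME_WAIT 2
```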
Maxim Dounin wrote:
On Wed, Apr 08, 2009 at 05:56:51PM +0300, Anıl Çetin wrote:
So, what is the solution? I have exactly the same problem: my nginx runs in a virtual server (OpenVZ) as a proxy in front of Apache, and often (after 2k-3k requests) the server runs "out of sockets" even though I raised the allowed number of sockets to a very large value.
You are probably "out of ports", not out of sockets. The solution is to configure TIME_WAIT reuse (tw_reuse, tw_recycle or something similar, depending on your OS). You may also allow your system to use more ports for outgoing connections.
Under FreeBSD, reuse of TIME_WAIT sockets is the default, and the port range for outgoing connections may be tuned via the net.inet.ip.portrange.hifirst and net.inet.ip.portrange.hilast sysctls.
Not sure about Linux, but Google suggests reuse of TIME_WAIT sockets may be turned on via /proc/sys/net/ipv4/tcp_tw_recycle.
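[Editor's note: a hedged sketch of the tunables discussed above, with illustrative values rather than recommendations. One caveat worth adding with hindsight: tcp_tw_recycle is known to break clients behind NAT and was removed entirely in Linux kernel 4.12; tcp_tw_reuse is the safer of the two for outgoing connections:]

```shell
# Linux: inspect and (cautiously) tune TIME_WAIT handling.
# WARNING: tcp_tw_recycle breaks NAT'd clients and no longer exists
# in kernels >= 4.12; prefer tcp_tw_reuse for outgoing connections.
cat /proc/sys/net/ipv4/tcp_tw_recycle        # 0 = off (the default)
sysctl -w net.ipv4.tcp_tw_reuse=1

# Linux: widen the local port range used for outgoing connections.
sysctl -w net.ipv4.ip_local_port_range="1024 65535"

# FreeBSD: TIME_WAIT reuse is already the default; widen the
# high port range used for outgoing connections (illustrative values).
sysctl net.inet.ip.portrange.hifirst=1024
sysctl net.inet.ip.portrange.hilast=65535
```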
Igor Sysoev wrote:
On Wed, Apr 08, 2009 at 10:47:16AM +0300, Artis Caune wrote:
Hi, I am using nginx with FastCGI. When I run $ netstat -np | grep 127.0.0.1:9000 I find a lot of connections in the TIME_WAIT state. Is this because of a high keepalive_timeout value? When many people use it (5 requests per second), nginx takes more time to respond. System load goes above 10 during peak hours.
This is because of how TCP works.
debian:~# netstat -np | grep 127.0.0.1:9000
tcp        0      0 127.0.0.1:9000      127.0.0.1:45603     TIME_WAIT   -
tcp        0      0 127.0.0.1:9000      127.0.0.1:45601     TIME_WAIT   -
If you were on FreeBSD, you could disable TIME_WAIT on loopback completely by setting net.inet.tcp.nolocaltimewait=1.
Due to its incorrect implementation, this remedy is worse than the disease: net.inet.tcp.nolocaltimewait relies on unlimited RST delivery, so if there are too many RSTs they will be rate-limited by net.inet.icmp.icmplim, and you will end up with a lot of sockets in the LAST_ACK state on the server side instead of a lot of sockets in TIME_WAIT on the client side.