Subject: Re: upstream keepalive - call for testing
From: Matthieu Tourne (matt...@gmail.com)
Date: Aug 16, 2011 4:29:29 pm
I'm still a little confused,
Is the peer selection algorithm guaranteed to never run at the same time in different workers (i.e. can it create race conditions in the keepalive queue)? I see that the round robin code has a bunch of mutex locks, all commented out. On the other hand, nginx_http_upstream_check_module (healthchecks) uses mutexes.
For faster lookups, I was thinking about a hashmap of queues, hashed on sockaddr. This is probably overkill for a small number of keepalive connections, though. I'll send a patch if I get around to implementing it.
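A minimal sketch of that hashmap-of-queues idea (all names below are hypothetical, nothing here is from a posted patch):

#include <ngx_config.h>
#include <ngx_core.h>

/* Hypothetical: a fixed array of buckets, each holding a queue of
 * idle upstream connections whose peers hash into that bucket.
 * Each queue must be ngx_queue_init()'ed at startup. */

#define KEEPALIVE_BUCKETS  61

static ngx_queue_t  buckets[KEEPALIVE_BUCKETS];

static ngx_uint_t
keepalive_bucket(struct sockaddr *sa, socklen_t len)
{
    u_char      *p;
    ngx_uint_t   i, key;

    /* crude: hash the raw sockaddr bytes with nginx's ngx_hash();
     * real code would hash only the family, address and port */
    p = (u_char *) sa;
    key = 0;

    for (i = 0; i < len; i++) {
        key = ngx_hash(key, p[i]);
    }

    return key % KEEPALIVE_BUCKETS;
}

Lookup would then walk a single bucket's queue, comparing the stored sockaddr with ngx_memn2cmp(), instead of scanning one global queue of every cached connection.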
On Fri, Aug 12, 2011 at 3:41 PM, Matthieu Tourne <matt...@gmail.com> wrote:
Thanks for the help Maxim, I'll submit this code if I get around to implementing it.
Also, I think I used the wrong string comparison function in the patch I sent earlier. This one should work as intended in the description.
On Fri, Aug 12, 2011 at 3:27 PM, Maxim Dounin <mdou...@mdounin.ru> wrote:
On Fri, Aug 12, 2011 at 02:11:51PM -0700, Matthieu Tourne wrote:
Also, if I was planning on having a lot of different connections using the upstream keepalive module, would it make sense to convert the queues into rbtrees for faster lookup?
Yes, it may make sense if you are planning to keep lots of connections to lots of different backends (you'll still need queues though, but that's details).
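As a sketch of that layout (hypothetical field names): the tree gives the fast per-peer lookup, and each tree node still owns the queue of its idle connections:

#include <ngx_config.h>
#include <ngx_core.h>

typedef struct {
    ngx_rbtree_node_t   rbnode;     /* rbnode.key = hash of the peer address */
    struct sockaddr    *sockaddr;   /* full key, compared on hash collision */
    socklen_t           socklen;
    ngx_queue_t         cache;      /* idle connections to this one peer */
} keepalive_peer_t;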
On Fri, Aug 12, 2011 at 12:59 PM, Maxim Dounin <mdou...@mdounin.ru> wrote:
On Fri, Aug 12, 2011 at 12:32:26PM -0700, Matthieu Tourne wrote:
I think I have found a small issue: if we're using proxy_pass to an origin that doesn't support keep-alives, the origin will return the HTTP header "Connection: close" and close the connection (TCP FIN). We don't take this into account, and assume there is a keep-alive connection available. The next time the connection is used, it won't be part of a valid TCP stream, and the origin server will send a TCP RST.
Yes, I'm aware of this, thank you. Actually, this is harmless: the upstream keepalive module should detect that the connection was closed while keeping it, and even if it wasn't able to do so, nginx will retry sending the request if sending to a cached connection fails.
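For reference, a sketch of how such detection is typically done: a read event handler on the idle cached connection peeks at the socket (the handler in the actual patch may differ):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_event.h>

static void
keepalive_close_handler(ngx_event_t *ev)
{
    u_char             buf[1];
    ssize_t            n;
    ngx_connection_t  *c;

    c = ev->data;

    /* an idle cached connection should never become readable;
     * readability means FIN or unexpected data from the backend */
    n = recv(c->fd, buf, 1, MSG_PEEK);

    if (n == (ssize_t) -1 && ngx_socket_errno == NGX_EAGAIN) {
        return;    /* spurious event, the connection is still alive */
    }

    /* FIN (n == 0), an error, or stray bytes: drop it from the cache
     * (removing its entry from the keepalive queue is omitted here) */
    ngx_close_connection(c);
}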
This can be simulated with 2 nginx instances: one acting as a proxy with keep-alive connections, and the other as the origin, using the directive keepalive_timeout 0; (which always terminates connections right away).
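Something like the following pair of configs should reproduce it (hypothetical; the keepalive directive below is the one added by the patch, and its exact syntax may differ):

# instance A: the proxy under test, built with the keepalive patch
upstream backend {
    server 127.0.0.1:8081;
    keepalive 4;               # cache up to 4 idle connections
}

server {
    listen 8080;
    location / {
        proxy_pass http://backend;
    }
}

# instance B: the "origin", closing every connection immediately
server {
    listen 8081;
    keepalive_timeout 0;       # always answers "Connection: close"
    location / {
        return 200 "hello\n";
    }
}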
The patches attached take into account the response of the origin, and should fix this issue.
I'm planning to add a similar patch, thanks.
On Mon, Aug 8, 2011 at 2:36 AM, SplitIce <mat...@gmail.com> wrote:
Oh, and I haven't been able to reproduce the crash; I tried for a while and gave up. If it happens again I'll build with debugging and restart. However, so far it's been 36 hours without issues (under a significant amount of traffic).
On Mon, Aug 8, 2011 at 7:35 PM, SplitIce <mat...@gmail.com> wrote:
50ms per HTTP request (taken from Firebug and Chrome resource timing) is the time it takes the HTML to load, from request to arrival. 200ms is the time saved before the HTTP response starts transferring to me (allowing other resources to begin downloading before the HTML completes); previously the HTML only started transferring after the full response was downloaded to the proxy server (due to buffering).
We use HTTP to talk to the backends (between countries).
The node has a 30-80ms ping time between the backend and frontend (Russia->Germany, Sweden->NL, Ukraine->Germany/NL, etc.).
On Mon, Aug 8, 2011 at 7:22 PM, Maxim Dounin <mdou...@mdounin.ru> wrote:
On Mon, Aug 08, 2011 at 02:44:12PM +1000, SplitIce wrote:
I've been testing this on my servers for 2 days now, handling approximately 100mbit of constant traffic (3x20mbit, 1x40mbit). I haven't noticed any large bugs; I had an initial crash on one of the servers, however I haven't been able to replicate it. The servers are a mix of OpenVZ, Xen, and one VMware virtualised container, running Debian.
By "crash" you mean nginx segfault? If yes, it would be great to track it down (either to fix problem in keepalive patch or to prove it's unrelated problem).
Speed increases from this module are decent: approximately 50ms from the request time, and the HTTP download starts 200ms earlier, for a quicker load time on average.
Sounds cool, but I don't really understand what "50ms from the request time" and "download starts 200ms earlier" actually mean. Could you please elaborate?
And, BTW, do you use proxy or fastcgi to talk to backends?
All in all, seems good.
Thanks for all your hard work, Maxim.
On Thu, Aug 4, 2011 at 4:51 PM, Maxim Dounin <
On Wed, Aug 03, 2011 at 05:06:56PM -0700, Matthieu Tourne wrote:
I'm trying to use keepalive http connections for proxy_pass targets containing variables. Currently it only works for named upstream blocks. I'm wondering what would be the easiest way; maybe setting peer->get to the keepalive getter at the end of ngx_http_create_round_robin_peer(). If I can figure out how to set kp->conf to something sane, this could work.
You may try to pick one from the upstream module's main conf upstreams array (e.g. the first found upstream with init set to ngx_http_upstream_init_keepalive). Dirty, but should work.
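A sketch of that suggestion (assuming ngx_http_upstream_init_keepalive is visible outside the keepalive module, which may need an extra declaration; "init" here is taken to mean the srv conf's peer.init_upstream handler):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

static ngx_http_upstream_srv_conf_t *
find_keepalive_upstream(ngx_http_request_t *r)
{
    ngx_uint_t                       i;
    ngx_http_upstream_srv_conf_t   **uscfp;
    ngx_http_upstream_main_conf_t   *umcf;

    umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module);
    uscfp = umcf->upstreams.elts;

    for (i = 0; i < umcf->upstreams.nelts; i++) {
        /* first upstream{} block initialized by the keepalive module */
        if (uscfp[i]->peer.init_upstream
            == ngx_http_upstream_init_keepalive)
        {
            return uscfp[i];    /* its srv conf can seed kp->conf */
        }
    }

    return NULL;
}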
Thank you, Matthieu.
On Tue, Aug 2, 2011 at 10:21 PM, SplitIce <
I've been testing this on my localhost and one of my live servers (with a remote backend) for a good week now; I haven't had any issues as of yet.
Servers are Debian Lenny and Debian Squeeze (oldstable, stable)
Hoping it will make it into the development (1.1.x) branch.
On Wed, Aug 3, 2011 at 1:57 PM, liseen <
Could nginx keepalive work with HealthCheck? Does ...
On Wed, Aug 3, 2011 at 3:09 AM, David Yu <
On Wed, Aug 3, 2011 at 2:47 AM, Maxim Dounin <
On Wed, Aug 03, 2011 at 01:53:30AM +0800, David Yu wrote:
On Wed, Aug 3, 2011 at 1:50 AM, Maxim Dounin <
On Wed, Aug 03, 2011 at 01:42:13AM +0800, David Yu wrote:
On 1 Aug 2011 17h07 WEST, email@example.com:
Last week I posted a patch to nginx-devel@ which adds keepalive support to various backends (as already available with memcached), including fastcgi and http backends (this also means nginx is now able to talk HTTP/1.1 to backends, and in particular understands chunked responses).
Testing is appreciated.
You may find patch and description here:
No, to keep backend connections alive you need this patch. The patch provides the foundation in nginx core for this, including fastcgi and http backends.
With a custom nginx upstream binary protocol, would multiplexing now be possible?
After some googling ... ENOPARSE is a nerdy term. It is one of the standard error codes that can be set in the global variable "errno", and means Error: NO Parse. Since you didn't get it, I can thus conclude you are a normal, well adjusted human being ;-)
Actually, this definition isn't true: there is no such error code, it's rather an imitation. The fact that the author of the definition thinks it's a real error indicates that, unlike me, he is a normal, well adjusted human being. ;)
Now I get it. Well adjusted I am.
Now you may try to finally explain what you meant to say in your original message. Please keep in mind that you are talking to somebody far from being normal and well adjusted. ;)
p.s. Actually, I assume you are talking about fastcgi multiplexing.
Nope, not fastcgi multiplexing. Multiplexing over a custom/efficient nginx binary protocol, where requests sent to upstream include a unique id that is also sent on the response. This allows for asynchronous, out-of-band messaging. I believe this is what mongrel2 is trying to do now, but as a server it is nowhere near as robust/stable as nginx. If nginx implements this (considering nginx already has significant market share), it certainly would bring more developers/users (especially the ones needing async, out-of-band request handling).
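To make the idea concrete, a purely illustrative frame header for such a protocol might look like this (no such nginx protocol exists, as Maxim confirms below):

#include <stdint.h>

/* Hypothetical framing: every request carries an id that the backend
 * echoes in its response, so responses may arrive out of order and
 * many in-flight requests can share one upstream connection. */
typedef struct {
    uint32_t  request_id;   /* echoed back in the matching response */
    uint32_t  length;       /* number of payload bytes that follow */
    uint8_t   type;         /* request, response, cancel, ... */
} mux_frame_header_t;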
Short answer is: no, it's still not possible.
-- When the cat is away, the mouse is alone. - David Yu
-- Warez Scene <http://thewarezscene.org> Free Rapidshare