Thread:
- Alexandre Girao (Apr 24, 2008 9:11 pm)
- Maxim Dounin (Apr 25, 2008 1:02 am)
- Manlio Perillo (Apr 25, 2008 2:58 am)
- Igor Sysoev (Apr 25, 2008 3:47 am)
- Igor Sysoev (Apr 25, 2008 4:24 am)
- Alexandre Girao (Apr 25, 2008 5:03 am)
- Alexandre Girao (Apr 25, 2008 5:08 am)
- Alexandre Girao (Apr 25, 2008 5:18 am)
- Igor Sysoev (Apr 25, 2008 5:26 am)
- Alexandre Girao (Apr 25, 2008 5:41 am)
- Maxim Dounin (Apr 25, 2008 5:55 am)
- Alexandre Girao (Apr 25, 2008 6:05 am)
Subject: Re: fastcgi, simply wrong
From: Alexandre Girao (alex...@public.gmane.org)
Date: Apr 25, 2008 5:03:03 am
On Fri, Apr 25, 2008 at 5:02 AM, Maxim Dounin wrote:
> On Fri, Apr 25, 2008 at 01:11:26AM -0300, Alexandre Girao wrote:
> > i've just dedicated some hours to the nginx behavior/source code (version 0.6.29, but it also happens with 0.5.35) regarding the fastcgi protocol and discovered that the requestId is fixed, it's simply always equal to 1, this breaks
>
> Since nginx doesn't send more than one request within a single connection to the FastCGI application, there is nothing wrong with requestId always being 1.
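(For anyone following along: the requestId being discussed lives in the third and fourth bytes of every FastCGI record header. This is a minimal sketch in Python, not nginx's actual code, of what a FCGI_BEGIN_REQUEST record with the fixed id of 1 looks like on the wire.)

```python
import struct

FCGI_BEGIN_REQUEST = 1
FCGI_RESPONDER = 1

def fcgi_record(rec_type, request_id, content):
    # FastCGI record header (8 bytes): version, type, requestId,
    # contentLength, paddingLength, reserved -- big-endian fields
    header = struct.pack(">BBHHBB", 1, rec_type, request_id,
                         len(content), 0, 0)
    return header + content

# BEGIN_REQUEST body: role (2 bytes), flags (1 byte), 5 reserved bytes.
# flags = 0 means no FCGI_KEEP_CONN, i.e. one request per connection.
body = struct.pack(">HB5x", FCGI_RESPONDER, 0)
rec = fcgi_record(FCGI_BEGIN_REQUEST, 1, body)

print(rec[2:4])  # the requestId field, here fixed at 1
```

With one request per connection, the id never needs to vary; multiplexing several concurrent requests over one connection is exactly where distinct ids would matter.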
ok, but my application (which, btw, works perfectly with lighttpd/apache) tracks request state based on the request id, just as the specification says. if i change my application to track request state based on the connection instead.. geee.. that is ugly
> See http://www.fastcgi.com/devkit/doc/fcgi-spec.html#S3.3 for details. Quote:
>
> % The Web server re-uses FastCGI request IDs; the application
> % keeps track of the current state of each request ID on a given
> % transport connection.
the specification is not saying that the webserver can fix the request id; it simply says that after a request id is over (its full life cycle) it can be reused. indeed, this just reinforces my previous paragraph: i need to track request state based on the request id

the spec says "the application keeps track of the current state of each request ID", not "the application keeps track of the current state of each transport connection"
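(To make the reading of the spec being argued here concrete, a hypothetical sketch, not taken from any real application, of tracking state per (connection, request id) pair, with an id becoming reusable only once its request completes its life cycle.)

```python
# State keyed by (connection id, request id), per spec section 3.3.
requests = {}

def begin_request(conn_id, request_id):
    key = (conn_id, request_id)
    if key in requests:
        # The same id may not be active twice on one connection.
        raise ValueError("request ID already active on this connection")
    requests[key] = {"stdin": b"", "done": False}

def end_request(conn_id, request_id):
    # After the full life cycle ends, the id is free for reuse.
    del requests[(conn_id, request_id)]

begin_request(7, 1)
end_request(7, 1)
begin_request(7, 1)  # reuse after completion: allowed
```

A server that only ever sends id 1 and one request per connection never exercises the multi-id case, which is why both readings happen to work against nginx.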
> > the concurrency completely (as i've easily proved), and it also causes connections closed early by the web server/client to become out-of-sync with the request state in correctly implemented fastcgi applications. not trying to be unpleasant, but i think that saying that nginx supports fastcgi can do more harm than good to
>
> If you experience problems with this, it's likely due to problems with the FastCGI protocol implementation in your application. The way nginx talks to the application may not be the fastest one, but it's perfectly correct as far as I can see.
>
> > the project.. passing by just to say this, hope you guys find a good solution, im out.
fastcgi is a good thing, see (and think) for yourself
"rationality and objectivity is greatly discredited in these days" -- George Soros