On Wed, Nov 28, 2012 at 02:28:35PM -0800, Greg Ewing wrote:
> Trent Nelson wrote:
> > I'm arguing that with my approach, because the background
> > IO thread stuff is as optimal as it can be -- more IO events would
> > be available per event loop iteration, and the latency between the
> > event occurring versus when the event loop picks it up would be
> > reduced. The theory being that that will result in higher
> > throughput and lower latency in practice.
> But the data still has to wait around somewhere until the Python
> thread gets around to dealing with it. I don't see why it's
> better for it to sit around in the interlocked list than it is
> for the completion packets to just wait in the IOCP until the
> Python thread is ready.
Hopefully the response I just sent to Guido makes things a little
clearer? I gave a few more examples of where I believe my approach
is going to be much better than the single-thread approach; those
examples overlap with the concerns you raise here.
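
The batching idea being debated above can be sketched roughly as
follows. This is a hypothetical illustration, not Trent's actual
implementation: a background I/O thread (standing in for a thread
blocked on the IOCP) pushes completion events onto a thread-safe
queue (standing in for the interlocked list), and the event loop
drains every pending event in a single iteration rather than
dequeuing one completion packet at a time. The names
`background_io_thread` and `event_loop_iteration` are invented for
the sketch.

```python
import queue
import threading

# Stand-in for the interlocked list the background thread fills.
events = queue.Queue()

def background_io_thread(n_events):
    # Stand-in for a thread blocked on the IOCP, pushing each
    # completion onto the shared list as soon as it arrives.
    for i in range(n_events):
        events.put(("io_complete", i))

def event_loop_iteration():
    # Drain everything currently available in one pass, so a single
    # event-loop iteration can process many completions at once.
    batch = []
    while True:
        try:
            batch.append(events.get_nowait())
        except queue.Empty:
            break
    return batch

t = threading.Thread(target=background_io_thread, args=(5,))
t.start()
t.join()

processed = event_loop_iteration()
print(len(processed))  # all five completions picked up in one iteration
```

Greg's counterpoint, in these terms, is that the completions could
just as well sit in the IOCP itself until the Python thread calls in
to collect them, so the intermediate queue buys nothing by itself.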