I think you kicked off your watcher job with an HTTP request, and it keeps
the port open until it finishes. Only one thread can use the port at a
time. Use a different port for the task response traffic, or consider
running your watcher as a scheduled task.
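If it helps, the usual pattern is to have the HTTP endpoint spawn the
watcher onto the task server and return immediately, so the request does
not hold its thread. A minimal sketch (the module path /watcher.xqy is
just an example, not your actual code):

  (: endpoint module: queue the watcher on the task server,
     then return to the caller right away :)
  xdmp:spawn("/watcher.xqy"),
  "watcher started"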
Not super robust, and probably not for production use, but I did write an
alternative queue for MarkLogic. It might give you some ideas...
On 11/10/17, 1:06 AM, "gene...@developer.marklogic.com on behalf of
Eliot Kimber" wrote:
I have a system where I have a "client" ML server that submits jobs to a
set of remote ML servers, checking their task queues and keeping each
server's queue at a max of 100 queued items (the remote servers could go
away without notice, so the client needs to be able to restart tasks and
not have too many things queued up that would just have to be resubmitted).
The remote tasks then talk back to the client to report status and return
their final results.
My job-submission code uses recursive functions to iterate over the set of
tasks to be submitted, checking for free remote queue slots via the ML
REST API and submitting jobs as the queues empty. This code is spawned
into a separate task in the task server. It uses xdmp:sleep(1000) to
pause between checking the job queues.
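In outline, the loop looks something like this (a simplified sketch;
local:free-slots and local:submit-job are placeholders standing in for
the actual REST API calls):

  declare function local:submit-loop($tasks as element(task)*) {
    if (fn:empty($tasks)) then ()
    else if (local:free-slots() gt 0) then (
      (: a remote queue has room: submit the head task, recurse on rest :)
      local:submit-job(fn:head($tasks)),
      local:submit-loop(fn:tail($tasks))
    )
    else (
      (: all remote queues full: wait a second, then poll again :)
      xdmp:sleep(1000),
      local:submit-loop($tasks)
    )
  };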
This all works fine, in that my jobs are submitted correctly and the
remote queues fill up.
However, as long as the job-submission task in the task server is
running, the HTTP app that handles the REST calls from the remote servers
is blocked (which blocks the remote jobs, which are of course waiting for
responses from the client).
If I kill the task server task, then the remote responses are handled as
I would expect.
My question: Why would the task server task block the other app? There
must be something I'm doing or not doing, but I have no idea what it might
be.