I am interested, but I don't know what scenarios this module is suited for.
Can you tell me more about it?
The intended use case of the upstream module, in combination with some
of the other changes, is the ability to use a cluster of nginx servers
for large amounts of file caching with redundancy and fail-over.
So, for example, you may have a large amount of data, far too large to
reasonably fit on a single server, that you want to provide
high-throughput access to. This means spreading the data around
over multiple machines for three reasons:
(1) The data will not fit on a single server.
(2) A single server (even with many disks) may not be capable of
serving a sufficiently high request rate.
(3) You want files to be accessible when individual servers fail.
In addition you want to be able to dynamically add and remove hosts
from the system, to scale according to performance demands and/or disk
space demands and/or redundancy demands.
This is essentially what you can do with the DHT upstream module, the
patches to nginx itself, and using the caching module in nginx. The
main components are the reading of configuration via DNS (making it
practical to maintain configuration in an authoritative fashion), the
actual routing (i.e., for a given request /path/to/some/resource,
produce a set of hosts which, according to the DHT hash ring, should
have a copy of the file), and the failover logic that is capable of
marking hosts as down.
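To make the routing component concrete, here is a minimal stand-alone sketch of a consistent-hash ring in Python — illustrative only, not the module's actual code; the host names, the replica count, and the class/method names are all made up for the example:

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: each host is placed on the ring at
    several virtual points; a resource maps to the first n distinct
    hosts found clockwise from the resource's own hash."""

    def __init__(self, hosts, replicas=64):
        points = []
        for host in hosts:
            for i in range(replicas):
                points.append((self._hash(f"{host}#{i}"), host))
        points.sort()
        self._points = points
        self._hashes = [h for h, _ in points]

    @staticmethod
    def _hash(key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def route(self, resource, n=2):
        # Walk clockwise from the resource's position on the ring,
        # collecting distinct hosts; these are the hosts that should
        # hold a copy of the resource.
        start = bisect.bisect(self._hashes, self._hash(resource))
        hosts = []
        for i in range(len(self._points)):
            host = self._points[(start + i) % len(self._points)][1]
            if host not in hosts:
                hosts.append(host)
            if len(hosts) == n:
                break
        return hosts

ring = HashRing(["cache1:80", "cache2:80", "cache3:80"])
candidates = ring.route("/path/to/some/resource")
```

The virtual points are what make dynamic membership cheap: adding or removing a host only re-routes the resources whose hashes fall next to that host's points, which is why a scheme like this suits the scale-up/scale-down use case described above.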
The routing part allows re-routing a request to other members of the
same hash ring, but also supports forwarding the request to a parent
ring if the resource must be obtained further upstream (i.e., has not
yet been cached at this level).
The use case assumes that at some point up the tree of hash rings,
there is some kind of authoritative storage (i.e., does not just cache
something upstream, but serves concrete files).
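The failover and parent-ring forwarding described above can be sketched as follows — again a hedged illustration under assumed names (`resolve`, `choose_hosts`, `fetch` are placeholders, not the module's API), not the actual implementation:

```python
def resolve(resource, rings, down, fetch, choose_hosts):
    """Try each ring from child to parent; `rings` is a list of host
    lists ordered child -> parent, with authoritative storage last.
    `down` is the set of hosts currently marked as failed.
    `choose_hosts(ring, resource)` orders a ring's hosts for the
    resource (e.g. by hash-ring position); `fetch(host, resource)`
    returns the body on a hit and None on a miss."""
    for ring in rings:
        for host in choose_hosts(ring, resource):
            if host in down:
                continue          # failover: skip hosts marked down
            body = fetch(host, resource)
            if body is not None:  # cache hit, or authoritative serve
                return host, body
        # miss on every live member of this ring: forward the
        # request up to the parent ring
    raise LookupError(f"{resource} unavailable at every level")
```

The root ring never forwards anywhere, which matches the assumption that the top of the tree is authoritative storage serving concrete files rather than just another cache.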