8 messages in org.apache.hc.httpclient-users - RE: after 60000 requests more than 700 sockets in CLOSE_WAIT

From               Sent On
Wittmann Armin     Sep 4, 2007 4:22 am
Pete Keyes         Sep 4, 2007 7:20 am
Wittmann Armin     Sep 4, 2007 8:16 am
Pete Keyes         Sep 4, 2007 8:26 am
Raymond Kroeker    Sep 4, 2007 9:08 am
Oleg Kalnichevski  Sep 4, 2007 9:27 am
Wittmann Armin     Sep 5, 2007 11:40 pm
Oleg Kalnichevski  Sep 6, 2007 7:26 am
Subject: RE: after 60000 requests more than 700 sockets in CLOSE_WAIT
From: Wittmann Armin (awit@ethz.ch)
Date: Sep 4, 2007 8:16:16 am
List: org.apache.hc.httpclient-users

Hi Pete

Under Linux these settings are found in a different place (in the proc filesystem). Look at the output below; I guess most of the settings are in seconds. I liked this one for newbies: http://www.utdallas.edu/~cantrell/ee6345/pocketguide.pdf and http://ipsysctl-tutorial.frozentux.net/chunkyhtml/tcpvariables.html

Any suggestions? Five hours after the test stopped (Tomcat is still running), none of the sockets have been freed.

Thanks, Armin

linux:>$ ls | grep tcp_
tcp_abc  tcp_abort_on_overflow  tcp_adv_win_scale  tcp_app_win
tcp_congestion_control  tcp_dsack  tcp_ecn  tcp_fack
tcp_fin_timeout  tcp_frto  tcp_keepalive_intvl  tcp_keepalive_probes
tcp_keepalive_time  tcp_low_latency  tcp_max_orphans  tcp_max_syn_backlog
tcp_max_tw_buckets  tcp_mem  tcp_moderate_rcvbuf  tcp_no_metrics_save
tcp_orphan_retries  tcp_reordering  tcp_retrans_collapse  tcp_retries1
tcp_retries2  tcp_rfc1337  tcp_rmem  tcp_sack
tcp_stdurg  tcp_synack_retries  tcp_syncookies  tcp_syn_retries
tcp_timestamps  tcp_tso_win_divisor  tcp_tw_recycle  tcp_tw_reuse
tcp_window_scaling  tcp_wmem
linux:>$ cat tcp_fin_timeout
60
linux:>$ cat tcp_keepalive_time
7200
linux:>$ cat tcp_keepalive_probes
9
linux:>$ cat tcp_keepalive_intvl
75
linux:>$ cat tcp_fin_timeout
60

-----Original Message-----
From: Pete Keyes [mailto:PKe@starbucks.com]
Sent: Tuesday, September 04, 2007 4:21 PM
To: HttpClient User Discussion
Subject: RE: after 60000 requests more than 700 sockets in CLOSE_WAIT

I believe your problem has to do with a UNIX network (TCP) configuration setting. We've seen this often in volume testing. The socket is left in this state for re-use...

Not sure how the configuration is handled on Linux, but on Solaris you can see all the configuration options with the following:

ndd -get /dev/tcp ?
tcp_time_wait_interval (read and write)
tcp_conn_req_max_q (read and write)
tcp_conn_req_max_q0 (read and write)
tcp_conn_req_min (read and write)
tcp_conn_grace_period (read and write)
tcp_cwnd_max (read and write)
tcp_debug (read and write)
tcp_smallest_nonpriv_port (read and write)
tcp_ip_abort_cinterval (read and write)
tcp_ip_abort_linterval (read and write)
tcp_ip_abort_interval (read and write)
tcp_ip_notify_cinterval (read and write)
tcp_ip_notify_interval (read and write)
tcp_ipv4_ttl (read and write)
tcp_keepalive_interval (read and write)
tcp_maxpsz_multiplier (read and write)
tcp_mss_def_ipv4 (read and write)
tcp_mss_max_ipv4 (read and write)
tcp_mss_min (read and write)
tcp_naglim_def (read and write)
tcp_rexmit_interval_initial (read and write)
tcp_rexmit_interval_max (read and write)
tcp_rexmit_interval_min (read and write)
tcp_deferred_ack_interval (read and write)
tcp_snd_lowat_fraction (read and write)
tcp_sth_rcv_hiwat (read and write)
tcp_sth_rcv_lowat (read and write)
tcp_dupack_fast_retransmit (read and write)
tcp_ignore_path_mtu (read and write)
tcp_rcv_push_wait (read and write)
tcp_smallest_anon_port (read and write)
tcp_largest_anon_port (read and write)
tcp_xmit_hiwat (read and write)
tcp_xmit_lowat (read and write)
tcp_recv_hiwat (read and write)
tcp_recv_hiwat_minmss (read and write)
tcp_fin_wait_2_flush_interval (read and write)
tcp_co_min (read and write)
tcp_max_buf (read and write)
tcp_strong_iss (read and write)
tcp_rtt_updates (read and write)
tcp_wscale_always (read and write)
tcp_tstamp_always (read and write)
tcp_tstamp_if_wscale (read and write)
tcp_rexmit_interval_extra (read and write)
tcp_deferred_acks_max (read and write)
tcp_slow_start_after_idle (read and write)
tcp_slow_start_initial (read and write)
tcp_co_timer_interval (read and write)
tcp_sack_permitted (read and write)
tcp_trace (read and write)
tcp_compression_enabled (read and write)
tcp_ipv6_hoplimit (read and write)
tcp_mss_def_ipv6 (read and write)
tcp_mss_max_ipv6 (read and write)
tcp_rev_src_routes (read and write)
tcp_ndd_get_info_interval (read and write)
tcp_rst_sent_rate_enabled (read and write)
tcp_rst_sent_rate (read and write)
tcp_use_smss_as_mss_opt (read and write)
tcp_wroff_xtra (read and write)
tcp_extra_priv_ports (read only)
tcp_extra_priv_ports_add (write only)
tcp_extra_priv_ports_del (write only)
tcp_status (read only)
tcp_bind_hash (read only)
tcp_listen_hash (read only)
tcp_conn_hash (read only)
tcp_acceptor_hash (read only)
tcp_host_param (read and write)
tcp_time_wait_stats (read only)
tcp_host_param_ipv6 (read and write)
tcp_1948_phrase (write only)
tcp_reserved_port_list (read only)
tcp_close_wait_interval (obsoleted - use tcp_time_wait_interval) (no read or write)

...Pete
Starbucks Coffee Co. - MS IT-5
2401 Utah Ave S
Seattle, WA 98134
(w) 206-318-5933

-----Original Message-----
From: Wittmann Armin [mailto:awit@ethz.ch]
Sent: Tuesday, September 04, 2007 4:23 AM
To: HttpClient User Discussion
Subject: after 60000 requests more than 700 sockets in CLOSE_WAIT

Hi httpclient-Team

Over the last 4 days I ran a long-term test of our new software release.

This software is integrated into Apache Tomcat 5.5 (as a servlet that receives requests, transforms them, and sends out other GET requests through HttpClient) and should run very reliably and without any resource leaks.

After finishing this test cycle I noticed that over 700 sockets remained in CLOSE_WAIT state (Linux -> netstat -a -p). From the identified PID and the destination IP address it is obvious that these sockets were created/used by the part of the program that uses http-client-3.0.1.

I am not a real network expert, so I don't know whether I need to worry about this. Since the software/Tomcat is intended to run for months (7x24) without necessarily being restarted, I am not sure whether I will run out of network resources.

Can somebody help with this?

By the way: all 60000 HTTP GET requests worked well and there were no other problems at all.

Regards

Armin

--------------------------------------------------------------------
My code (simplified): this code is executed for every single request

HttpConnectionManager connectionManager = new SimpleHttpConnectionManager();
HttpClient client = new HttpClient(clientParams, connectionManager);
client.setHostConfiguration(hostConfiguration);

HttpMethod method = new GetMethod();
method.setQueryString(pairs);
method.setPath(pUrl.getPath());
method.setParams(methodParams);

boolean failed = false;
try {
    client.executeMethod(method);
} catch (Exception e) {
    failed = true;
    throw new Exception(...);
} finally {
    if (failed) method.abort();
    method.releaseConnection();
    client.setHttpConnectionManager(null);
    client = null;
}

try {
    responseString = method.getResponseBodyAsString();
} catch (Exception e) {
    throw new Exception(...);
} finally {
    method.releaseConnection();
    method = null;
}
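For comparison, here is a minimal sketch (assuming commons-httpclient 3.0.1 on the classpath) of the pattern usually recommended for HttpClient 3.x: read the response body while the connection is still held, release it afterwards, and reuse one HttpClient backed by a MultiThreadedHttpConnectionManager across requests instead of creating a new client and SimpleHttpConnectionManager per request. The class and method names (FetchExample, fetch) are made up for illustration, and this is not necessarily a complete fix for the reported leak.

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.GetMethod;

public class FetchExample {

    // One client (and one pooling connection manager) shared by all requests.
    private static final HttpClient CLIENT =
            new HttpClient(new MultiThreadedHttpConnectionManager());

    static String fetch(String url) throws Exception {
        GetMethod method = new GetMethod(url);
        try {
            CLIENT.executeMethod(method);
            // Read the body before the connection is released ...
            return method.getResponseBodyAsString();
        } finally {
            // ... and always release the connection back to the pool.
            method.releaseConnection();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetch("http://example.org/"));
    }
}

With a shared MultiThreadedHttpConnectionManager, pooled connections can be closed in one place (for example via the manager's shutdown() method when the servlet is destroyed) rather than being left behind by per-request client instances.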