Disabling Nagle's algorithm / Setting tcpNoDelay programmatically for WebSockets

+1 vote
563 views

As per Tomcat's performance tuning doc, 'tcpNoDelay' can be enabled/disabled at the connector level.
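
For reference, that connector-level switch is an attribute on the Connector element in conf/server.xml; a minimal sketch (the port, protocol and timeout values here are the stock defaults, not something required for this setting):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           tcpNoDelay="true" />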

Is there a programmatic way to set 'tcpNoDelay' to true for WebSocket connections? I am using Tomcat's proprietary WebSocket APIs in my application.

I have gone through the API documentation of Tomcat's proprietary WebSocket implementation, but I didn't see any API that allows the application to override the 'tcpNoDelay' value.

As per the doc for "writeTextMessage(CharBuffer msgCb)" of 'WsOutbound', for each write Tomcat flushes the socket buffer and sends a new frame with the buffer passed. Does that mean flushing the socket buffer gives the same effect as disabling Nagle's algorithm?
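
For context, a write in my application looks roughly like this inside the MessageInbound endpoint (the payload is just an example):

// each such call is documented to flush and send a new frame
getWsOutbound().writeTextMessage(CharBuffer.wrap("small payload"));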

posted Jun 25, 2015 by anonymous


2 Answers

0 votes

Is there a programmatic way to set 'tcpNoDelay' to true for WebSocket connections? I am using Tomcat's proprietary WebSocket APIs in my application.

No

...Does that mean flushing the socket buffer gives the same effect as disabling Nagle's algorithm?

I don't know. The Javadoc only guarantees that the bytes will be passed to the OS when flush() is called. What happens after that is up to the OS.

You'll need to do some testing on your system (possibly with Wireshark) to see exactly what is going on.
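
For example, a quick capture on the connector port (8080 here is only an assumption, adjust to your setup) will show whether small writes leave immediately or get coalesced into larger segments:

# delta timestamps between segments make Nagle-style batching easy to spot
tcpdump -i any -nn -ttt 'tcp port 8080'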

answer Jun 25, 2015 by Ahmed Patel
0 votes

Hi,
I am not sure how to do this for WebSockets specifically, but generally the way to do it is to call setsockopt with the TCP_NODELAY option.
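
In Java the same option is exposed as java.net.Socket#setTcpNoDelay. A minimal sketch for a plain client socket (host and port are illustrative; Tomcat owns the WebSocket's underlying socket, so this does not directly apply to the proprietary WebSocket API):

import java.io.IOException;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class NoDelayExample {
    public static void main(String[] args) throws IOException {
        // host/port are placeholders for illustration
        try (Socket socket = new Socket("localhost", 8080)) {
            // Java-level equivalent of setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, 1):
            // disables Nagle's algorithm so small writes are not delayed
            socket.setTcpNoDelay(true);
            socket.getOutputStream().write("ping".getBytes(StandardCharsets.UTF_8));
        }
    }
}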

answer Jun 25, 2015 by Rajendra Stalekar
Similar Questions
0 votes

We have a websocket application which keeps writing data to the clients.

We found that when a tab (not the whole browser) of Firefox (ver. 22) is closed, the WebSocket connection is not closed. Reproducibility is very low, though, and the sendQ (netstat -an) keeps growing.

So what we did is: we keep sending a heartbeat from the client. If this heartbeat times out, we try to close the connection as follows:

// 1-byte payload to pass along with the close frame
ByteBuffer bbuff = ByteBuffer.allocate(1);
bbuff.put((byte) 0);
// status 0: no close status code is included in the close frame
messageInbound.getWsOutbound().close(0, bbuff);

Is this the correct approach to close the connection from the server? The connection is not closed at the lower level (netstat -an); however, writing data to it has stopped and the sendQ stops growing.

# netstat -an | grep :8080
tcp 0 402376 172.22.59.176:8080 198.162.18.207:64805 ESTABLISHED
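
If the intent is just to send a well-formed close frame, a variant of the snippet above with an explicit status code (1000 = normal closure per RFC 6455; the reason text is illustrative) would look like this. Whether the TCP connection actually goes away still depends on the peer completing the close handshake:

// same close() call as above, but with a status code and a short reason payload
ByteBuffer reason = ByteBuffer.wrap("heartbeat timeout".getBytes(StandardCharsets.UTF_8));
messageInbound.getWsOutbound().close(1000, reason);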
+1 vote

How can I calculate the maximum number of concurrent WebSockets per machine? Is there a relationship (maybe a factor of 1 or similar) between the maximum number of HTTP requests and the maximum number of WebSockets for the same Tomcat server?

In particular, does developing an application using WebSockets raise the same kind of performance problems as HTTP request handling?

+2 votes

We can set the bandwidth for UDP, while TCP uses its maximum bandwidth... why is it like this?

+3 votes

When the following iptables rule is used:
iptables -A INPUT -p tcp --dport 1234 -j NFQUEUE --queue-num 0

If a TCP packet is re-transmitted, will the userspace program receive the re-transmitted packet, or will it just receive one packet, with the re-transmitted one dropped in kernel space?

...