
Troubleshooting a Slow ICA-Proxy Session

by Marius Sandbu, CTP, Norway CUGC

This article is meant as a way to troubleshoot network issues on a NetScaler appliance. Of course, approaches to troubleshooting may differ, so if you have any comments on what you typically do in this type of scenario, please post a comment below!

So the other day I was tasked with troubleshooting a NetScaler issue where the customer's ICA sessions were slow and unreliable. A big problem was that file transfers were not working at all, with bandwidth usage fluctuating between 0 KBps and 200 KBps. When doing an initial assessment, I noticed the following.

First, a couple of things worth checking if ICA sessions are going slow: the appliance's own resources, its SSL transaction load, and its packet CPU usage.

In this case the VPX had plenty of resources: the number of SSL transactions was low (this could also have been a factor in the customer's unreliable connections), and the packet CPU usage was low (which I could see by running stat cpu in the CLI).
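For reference, these are the CLI commands I mean; a minimal sketch, and the exact counters shown vary a bit between builds:

    stat cpu       # packet engine CPU usage
    stat ssl       # SSL transactions per second and totals
    stat system    # overall CPU, memory and management CPU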

Then, after we confirmed that there was nothing wrong with the VPX itself, we took a closer look at the virtual infrastructure. I checked whether the VMware host running the NetScaler was saturated, and whether there were any performance issues on the virtual network that the NetScaler was placed on.
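If you want a quick look at the VMware side, esxtop on the host is usually enough. A rough sketch, assuming direct shell access to the ESXi host; the column names are from the builds I have used:

    # On the ESXi host running the NetScaler VPX
    esxtop              # press 'c' for the CPU view, 'n' for the network view
    # In the network view, keep an eye on %DRPTX / %DRPRX (dropped packets)
    # for the port group the VPX interfaces are connected to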

Since the issue was persistent and affected both client drive transfers and plain ICA proxy sessions, we guessed that it was the external traffic, not the internal traffic, that was causing the problem. We also checked that there were no bandwidth policies set on the XenApp farm which might affect the file transfer.

Now, since the bandwidth performance of the NetScaler was going up and down, I was thinking that this might be congestion somewhere. So the simplest approach was to take a trace file from the NetScaler to see what kind of traffic was going back and forth and whether there were any issues.

After using Wireshark for a while you get used to searching for the most common parameters. If you have congestion somewhere you might see a lot of RSTs or retransmits because of a full buffer. If you think about it, a file transfer using client drive mapping will try to use as much bandwidth as possible. Another thing that was done before I ran my test was to change the TCP profile to nstcp_xa_xd_tcp_profile, which enables features like Nagle (to reduce the number of small TCP segments and ACK messages) and SACK (to recover more efficiently when packets are dropped).
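For reference, this is roughly how you would check and bind that profile from the CLI; the Gateway virtual server name below is just a placeholder:

    show ns tcpProfile nstcp_xa_xd_tcp_profile
    # Bind it to the Gateway virtual server that handles the ICA proxy traffic
    set vpn vserver vpn-gateway-vserver -tcpProfileName nstcp_xa_xd_tcp_profile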

NOTE: A good tip when starting trace files from the NetScaler for SSL connections is to enable “Decrypt SSL packets”.
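From the CLI the same thing looks roughly like this (a sketch; option names can differ slightly between builds):

    start nstrace -size 0 -mode SSLPLAIN   # capture full packets, with SSL decrypted
    # ... reproduce the slow file transfer ...
    stop nstrace
    # The capture files end up under /var/nstrace/ and open directly in Wireshark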


From the trace file we noticed a couple of things.

1: A lot of retransmissions from the XenApp server to the NetScaler SNIP

2: TCP ZeroWindow

These are two symptoms that are often connected.
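If you want to find these quickly in a big trace, a few standard Wireshark display filters are enough:

    tcp.analysis.retransmission   # all retransmitted segments
    tcp.analysis.zero_window      # segments advertising a zero receive window
    tcp.flags.reset == 1          # connection resets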


This meant that the NetScaler was not able to receive more data at that point, and the TCP transmission was halted until it could process the data in its receive buffer. So I immediately assumed that the TCP buffer size had been adjusted or otherwise altered. This was not the case, since it was still using the default size.
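You can check this on the TCP profile the virtual server is actually using; nstcp_default_profile below is just an example:

    show ns tcpProfile nstcp_default_profile
    # Look at the bufferSize value (8190 bytes is the default unless someone has changed it)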

So why was this happening?

A quick Google search indicated that this was an issue in the NetScaler build, which has since been resolved in a later build –> http://support.citrix.com/article/CTX205656
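So before digging too deep, it is worth checking which build you are actually running:

    show ns version   # compare the running build with the fix list in the article above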

So, some quick tips when troubleshooting a slow NetScaler VPX: check the resources on the appliance itself (packet CPU, SSL transactions), look at the underlying virtual infrastructure, rule out Citrix policies such as bandwidth limits, take a trace and look for retransmissions and ZeroWindow, and check whether your build has any known issues.

NOTE: You can read more about TCP Window Scaling in this article –> http://support.citrix.com/article/CTX113656
