SharePoint Performance Tuning and Profiling


There are plenty of articles on this topic, so this article is not going to be yet another compilation of previously published approaches and methods, even though at the end you will find a list of some truly useful links. My contribution to the story is to make the community aware of a powerful weapon for hunting performance bottlenecks. This weapon is amazingly effective and easy to use, yet awareness of it seems lower than I would expect.

The tool in question is the magical VSPerfASPNETCmd profiler command-line utility, which is part of the Visual Studio 2010 Premium and Ultimate editions. Strangely, if you open the official Microsoft Command-Line Profiling Tools Reference, you will not find VSPerfASPNETCmd mentioned there; you have to go instead to Command-Line Profiling of ASP.NET Web Applications.

It should be noted that VSPerfASPNETCmd is not mandatory for profiling; it is rather a convenience tool. This means you could run the very same profiling session and analysis using the classic set of profiling tools, VsPerfMon and/or VsPerfCmd, instead.

First of all, to tame overheated expectations, let’s summarize the prerequisites under which this tool is appropriate for your scenario. If you are hunting performance bottlenecks caused by custom code located in your own .NET assembly, loaded either from the GAC or from the bin directory, VSPerfASPNETCmd is your best friend. It is important to underline that profiling is only an option if previous analysis has already identified the node and the requests running your code as the suspicious part of the bottleneck-hunting story. To be more precise, let’s briefly summarize the methodology that would lead you to the idea of using the profiler:

  1. You have set up performance counters for long-running observation (a baseline, taking samples once a minute), and later altered these counters for incident-based short-term observations (taking samples once a second, every other second, or once every 5 seconds)
  2. You have set up IIS logging, making sure the “time-taken” column is reported
  3. You have gathered ULS logs
  4. You have analyzed system & application event logs
  5. You have utilized xperf (Windows Performance Toolkit) from the latest Windows 7.1 SDK
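As an aside, the counter collection in step 1 can be scripted so that the baseline and incident sets are reproducible across servers. A minimal sketch using the built-in Windows logman tool; the counter selection, collector names, and output paths are my assumptions, chosen as typical SharePoint suspects:

```shell
rem Hypothetical baseline collector: one sample per minute (-si 00:01:00).
logman create counter SPBaseline -si 00:01:00 -o C:\PerfLogs\SPBaseline ^
  -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" ^
     "\ASP.NET Applications(__Total__)\Requests/Sec"

rem Hypothetical incident collector: one sample per second for short-term runs.
logman create counter SPIncident -si 00:00:01 -o C:\PerfLogs\SPIncident ^
  -c "\Processor(_Total)\% Processor Time" "\Web Service(_Total)\Current Connections"

rem Start the long-running baseline; switch to SPIncident during an incident.
logman start SPBaseline
```

Scripting this also makes it trivial to roll the same collector set out to every server in the farm.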

You can read about the exact details of steps 1–4 in the article Troubleshooting MOSS/WSS Performance Issues. Regarding step 5, the Quick Start Guide is the best place to start.

Let’s assume once more that, based on all these monitoring efforts, you have finally identified the requests and the node (e.g. a particular Web Front End server) as the physical source of the bottleneck. Even better, you suspect a couple of custom libraries as candidates for further analysis. If so, there are three scenarios to consider.

Based on previous analysis:

  1. you can reproduce the issue in your test environment
  2. you can reproduce the issue in your production environment issuing appropriate requests
  3. you cannot reproduce the issue as it happens randomly

In case 1 your destiny is to use VSPerfASPNETCmd. In case 2 you could utilize profiling in a limited manner. In case 3 this utility cannot help you, as VSPerfASPNETCmd is not appropriate for long-term observation: the amount of collected data should be limited to a time span of a couple of minutes. Therefore, to use VSPerfASPNETCmd optimally, you have to prepare an exact step-by-step load/request scenario that will reproduce the bottleneck with high probability.
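For cases 1 and 2 it pays off to script that step-by-step scenario rather than clicking through it by hand, so every profiling run puts the same pressure on the server. A minimal dry-run sketch; the target URL, page list, and repeat count are placeholder assumptions, and the echo is a dry run you would swap for a real client call such as curl to actually generate load:

```shell
#!/bin/sh
# Dry-run sketch of a repeatable load scenario for a short profiling window.
# TARGET and PAGES are placeholders -- use the slow requests your IIS-log
# analysis ("time-taken" column) identified.
TARGET="http://trainingw2k8r2a:804"
PAGES="Default.aspx UsersRoot.aspx"
REPEAT=25

gen_requests() {
  for i in $(seq 1 "$REPEAT"); do
    for page in $PAGES; do
      # Replace 'echo' with e.g.: curl -s -o /dev/null "$TARGET/$page"
      echo "GET $TARGET/$page"
    done
  done
}

gen_requests
```

With the scenario in a script, you can start the profiler, run the script, and stop the profiler, keeping every session comparable.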

In a production environment special care has to be taken, as the profiler will restart IIS both at the beginning and at the end of the session. This is because the profiler injects additional libraries into the w3wp.exe process. This scenario is therefore really only an option in a scaled-out farm environment, where incoming requests can be routed through the remaining servers. Furthermore, after the profiler has started and the IIS process is running, the tester must make sure the issued test requests are routed through the server under test.
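One low-tech way to guarantee the test requests hit the server under test, even behind a load balancer, is a client-side hosts entry that pins the farm’s host name to that one WFE. A sketch; the IP address is hypothetical and the host name is taken from the example URL used later:

```shell
rem Run in an elevated prompt on the TEST CLIENT, not on the farm.
rem 10.0.0.21 is a hypothetical IP of the WFE being profiled.
echo 10.0.0.21  trainingw2k8r2a >> %SystemRoot%\System32\drivers\etc\hosts

rem Flush the client's DNS cache so the pinning takes effect immediately.
ipconfig /flushdns
```

Remember to remove the hosts entry again after the profiling session.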


You have to install the Visual Studio 2010 stand-alone profiling tools on the targeted server, which will also deploy the .NET Framework 4.0. This could scare some people; however, SharePoint 2010 will not be affected at all, and side-by-side installation of .NET 3.5 SP1 and .NET 4.0 is supported by Microsoft. The installation guidance can be found here.

Additionally, it is highly recommended that you set the _NT_SYMBOL_PATH environment variable. My preferred value downloads the symbols on demand and caches them locally at c:\windows\symbols; feel free to modify it according to your environment. More details can be found in the Debugging Tools documentation.
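If you want a concrete starting point, a typical value points at the public Microsoft symbol server with a local download cache. This is the common convention for such a setting, not necessarily the author’s exact value:

```shell
rem Download symbols on demand from the Microsoft symbol server and
rem cache them locally under c:\windows\symbols.
setx _NT_SYMBOL_PATH "srv*c:\windows\symbols*http://msdl.microsoft.com/download/symbols"
```

Open a new command-line window afterwards, as setx does not affect the current session.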



Profiling means either sampling or instrumentation (or concurrency profiling, which is not the subject of today’s article). There is often confusion about which one to use and when; to decide, it is important to understand the difference. Sampling is the “ready to go immediately” way: it needs no preparation beyond a scenario that reproduces the appropriate load using any suitable test client. By contrast, for instrumentation the observed libraries have to be instrumented, re-signed, and redeployed before the session starts. Instrumentation is therefore the way to go if you want closer, precisely targeted, fine-tuned observations.

Sampling can be started from any elevated command-line window. If you suspect database-related performance issues, make sure you use the tier interaction option “/tip”. This will reveal the SQL statements issued during profiling.

To walk through an example, I profiled my previously published FBA Utility. First, let’s take a look at sampling. Here is the command line to start the session:

VSPerfASPNETCmd /tip http://trainingw2k8r2a:804/ /o:FBAsample1.vsp

The above command line targets my web application listening on port 804. The output will be written to the FBAsample1.vsp file, and tier interaction data will be collected as well. Below you see the command-line window as the profiler starts and shuts down. Note the red marker: while the tests are running, the command-line window is blocked (Picture 1).


Picture 1

When you stop the profiler, it will download the symbols needed and pack them into the produced output file. The best analyzer afterwards is Visual Studio 2010 itself: just open the produced .vsp file and you can immediately start your own analysis.
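If you prefer to stay on the command line, or need to hand the numbers to someone without Visual Studio, the collected .vsp file can also be dumped to CSV files with the VSPerfReport utility that ships with the same profiling tools. A sketch, where the output folder is my assumption:

```shell
rem Export all summary reports (modules, functions, callers/callees, ...)
rem from the sampling session into CSV files under C:\ProfilerReports.
VSPerfReport FBAsample1.vsp /summary:all /output:C:\ProfilerReports
```

The CSV files can then be analyzed in Excel or archived alongside the raw .vsp file.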


Picture 2

The picture above (Picture 2) shows the summary view, which in this particular case divides into two parts. The first part represents the IIS process being started (warm-up), and the second part, starting at about second 155, is the phase while the test requests run against the up-and-running web application. In your scenario the second part will probably be longer. Using markers you can filter out the first part, but that is an advanced scenario I will not cover here today.

Next, check the libraries being sampled using the Modules view. This lets you verify whether the suspected libraries were captured. In my case you can see FBAUserRoleManager.dll listed with 161 samples collected; in your scenario this number could be much higher (Picture 3).


Picture 3

Next you can check the Call Tree view and highlight the hot spots. In my particular case, the call path down to System.Web.Security.GetAllUsers() is clearly visible, with the costs listed exactly, measured in samples. The more samples, the more resources are consumed, and the worse for the server (Picture 4).


Picture 4

Another interesting view is Tier Interactions. As you can see, the UsersRoot.aspx page accesses my aspnet database in my sqlexpress instance, with 6 captured queries and the elapsed time for these calls. You can also see this very same page accessing the WSS_Content_804 content database, which lives in a different (default) server instance (Picture 5).


Picture 5


As you have seen, sampling provides information mostly as numbers of captured samples, which is often enough to draw conclusions. Sometimes, however, you need more details, such as the exact number of calls and the exact time spent within them. Sampling cannot answer these questions, except in the Tier Interactions view. So how do you do instrumentation?

First of all you have to instrument the libraries you want to observe. In an administrative command-line window, switch to the location where your (not yet instrumented) libraries live; in my case this is the debug folder beneath my Visual Studio solution folder. The next command lists all entries in FBAUserRoleManager.dll that are available for instrumentation (Picture 6).

vsinstr /dumpfuncs FBAUserRoleManager.dll


Picture 6

Next, make sure you are using the 64-bit version of the VSInstr tool. If you instrument your DLLs with the 32-bit version, your SharePoint process will crash (you have been warned!). Time to instrument the library using this command:

[%path to x64 folder%]vsinstr FBAUserRoleManager.dll

The library is now instrumented, meaning the instrumenter has injected additional CLR code into it. This, however, breaks the strong-name signature, so the library can no longer be installed into the GAC. It therefore has to be re-signed using this command line:

sn -R FBAUserRoleManager.dll key.snk

The last parameter is the very same key file from the Visual Studio solution. Time to remove the originally deployed library from the GAC and redeploy the instrumented and re-signed one, using these command lines:

gacutil /u FBAUserRoleManager

gacutil /i FBAUserRoleManager.dll

Make sure the success message is displayed. Now we are prepared to run instrumentation profiling, which is started just like sampling, with one additional parameter, “/trace”:

VSPerfASPNETCmd /trace /tip http://trainingw2k8r2a:804/ /o:FBAtrace1.vsp

The above command starts a new profiling session; your job now is to issue the same requests, putting the server under pressure. The results are gathered in the FBAtrace1.vsp file. Let’s assume I have run the test and the result file has been packaged with symbols and written to disk. The most interesting part is perhaps the Call Tree view. Here is what I could observe (Picture 7):


Picture 7

Picture 7 above is the instrumented view of the tests, revealing exactly measured data. As you can see, the UsersRoot page’s Page_Load() event handler was called exactly six times, and these calls took exactly 1,127 milliseconds to execute. Going down to the bottom of the call path, we arrive at Membership.GetAllUsers(), which was called only once and took 379 milliseconds to execute. The Tier Interactions view reveals (Picture 8) that UsersRoot.aspx issued 4 queries taking 134.52 milliseconds to execute. So I can conclude that GetAllUsers() spent most of its time not in database calls but rather more than 200 milliseconds somewhere else in the layers.


Picture 8

Of course, there is even more information saved in these profiling outputs to check and analyze. Try it and enjoy.

Troubleshooting MOSS/WSS Performance Issues

Dealing with Memory Pressure problems in MOSS/WSS

SharePoint 2010 Logging Improvements – Part 1

SharePoint 2010 Logging Improvements – Part 2 (Introducing Developer Dashboard)

SharePoint Server 2010 capacity management: Software boundaries and limits

SharePoint Dispose Checker Tool

Patrick Heyde:

On CAML performance:

HP SharePoint 200/2010 Sizer

SharePoint Performance Tuning – Part 1, 2 (Caching & NTLM/Kerberos)

Overview of Kerberos authentication for Microsoft SharePoint 2010 Products

How to avoid the Top 5 SharePoint Performance Mistakes


4 Responses to SharePoint Performance Tuning and Profiling

  1. Elvira Acosta says:

    Nice piece of information, I was finding something else when landed here anywazz can anyone here please tell me about StorageEdge? It is really worth using? Will I get benefits like “scalability” once I start using it? If you see their page they are promising many things but I am sure if it really works?

    Any help in this regard will be highly appreciated.


  2. gnana says:

    how to remove the instrumentation code from the DLL after the profiling ?

    • Stefan R. says:

      No way. You cannot remove injected instrumentation code from the dll. You have rather to reuse the preserved original libraries or recreate them using your development environment. Please note, that during istrumentation the original dlls are not deleted – check please your folder.

  3. gan says:

    Hi ,

    I am using VsPerfCMD.exe to to do sampling of processes other than W3WP.exe …..Tier information is not coming consistently …some times it is coming …somtimes it is not ……..Have you faced this issue ?

