This blog is faster than Stack Overflow.
The ASP.NET MVC website, designed and maintained by some of the best developers in the world, has been beaten by this home-grown blog engine running on a $10/month shared hosting provider.
I don’t mean to gloat. I just mean my web app pwnz yours!!1
I kid, I kid. As a collection of web performance best practices, Google’s Page Speed is a pretty good measure of how fast a website performs from the client point of view. Naturally, several Page Speed recommendations can come at the cost of server-side performance (or just development overhead), so maxing out your score can quickly become an effort of diminishing returns.
A score in the 90s takes a bit more work, and frankly it may be a better choice to work on server-side performance or useful features depending on your situation. My effort to score as high as 94 was for little more than to see if I could get a new high score in this little game of web performance.
The following is an assortment of techniques for improving your page speed ranking.
I’m not going to go into this in detail. There are many solutions and implementation examples you can find with a few Google searches. What I do want to do is highlight Justin Etheredge’s SquishIt utility. It’s small, simple, slick and easy.
For instance, take all the CSS files on your site (or page) and add them to a bundle like so:
<%= Bundle.Css()
        .Add("~/css/reset.css")
        .Add("~/css/text.css")
        .Add("~/css/960.css")
        .Render("~/css/combined_#.css") %>
And out comes something like this:
<link rel="stylesheet" type="text/css" href="/css/combined_55A2DED9A14F8B269A584B0E56382BE4.css" />
One CSS file, whitespace minified. Would you like fries with that? If you’re wondering what the ugly mess of characters in the name is for – that’s the hash of the file contents. It’s necessary so when you change one of the source files, SquishIt knows it needs to create a new bundle (and your browser knows not to use the cached old version it may have).
Check out SquishIt on Justin’s blog for more details.
This was actually new to me. There are several tools available that can perform lossless compression on JPEG and PNG files with no effect on image quality. Google recommends a few tools here.
I downloaded all my Windows Live Writer generated PNG files and ran them through PUNIG, a .NET GUI frontend for OptiPNG. A minute later I had 5MB of images reduced to 4.5MB or so. That’s not amazing, and frankly it’s not worth it to me and this blog, but things like this can add up in bandwidth costs. 10% reduced payload, for free? Something to consider.
This can be done programmatically, but if you have access to IIS 7’s admin console (or want to create a web.config) you can easily turn this on for certain directories.
Select the desired site or directory and open the HTTP Response Headers feature
Click “Set Common Headers…” in the Actions pane
Check “Expire Web content” and configure the expiration as desired.
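If you’d rather not click through the admin console, the same setting can be expressed as a web.config fragment. This is a sketch (the 30-day max-age is just an example value, adjust to taste):

```xml
<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Send a Cache-Control: max-age header for static files in this directory -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="30.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```

Drop this in the directory containing your static resources (images, CSS, scripts) and IIS 7 applies it to everything below.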
Resources with a “?” in the URL are not cached by some proxy caching servers, so if you serve up files like I do on this blog (eg. http://blog.kurtschindler.net/legacy_img/image_84.png) they may not be cacheable by some proxy servers.
This is something I’m tempted to do because it’s not too hard – even if I wanted to retain the functionality of my image handler, I could just do some URL rewriting to appease Google and any caching server. But you know what? I ain’t here to be perfect. I’m just not certain this is such a big deal. Frankly, I’d say the proxy servers that don’t honor query strings like this are the ones that need fixing. This is perfectly normal and typical HTTP behavior.
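For the curious, a rule for the IIS URL Rewrite module could map extension-style URLs onto an existing handler. This is only an illustrative sketch; the image.axd handler name and its id parameter are hypothetical stand-ins, not my actual implementation:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- Rewrite /legacy_img/image_84.png to a (hypothetical) image.axd?id=84 -->
      <rule name="Legacy image URLs">
        <match url="^legacy_img/image_(\d+)\.png$" />
        <action type="Rewrite" url="image.axd?id={R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The visitor (and any proxy) sees a plain .png URL with no query string, while the handler still does the work server-side.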
As I mentioned, not every page speed rule will be worth it.
Still, who actually does this? I’ll put this in the “no thanks” category.
This may be the most contradictory rule. Yes, requiring a browser to download a bunch of CSS styles that aren’t even used by the page is a waste. But you also want to serve as few total CSS files as possible so the browser likely has them in cache.
10 pages with 10 small, distinct CSS files, or 10 pages with 1 big CSS file? If you expect your users to hit several pages, you are probably better off forcing them to download 1 big CSS file just once – especially once you consider the development overhead of the former option.
Use Google Page Speed to evaluate how well you are taking advantage of client-side performance techniques. While a higher score does imply your site is speedier, there are trade-offs you may have to consider.
You could be inadvertently exposing your visitors to unfriendly 404 and 403 errors when updating your ASP.NET MVC application, even after deploying an app_offline.htm file!
The ability to take an ASP.NET application temporarily offline by uploading a file named app_offline.htm was a little known, undocumented feature of ASP.NET 2.0 until popularized by Scott Gu’s post about it back in 2005. How it works is simple, to quote Scott:
[If you place a file named app_offline.htm] in the root of a web application directory, ASP.NET 2.0 will shut-down the application, unload the application domain from the server, and stop processing any new incoming requests for that application. ASP.NET will also then respond to all requests for dynamic pages in the application by sending back the content of the app_offline.htm file (for example: you might want to have a “site under construction” or “down for maintenance” message).
Interestingly, this is the second most popular “hidden feature of ASP.NET” on Stack Overflow.
This technique works for traditional ASP.NET 2.0 web forms because typically every url on a site corresponds to an .aspx file. The application gets unloaded and the request for such a dynamic page results in the serving of a friendly “down for maintenance” type of message to any visitor accessing virtually any page of the site.
Try this with an ASP.NET MVC application though, and consider the potential horror:
(Note the 403 error occurs on the application root (“blog” in this case) and the 404 occurs on any MVC Controller Action “page.”)
Don’t believe me? Go ahead and drop an app_offline.htm file into your application root and… wait… what errors? Everything appears to be working as expected…
Everything seems fine.
I’ll get to this in just a minute, but let’s consider a different scenario where we’re doing a major deployment and perhaps deleting everything including the web.config in the application’s directory first, leaving us with just a lonely app_offline.htm file. The result, alas:
Let’s consider the latter scenario where we have an app_offline.htm file and nothing else in our application directory. Though this tells IIS to unload the application and serve up this file in place of dynamic resources such as .aspx pages, it doesn’t change the way it handles other resource requests, and without certain configurations in IIS and/or a web.config file (when present) your visitors may still see 404 or 403 errors.
If you don’t have “app_offline.htm” listed in the IIS Default Document configuration (and you have directory browsing turned off) you’ll get a 403 error when trying to access the root of a site. IIS simply tries to find the first match for a default document to serve up, and if it doesn’t find one it will return the 403.
You could simply add “app_offline.htm” to the default document configuration to get around this error. IIS will find it, and serve up your friendly offline page. Alternatively you could add any .aspx page (such as default.aspx – an empty file is fine) that is configured as a possible default document. This will clue IIS in that an ASP.NET resource is being requested and cause it to serve up the app_offline.htm instead.
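If you prefer configuration over the admin console, a web.config fragment can add app_offline.htm to the default document list. A sketch, applied at the site or application level:

```xml
<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <defaultDocument>
      <files>
        <!-- Serve app_offline.htm when the directory root is requested -->
        <add value="app_offline.htm" />
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>
```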
When you request a dynamic “MVC resource,” such as a Controller Action method with its inherent friendly url (eg. http://kurtschindler.net/blog/post/some-post-here) you’ll get a 404. Think about the consequences here. Even if your application root was showing a friendly “down for maintenance” page, many other potential visitors may be accessing other pages directly and being fooled into thinking they no longer exist!
Again, IIS knows not, and assumes you are requesting a default document in a directory that matches the url specified (eg. it’s looking for a default document under [root]\blog\post\some-post-here). Of course, no such directory or document exists. 404.
In both scenarios, IIS has no clue ASP.NET is involved and thus completely ignores the presence of the app_offline.htm file. The solution is to set runAllManagedModulesForAllRequests so IIS invokes ASP.NET, which then serves up the app_offline.htm page.
Add a web.config along with your app_offline.htm with just the bare minimum required setting:
<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>
</configuration>
Now request any resource on your site again and the app_offline.htm takes over as expected!
This web.config is required to tell IIS to enable all managed (ASP.NET) modules to run for all requests and therefore serve up the app_offline.htm for all requests.
In a lot of cases, you’ll likely be taking your site offline while retaining (or copying over instantly) your site’s web.config, and you probably already have runAllManagedModulesForAllRequests set to true or other modules present which dictate that ASP.NET should handle the request. You’ll be error free in this case, but if you ever remove the web.config during the outage you’re in trouble.
The app_offline.htm feature of ASP.NET is not the be all, end all, magical way to turn your entire application off in a friendly way for MVC applications. Because a lot of requests for an MVC application go to resources that first appear not to be managed by ASP.NET (friendly, extension-less urls), it’s vital that IIS be told to invoke ASP.NET for all possible requests, and this requires the presence of a web.config file stating as such.
If you are going to remove your application’s web.config file during site maintenance, you must also include a temporary web.config that specifies runAllManagedModulesForAllRequests="true" along with the app_offline.htm file.
This article will demonstrate the steps required to install and configure ELMAH for ASP.NET applications running on Windows 2008/IIS7 in Integrated mode. I have tested this process and have focused on installing it within the DiscountASP.NET IIS7 shared-hosting platform, but these instructions apply to any default IIS7 server in Integrated mode.
I’m going to roughly follow the ASP.NET MVC instructions to configure an ELMAH installation that will utilize 3 of the most common features, below.
Please note that MVC no longer requires additional setup (routing configuration) as mentioned in the official instructions – they were written back when MVC was in beta and didn’t ignore routes to .axd files. These instructions apply equally to an ASP.NET web forms application.
The Web.Config will need to have an ELMAH sectionGroup added, ELMAH’s custom config section, and entries in the <handlers> and <modules> sections of <system.webServer>.
1. Add the following ELMAH section group, including the security, errorLog, and errorMail sub-sections:
<sectionGroup name="elmah">
  <section name="security" type="Elmah.SecuritySectionHandler, Elmah" />
  <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah" />
  <section name="errorMail" requirePermission="false" type="Elmah.ErrorMailSectionHandler, Elmah" />
</sectionGroup>
2. Add this to the <handlers> section within <system.webServer>:
<add name="Elmah" verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah"/>
3. Add this to the <modules> section within <system.webServer>:
<add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
<add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" />
4. Add a section to configure ELMAH, somewhere outside of system.web. Here I am configuring the error logger by specifying a log path of ~/App_Data/Errors, the error emailer by defining some email attributes, and the security module by specifying that I want to allow remote access.
<elmah>
  <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data/Errors" />
  <errorMail subject="elmah error" to="email@example.com" from="firstname.lastname@example.org" />
  <security allowRemoteAccess="yes" />
</elmah>
Make sure your logPath directory structure exists in your application. ELMAH will throw an exception if it doesn’t exist.
Also, make sure you’ve properly set up your <mailSettings> Web.Config section, as ELMAH will use this to determine your smtp host, delivery method, etc.
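For reference, a minimal <mailSettings> section looks something like this. The host, port, and from address are placeholders to substitute with your own values:

```xml
<system.net>
  <mailSettings>
    <!-- ELMAH's error mailer picks up the SMTP settings from here -->
    <smtp deliveryMethod="Network" from="firstname.lastname@example.org">
      <network host="localhost" port="25" />
    </smtp>
  </mailSettings>
</system.net>
```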
Configuration is complete for a basic, non-secure ELMAH installation. I manually caused an exception for testing purposes, and then loaded up /elmah.axd in the browser:
The last thing that needs to be done is secure the elmah.axd handler – at this point, any anonymous user on your site could type in the url to it and see all of your errors, potentially gaining access to sensitive information about your application.
ELMAH has its own basic security by default. If I hadn’t configured the security module, or had left allowRemoteAccess="false", then by default elmah.axd could only be viewed from the local host. This of course is not an option when on shared hosting, so we have to expose ELMAH to remote users and secure it another way: using the built-in ASP.NET authorization.
Adding the following entry to the Web.Config is the final step. Here I am denying access to any anonymous user.
<location path="elmah.axd">
  <system.web>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</location>
Now all you have to do is generate a few exceptions and confirm that the logging, emailing and security are working correctly!
I finally bit the bullet and migrated my horribly old and un-cool IIS6 account (including this blog) to Windows 2008/IIS7 here on DiscountASP.NET. The process was mostly painless, but not glitch-free.
It took me a while to find it, but dASP does have a (theoretically) completely automated migration tool. You no longer have to submit a support ticket to be migrated as it was back in the day:
After reading a few disclaimers and confirming that I was aware of about 12 different breaking changes in dASP’s Windows 2008/IIS7 implementation, I was off and migrating!
And 10 minutes later…
I wasn’t told exactly what went wrong, but the takeaway here is you really need to be prepared for a larger than anticipated outage if dASP’s migration tool fails or you have other complications configuring your applications. From start to finish, including contacting support a couple times, my site was down for over 24 hours before all was said and done.
The good news is that once the migration had finished, there were just a few quick things I had to take care of, excluding application-specific configuration issues:
In the case of the last bullet, I’m not sure whether the IIS default document order was changed or if the migration process actually added an index.html file to my web root, but my root was pointing to the dASP starter html page and not my application’s default.aspx page after migration. No big deal, quick fix.
I actually avoided the upgrade primarily because I was worried BlogEngine.NET wasn’t going to play friendly with IIS7, but it appears I was wrong. My blog seems to be working right out of the box without any changes at all!
The official BlogEngine.NET install instructions do have a few comments on IIS7 and resolutions to possible problems – check them out if you are having problems getting it to run on IIS7.