What are Hard Faults per Second?
Anyone who has looked at the Task Manager and clicked through to the Resource Monitor has seen "Hard Faults per Second", one of the key memory metrics shown there. I've never been too familiar with this term, so I finally decided to research what it really means (the name suggests that the fewer you have, the better).
What Windows now labels a hard fault was previously presented as a page fault. A hard fault occurs when a memory address a program needs is no longer in main memory and Windows has to fetch it from the hard drive rather than from RAM. (A page fault that can be resolved without touching the disk is called a soft fault and is much cheaper.) Hard faults are normal: computers routinely use more memory than is physically available, and the OS swaps data out to disk based on how recently or frequently it is accessed. So a hard fault is simply the event in which the OS has to go to secondary storage to retrieve a page, which obviously carries a much higher performance cost than reading it from main memory.
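To make the hard/soft distinction concrete, here is a minimal sketch using Python's standard `resource` module, which on Unix-like systems exposes both counters for the current process (`ru_majflt` is the hard-fault count, `ru_minflt` the soft-fault count). Windows doesn't ship this module, so treat it as an illustration of the concept rather than a Windows tool; the function name `fault_counts` is my own.

```python
import resource

def fault_counts():
    """Return (soft_faults, hard_faults) for the current process.

    ru_minflt counts soft (minor) faults, resolved without disk I/O;
    ru_majflt counts hard (major) faults, which required reading the
    page back from disk. Unix-only; Windows shows the same idea as
    "Hard Faults/sec" in Resource Monitor.
    """
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_minflt, usage.ru_majflt

soft, hard = fault_counts()
print(f"soft faults: {soft}, hard faults: {hard}")
```

On a machine with plenty of free RAM you will typically see the soft-fault count climb as a process touches new memory while the hard-fault count stays at or near zero.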
I have no idea at what point the number of hard faults starts being a performance concern. Obviously, if you are after good performance you want to stay as close to 0 hard faults per second as possible, which is usually the case when you have a decent percentage of RAM still available. You can either limit the amount of RAM used by applications that allow it or physically add more. As one would suspect, a computer with plenty of RAM will generally exhibit few or no hard faults.

One way to gauge the performance impact these faults are having on your computer is to open the Resource Monitor, look at the disk section, and check how much activity the page file is generating. If it is consistently among your most read and written files, then you would likely see a considerable benefit from upgrading your RAM. In my case, because my applications make heavy use of the hard drive, I strive to keep enough RAM free to avoid hard faults.
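The "per second" part of the metric is just the counter sampled twice over an interval. The sketch below shows that calculation for the current process, again using the Unix-only `resource` module as a stand-in for what Resource Monitor does system-wide on Windows; the function name and the sampling interval are my own choices.

```python
import resource
import time

def hard_faults_per_second(interval=1.0):
    """Sample the hard-fault counter twice and return the rate.

    A rate that stays near zero is what you want; a persistently
    non-zero rate while RAM is nearly full suggests the machine
    is paging to disk.
    """
    before = resource.getrusage(resource.RUSAGE_SELF).ru_majflt
    time.sleep(interval)
    after = resource.getrusage(resource.RUSAGE_SELF).ru_majflt
    return (after - before) / interval

print(hard_faults_per_second(0.5))
```

Since the counters only ever increase, the computed rate is never negative; on an idle, well-provisioned process it will almost always print 0.0.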