There doesn't appear to be any noticeable real-world performance increase when going above 1600 MHz, especially when it comes to gaming. Even though bandwidth does increase, and synthetic benchmarks look prettier, real-world applications capitalize more on processing speed than on RAM speed.
I think that with the larger and faster CPU caches we're seeing, plus more advanced memory controllers, the MHz rating of RAM really isn't as big a factor anymore. This is why timing changes seem to have the bigger effect: the timings affect how quickly the RAM can respond to a request, as well as how quickly it communicates with the cache and CPU. Latency isn't the same thing as throughput.
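To put a rough number on that last point: actual (wall-clock) CAS latency is the CAS cycle count divided by the memory clock, and DDR transfers twice per clock. A quick back-of-the-envelope sketch (the `cas_ns` helper is just something I made up for illustration, and the CL9/CL11 pairings are typical retail kits, not anything specific):

```python
# Wall-clock CAS latency in nanoseconds.
# DDR transfers twice per clock, so clock MHz = transfer rate (MT/s) / 2.
def cas_ns(cl, mt_per_s):
    return cl / (mt_per_s / 2) * 1000  # cycles / MHz -> ns

print(cas_ns(9, 1600))   # DDR3-1600 CL9  -> 11.25 ns
print(cas_ns(11, 2133))  # DDR3-2133 CL11 -> about 10.3 ns
```

Notice the faster kit with looser timings ends up at roughly the same absolute latency, which is why cranking MHz alone doesn't buy you much in latency-sensitive workloads.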
If you look at Sandra's Cache and Memory benchmarks on Intel systems, Intel's memory controller appears to be about as advanced as it can be for current memory speeds. When you start increasing the speed and loosening the timings, the cache and memory speed factor numbers start going up (bad) instead of down.
AMD has been known to have a shoddier memory controller and is stingier with cache. In the same benchmarks, increasing the memory speed and loosening the timings does show an overall improvement. However, I don't think their systems are operating at full potential, so any speed increase is a bonus. It was only with this last Bulldozer launch that you actually saw nice memory speeds, though still nowhere near Intel's.
Think of it in network terms: you can have all the bandwidth (MHz) in the world, but if your ping (timings) is bad, your connection isn't going to feel any faster.
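The same bandwidth-vs-latency split shows up in code. A sequential scan streams through memory, so the prefetcher hides latency and bandwidth rules; a random pointer chase makes every load wait on the previous one, so latency rules. Here's a rough sketch (the `sattolo_cycle` helper is just my own name for the standard single-cycle shuffle; and since this is Python, interpreter overhead blunts the gap a lot compared to what you'd see in C):

```python
import random
import time

N = 1_000_000

# Bandwidth-friendly access: a sequential scan the prefetcher can stream.
data = list(range(N))
t0 = time.perf_counter()
total = sum(data)
seq_time = time.perf_counter() - t0

def sattolo_cycle(n):
    """Random permutation forming a single n-cycle (Sattolo's algorithm)."""
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)  # j < i forces one big cycle
        p[i], p[j] = p[j], p[i]
    return p

# Latency-bound access: each load depends on the previous one, so the CPU
# stalls for the full memory latency on every step -- timings territory.
chain = sattolo_cycle(N)
t0 = time.perf_counter()
i = 0
for _ in range(N):
    i = chain[i]
chase_time = time.perf_counter() - t0

print(f"sequential scan: {seq_time:.3f}s  pointer chase: {chase_time:.3f}s")
```

Run it yourself and the chase comes out slower per element, which is the "bad ping" half of the analogy in action.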
I could be way off, but that's my 2 cents.