This kind of analysis is still based on faulty arithmetic. A frame rate is the reciprocal of a frame time, not a linear measure of work, so if you add or subtract frame rates directly you get a false picture of differences and improvements.
Getting from 35 fps to 60 fps is not a 25 fps (about 71%) improvement. It's going from 1/35 to 1/60 of a second per frame, roughly 28.6 ms down to 16.7 ms, which is about a 42% reduction. So the per-frame rendering work has to be cut by about 40 percent to make this happen.
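If you want to sanity-check that, here's a minimal sketch (plain C++, nothing game-specific) that does the comparison both ways:

```cpp
#include <cstdio>

int main() {
    const double old_fps = 35.0, new_fps = 60.0;

    // Naive comparison in frame rates: (60 - 35) / 35, about 71%.
    double fps_gain = (new_fps - old_fps) / old_fps;

    // The meaningful comparison is in frame times (seconds per frame):
    // 1/35 s down to 1/60 s, a ~42% reduction in per-frame work.
    double old_ft = 1.0 / old_fps, new_ft = 1.0 / new_fps;
    double work_cut = (old_ft - new_ft) / old_ft;

    std::printf("fps gain: %.0f%%, frame-time cut needed: %.0f%%\n",
                fps_gain * 100.0, work_cut * 100.0);
    return 0;
}
```

It prints a 71% fps gain next to a ~42% frame-time cut, and the frame-time number is the one that actually measures the work.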
By my calculation, TW2 needs about 200-250 instructions to render a pixel. A 40% decrease means 80-100 of those instructions have to go away (or get spread across more parallel processors). That's a huge challenge for the programmers. Part of the answer is writing better code; the rest is taking advantage of newer technology such as multithreaded rendering.
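Applying the same arithmetic to that per-pixel budget (again, 200-250 is my rough estimate, not a measured figure):

```cpp
#include <cstdio>

int main() {
    // Rough estimate of TW2's per-pixel instruction count, not a measured number.
    const int lo = 200, hi = 250;
    const double cut = 0.40;  // frame-time reduction needed for 35 -> 60 fps

    std::printf("instructions to eliminate or parallelize: %.0f-%.0f per pixel\n",
                lo * cut, hi * cut);
    return 0;
}
```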
Mantle, the latest revisions of DirectX 11 (I know Windows 8.1 is unpopular for other good reasons, but its graphics stack is much improved), and recent OpenGL versions all support this. I'm sure the console APIs do too, though they aren't publicly documented: those eight cores have to be good for something, and in a console, multithreaded rendering is about the best use for them. That's why these APIs are of such great and immediate interest.
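Whatever the API details, the pattern they enable is the same: record command lists on several worker threads, then submit them in order from one thread. Here's a toy sketch of that pattern; CommandList and record_slice are illustrative stand-ins, not any real API:

```cpp
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Stand-in for an API command list (a D3D11 deferred context's command
// list, a Mantle command buffer, etc.); not a real API type.
struct CommandList {
    std::string label;
};

// Each worker thread records the draw commands for one slice of the scene.
CommandList record_slice(int slice) {
    return CommandList{"slice " + std::to_string(slice)};
}

int main() {
    const int num_threads = 8;  // e.g. one worker per console core
    std::vector<CommandList> lists(num_threads);
    std::vector<std::thread> workers;

    // Recording is the CPU-heavy part, and it parallelizes across cores.
    for (int i = 0; i < num_threads; ++i)
        workers.emplace_back([&lists, i] { lists[i] = record_slice(i); });
    for (auto& w : workers) w.join();

    // Submission stays on one thread, in a fixed order, so the GPU sees
    // a deterministic command stream regardless of recording order.
    for (const auto& cl : lists)
        std::printf("submit %s\n", cl.label.c_str());
    return 0;
}
```

The point of the split is that only the recording scales with scene complexity; the serial submission step stays cheap, which is exactly what you want on a machine with many slow cores.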