There are currently only two effects that use the GPU at render time:
1) Shader effect - this is obvious because, by definition, it's a GL shader
2) Video effect - but only if hardware decoding is turned on; if hardware decoding is off, it doesn't use the GPU.
If a sequence doesn't use either of the above, a faster video card won't really improve render time. A couple of notes, though:
Hardware decoding is not available on Linux, OFF by default on Windows, but ON by default on OSX. The hardware decoding code is very complex and very platform-specific. It's now very stable on OSX (Apple has very good APIs for this) but still kind of hit or miss on Windows at this point.
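As a rough illustration only, those per-platform defaults could be expressed as a compile-time switch. This is a hedged sketch in C++ of the behavior described above; the function name is hypothetical, not the application's actual code:

    // Hypothetical sketch of the per-platform hardware decoding defaults;
    // the function name is illustrative, not the application's real API.
    bool DefaultHardwareDecoding()
    {
    #if defined(__APPLE__)
        return true;   // OSX: stable (Apple's APIs are good), so ON by default
    #elif defined(_WIN32)
        return false;  // Windows: available but hit or miss, so OFF by default
    #else
        return false;  // Linux: hardware decoding not available, always OFF
    #endif
    }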
Shader effect - on Linux and Windows, all the shaders run on the main thread. Thus, if you have a lot of effects using shaders, rendering those effects ends up being effectively single-threaded, so it won't push the GPU that hard: one shader at a time. On OSX, the shader work stays on the background threads, which allows simultaneous rendering. Thus, OSX will definitely be able to use more of the GPU, as it can send a bunch of shaders to the GPU at once. Currently, no attempt is made to limit how much is sent; that hasn't been needed yet.
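To make the threading difference concrete, here is a minimal hedged sketch of the two dispatch models, assuming a shader job is just a callable that issues the GL draw calls for one effect. The names and the explicit mutex are illustrative stand-ins, not the real implementation:

    #include <functional>
    #include <mutex>

    // "ShaderJob" is hypothetical: a callable that runs one shader effect.
    using ShaderJob = std::function<void()>;

    // Stands in for "only one thread may use the main GL context at a time".
    static std::mutex g_mainContextMutex;

    void RenderShader(const ShaderJob& job)
    {
    #if defined(__APPLE__)
        // OSX: each render thread keeps its own GL context, so jobs from
        // different threads reach the GPU simultaneously.
        job();
    #else
        // Linux/Windows: all shaders funnel through the main thread's
        // context, so render threads take turns - one shader at a time.
        std::lock_guard<std::mutex> lock(g_mainContextMutex);
        job();
    #endif
    }

The mutex here is just a stand-in for the real constraint: a GL context can only be current on one thread, so on Linux and Windows the serialization is forced by the main-thread context itself rather than by an explicit lock.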