Yeah, there is definitely an argument for using GPU acceleration. Of course it's more efficient; GPUs are purpose-built for the task. The one thing I really don't like is the instability: you never know if a new driver will break functionality. With the CPU, that's not an issue.
I have rarely seen a new graphics driver break functionality - except last year, when Intel migrated to the DCH driver architecture.
From my point of view, a much harder part is detecting and evaluating the capabilities of less-than-latest hardware.
Don't some GPUs have their own codecs hard-coded? I always thought QuickSync was that way - that it doesn't take instructions from ffmpeg. Am I wrong about that? I know there are other options through d3d11 and vaapi, but native QuickSync???
Codecs are usually neither hard-coded nor run by the hardware in an 'autonomous' way; rather, they are controlled by some software implementation (often called a "driver", even though it's more an addition to the real driver).
In the world of video en- and decoding (similar things apply to other areas), the computing requirements can be reduced to a rather small number of mathematical core operations that need to be executed over and over again, in many situations and variations.
Such operations are often called "primitives".
In many cases this is about leveraging the SIMD (single instruction, multiple data) capabilities of the hardware - we all know the x86 CPU extensions like MMX, SSE, etc.
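To make that concrete, here is a minimal sketch (the function name `sad16` is my own) of one such primitive: the sum of absolute differences (SAD) over a row of pixels, a core operation of motion estimation, which SSE2 collapses into essentially a single instruction:

```c
/* SAD over 16 pixels: a classic motion-estimation primitive.
   _mm_sad_epu8 computes it for two 8-byte halves in one instruction. */
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>

static uint32_t sad16(const uint8_t *a, const uint8_t *b)
{
    __m128i va  = _mm_loadu_si128((const __m128i *)a);
    __m128i vb  = _mm_loadu_si128((const __m128i *)b);
    __m128i sad = _mm_sad_epu8(va, vb);  /* two partial sums (low/high) */
    return (uint32_t)(_mm_cvtsi128_si32(sad) +   /* low 8-byte sum  */
                      _mm_extract_epi16(sad, 4)); /* high 8-byte sum */
}
```

A plain-C version of the same thing would need 16 subtractions, 16 absolute values and 15 additions - that's the kind of win SIMD primitives are about.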
But there are other 'primitive' yet somewhat more complex operations as well, for example the Discrete Cosine Transform (DCT), which is an elementary building block of many video coding standards.
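For illustration, here's a naive (deliberately unoptimized) 8-point DCT-II in C - 8 is the transform size used by e.g. MPEG-2 and JPEG. Real codecs replace this with fast, hardware-friendly variants, so take it as a sketch of the math, not of an actual implementation:

```c
/* Orthonormal 8-point DCT-II, computed directly from its definition:
   X[k] = s(k) * sum_n x[n] * cos(pi/8 * (n + 0.5) * k) */
#include <math.h>

void dct8(const double in[8], double out[8])
{
    for (int k = 0; k < 8; k++) {
        double sum = 0.0;
        for (int n = 0; n < 8; n++)
            sum += in[n] * cos(M_PI / 8.0 * (n + 0.5) * k);
        /* normalization: sqrt(1/8) for DC, sqrt(2/8) otherwise */
        out[k] = sum * (k == 0 ? sqrt(1.0 / 8.0) : sqrt(2.0 / 8.0));
    }
}
```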
Those 'primitive' operations are ultimately fulfilled by an actual hardware implementation - sometimes fixed, but sometimes the hardware even allows uploading primitive logic dynamically.
In the case of QuickSync, the actual higher-level implementation of the encoders and decoders lives in the 'libmfxhw64.dll' library, which is delivered as part of the graphics driver.
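Which is also why an application like ffmpeg never talks to the hardware directly. A minimal sketch using the Media SDK dispatcher API (assuming the SDK headers are available; error handling trimmed): the application opens a session, and the dispatcher loads the driver-supplied runtime - libmfxhw64.dll on 64-bit Windows - behind the scenes:

```c
/* Open a QuickSync session via the Media SDK dispatcher.
   MFXInit() is where the driver-supplied runtime gets loaded. */
#include <stdio.h>
#include <mfxvideo.h>

int main(void)
{
    mfxSession session;
    mfxVersion ver  = { {0, 1} };           /* request API >= 1.0 */
    mfxIMPL    impl = MFX_IMPL_HARDWARE_ANY;

    mfxStatus sts = MFXInit(impl, &ver, &session);
    if (sts != MFX_ERR_NONE) {
        printf("no usable QuickSync implementation: %d\n", sts);
        return 1;
    }
    MFXQueryIMPL(session, &impl);           /* which backend did we get? */
    MFXQueryVersion(session, &ver);
    printf("impl=0x%x, API %d.%d\n", (unsigned)impl, ver.Major, ver.Minor);
    MFXClose(session);
    return 0;
}
```

Checking what MFXQueryIMPL() and MFXQueryVersion() report back is also one practical way to detect whether you got a real hardware implementation (and which API level) on less-than-latest hardware, rather than the software fallback.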