Quote:
Again, some of these details are guesswork, as I don't have actual benchmarks/tests (or detailed specs) to go on, but given the general system bandwidth and typical GPU designs of the time, it's pretty likely that perspective correction was "free" (as in little/no impact on fillrate), while bilinear filtering probably slowed things down a fair bit (and consumed much more bandwidth); the impact of Z-buffering and alpha blending is less clear.
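To put rough numbers on the bandwidth point: point sampling needs one texel fetch per pixel, while bilinear filtering needs four (the 2x2 neighborhood it blends). A back-of-the-envelope sketch, with the resolution, frame rate, and 16-bit texel size all assumed for illustration (and ignoring texture caching, which would cut the real figures considerably):

```python
# Rough worst-case texel traffic per second; all parameters are
# illustrative assumptions, not measured N64 figures.
def texel_traffic(width, height, fps, bytes_per_texel, texels_per_pixel):
    """Bytes of texture reads per second, assuming every screen
    pixel is textured exactly once (no overdraw, no caching)."""
    return width * height * fps * bytes_per_texel * texels_per_pixel

point    = texel_traffic(320, 240, 30, 2, 1)  # nearest-neighbor: 1 fetch/pixel
bilinear = texel_traffic(320, 240, 30, 2, 4)  # bilinear: 2x2 = 4 fetches/pixel
print(point, bilinear)  # ~4.6 MB/s vs ~18.4 MB/s under these assumptions
```

So even in this crude model, turning on filtering quadruples texture-read traffic, which is why an on-chip texture cache matters so much.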
In fact, the only remotely definitive information on what "polygon accuracy" refers to concerns the default T&L mechanism used for "fast3D," i.e. the 3D geometric transformation processing along with the light-sourcing computation and logic handled by the RSP's vector unit, which has absolutely NOTHING to do with the sort of detail-feature issues you've been mentioning in terms of "accuracy." (It may well have been an issue of T&L computation being done at excessively high precision on the RSP's vector unit, and not an RDP triangle-setup or fillrate-related bottleneck at all.)
--If T&L overhead was really the main bottleneck across the board, and the RDP is indeed quite fast for most/all features (more like the Voodoo), that's even more ridiculous: it implies there'd have been no reason to drop features like texture filtering, while still keeping a huge fillrate advantage over the PSX or Saturn. (But in actuality, there was probably at least SOME added overhead and fillrate/bus-bandwidth cost for filtering, Z-buffering, and certainly alpha blending -and Z-buffering uses more RAM too.)
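For a sense of that RAM cost: a Z-buffer stores one depth value per screen pixel, plus it adds read-modify-write traffic on every drawn pixel. A hypothetical sketch (the 320x240 resolution and 16-bit depth entries are assumptions for illustration; the N64's actual depth format differs in detail, so treat these as ballpark figures):

```python
# Fixed memory cost of a Z-buffer: one depth entry per pixel.
# 16-bit entries and 320x240 are assumptions for illustration.
def zbuffer_bytes(width, height, bytes_per_entry=2):
    return width * height * bytes_per_entry

size = zbuffer_bytes(320, 240)  # 153600 bytes = 150 KB of extra RAM
# Per drawn pixel, Z-buffering also adds a depth read (and a write
# when the depth test passes), on top of the color write -- so it
# roughly doubles per-pixel framebuffer traffic in the worst case.
per_pixel_extra = 2 + 2         # depth read + depth write, in bytes
print(size // 1024, per_pixel_extra)
```

Out of a 4 MB system, 150 KB is modest, but the doubled per-pixel memory traffic is exactly the kind of cost that eats into effective fillrate on a shared bus.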
Oh, and on another note, I should say a few more things about the programming interface:
1. The API itself was pretty good, powerful, and flexible (and OpenGL-like), and many of the fillrate-crippling features could be disabled or enabled as desired (though things like dithering were either forced in 16-bit mode or only adjustable per-frame rather than per-polygon -as with other GPUs of that design).
2. It was more the "driver" end of things than the API itself that was the problem: certain features weren't programmable as such and were implemented in inefficient ways. And since custom drivers (let alone custom APIs or direct low-level programming) weren't possible or practical for the most part, most developers were "stuck" with the in-house drivers (and completely at the mercy of whatever updates/changes/improvements SGI/Nintendo chose to make).
3. The combination of driver/microcode performance and programmability issues and the use of carts meant that, even with the existing "performance boosting" possibilities, there were too many other bottlenecks to make them worth bothering with in many cases. I.e., if fillrate wasn't going to be a huge bottleneck, then using lower-quality texture rendering (or software Z-sorting) wouldn't make much difference. Plus, even with the polygon-budget and texture-rendering issues addressed, ROM space constraints would tend to force lower-res textures anyway, making unfiltered textures not worth the fillrate trade-offs. (Albeit you'd at least have more potential for games using a limited variety of much higher quality textures.)
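The ROM-space pressure on texture resolution is easy to quantify: uncompressed texture size grows with the square of resolution, so even a modest bump eats cart space quickly. A sketch with hypothetical sizes (16-bit texels and an 8 MB cart are assumptions for illustration; real games used compression and mixed texel formats):

```python
# Uncompressed texture footprint; bit depth and cart size are
# illustrative assumptions, not figures from any specific game.
def texture_bytes(width, height, bits_per_texel=16):
    return width * height * bits_per_texel // 8

cart  = 8 * 1024 * 1024          # an assumed 8 MB (64 Mbit) cart
small = texture_bytes(32, 32)    # 2 KB per texture
large = texture_bytes(128, 128)  # 32 KB -- 16x the space for 4x the linear resolution
print(cart // small, cart // large)  # how many of each fit, ignoring everything else
```

Quadrupling linear texture resolution cuts the number of textures a cart can hold by a factor of 16, which is the trade-off behind the "limited variety of much higher quality textures" point above.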