Okay, I'll post a quick explanation of this, as I've written several long essays on it in the past and can't remember exactly what I've said ... the last thing I want is someone saying "You said this now, but you said that on {insert some really old date here}!!" ;)
Here goes: the PSX GPU lacks a z-buffer (the N64 does have one, afaik). Because it lacks a z-buffer, no depth information is passed to the GPU with the primitives that are sent for rendering. And because no depth information is sent with the primitives (i.e. only x & y values are sent), there is no way to do perspective-correct texturing, which requires *correct* depth values for each vertex in the primitive so the 1/z interpolation can be performed.
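To make the 1/z point concrete, here's a minimal sketch (mine, not how any particular emulator does it) contrasting affine interpolation, which is all the PSX hardware can do, with perspective-correct interpolation of a texture coordinate along a span. The Vertex struct and both functions are made up for illustration:

[code]
#include <stdio.h>

typedef struct { float x, z, u; } Vertex; /* screen x, depth, texture u */

/* Affine (PSX-style): interpolate u linearly in screen space. */
float affine_u(Vertex a, Vertex b, float t)
{
    return a.u + t * (b.u - a.u);
}

/* Perspective-correct: interpolate u/z and 1/z linearly, then divide.
 * This is exactly why the GPU would need a depth value per vertex. */
float perspective_u(Vertex a, Vertex b, float t)
{
    float inv_z    = 1.0f / a.z + t * (1.0f / b.z - 1.0f / a.z);
    float u_over_z = a.u / a.z  + t * (b.u / b.z  - a.u / a.z);
    return u_over_z / inv_z;
}

int main(void)
{
    Vertex a = {   0.0f,  1.0f, 0.0f };  /* near vertex */
    Vertex b = { 100.0f, 10.0f, 1.0f };  /* far vertex  */

    /* Halfway across the span in screen space the two disagree badly: */
    printf("affine:      %f\n", affine_u(a, b, 0.5f));      /* 0.500000  */
    printf("perspective: %f\n", perspective_u(a, b, 0.5f)); /* ~0.090909 */
    return 0;
}
[/code]

The gap between those two numbers is the "swimming" you see: the affine value is what the PSX draws, the perspective one is where the texel should actually be.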
Bilinear filtering does not require any more information than the PSX GPU already gets passed, so it can be enabled. *BUT*, bilinear filtering does require careful layout of textures in VRAM, which most PSX games do *not* do. This means there are glitches when enabling bilinear filtering, such as black halos around sprites, where the filter blends in the chroma-key colour.
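For the curious, a bilinear fetch is just a weighted average of a 2x2 block of texels, which is also where the halos come from: if a neighbouring texel holds the chroma-key colour (black), it gets blended into the result. A rough sketch, assuming u, v >= 0; tex_fetch() is an assumed helper, not any real API:

[code]
typedef struct { float r, g, b; } Color;

/* Assumed helper: returns one texel, with coordinates clamped. */
extern Color tex_fetch(int x, int y);

static float lerp(float a, float b, float t) { return a + t * (b - a); }

Color bilinear(float u, float v)
{
    int   x0 = (int)u,        y0 = (int)v;
    float fx = u - (float)x0, fy = v - (float)y0;

    /* The three neighbours are where chroma-key black bleeds in. */
    Color c00 = tex_fetch(x0, y0),     c10 = tex_fetch(x0 + 1, y0);
    Color c01 = tex_fetch(x0, y0 + 1), c11 = tex_fetch(x0 + 1, y0 + 1);

    Color top = { lerp(c00.r, c10.r, fx), lerp(c00.g, c10.g, fx), lerp(c00.b, c10.b, fx) };
    Color bot = { lerp(c01.r, c11.r, fx), lerp(c01.g, c11.g, fx), lerp(c01.b, c11.b, fx) };
    Color out = { lerp(top.r, bot.r, fy), lerp(top.g, bot.g, fy), lerp(top.b, bot.b, fy) };
    return out;
}
[/code]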
Perspective-correct texturing would stop texture "swimming" or "warping", but it would depend on the PSX game doing perspective-correct clipping at the edges of the view-port. If the game just does simple clipping, and does not take the 1/z interpolation into account when clipping primitives, you will still get texture swimming at the edge of the view-port, particularly when the camera is slowly panning, etc.
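Again just a sketch (same made-up Vertex as above) of the difference: a "simple" clipper computes the new vertex's texture coordinate linearly, a perspective-aware one interpolates u/z and 1/z. The mismatch between a linearly clipped vertex and perspective-correct interpolation inside the primitive is exactly what swims at the screen border:

[code]
typedef struct { float x, z, u; } Vertex; /* screen x, depth, texture u */

/* Clip edge (a -> b) against the left view-port edge x = 0. */
Vertex clip_left_simple(Vertex a, Vertex b)
{
    float t = (0.0f - a.x) / (b.x - a.x);  /* where the edge crosses x = 0 */
    Vertex v = { 0.0f,
                 a.z + t * (b.z - a.z),    /* linear z: wrong under perspective */
                 a.u + t * (b.u - a.u) };  /* linear u: wrong under perspective */
    return v;
}

Vertex clip_left_correct(Vertex a, Vertex b)
{
    float t  = (0.0f - a.x) / (b.x - a.x);
    float iz = 1.0f / a.z + t * (1.0f / b.z - 1.0f / a.z); /* 1/z is linear in screen space */
    float uz = a.u / a.z  + t * (b.u / b.z  - a.u / a.z);  /* and so is u/z */
    Vertex v = { 0.0f, 1.0f / iz, uz / iz };
    return v;
}
[/code]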
Perspective-correct texturing would not really help "wobbling" polys: that needs sub-pixel accuracy in the vertex data (the PSX has no sub-pixel accuracy, so everything is rounded to integer values). This is also why small objects do not scale very well even when running an emulator at 1600x1200: one step on the PSX screen of 320x240 becomes a jump of 5 pixels on the emulator screen! If the PSX had sixteen levels of sub-pixel precision in its vertex data, each step would be only 5/16 of a pixel, so the emulated screen would still be sub-pixel correct.
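And a last toy example of the rounding problem. The numbers assume a 320x240 game upscaled 5x to 1600x1200, and the 4-bit sub-pixel format is hypothetical (the real PSX has none):

[code]
#include <stdio.h>

#define SCALE       5   /* 1600 / 320 = 1200 / 240 */
#define SUBPIX_ONE 16   /* sixteen hypothetical sub-pixel levels */

int main(void)
{
    float x = 10.3f;  /* "true" vertex position in PSX screen units */

    int snapped = (int)(x + 0.5f);               /* PSX: integers only */
    int fixed16 = (int)(x * SUBPIX_ONE + 0.5f);  /* 28.4-style fixed point */

    /* Integer vertices can only land on every 5th output pixel... */
    printf("integer:   output pixel %d\n", snapped * SCALE);    /* 50 */
    /* ...sub-pixel ones can land on steps of 5/16 of an output pixel. */
    printf("sub-pixel: output pixel %.4f\n",
           (float)fixed16 * SCALE / (float)SUBPIX_ONE);         /* 51.5625 */
    return 0;
}
[/code]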