So dramatically improved color/digital audio/3D capabilities aren't worthwhile gimmicks? (albeit the Sega CD skewed things, and the 32X wasn't nearly as cost-effective as it could have been -more hardware acceleration, less CPU grunt, and a better-thought-out architecture could have made a big difference)
Then there's the fact that 1. it was an add-on only, without a "Neptune", 2. the Saturn conflict killed it (where the CD already complicated matters), and 3. the nature of the market was different at the time, and the competition met it differently as well:
if the PS3 in 2006 had had the same clean hardware design, cost/performance, and follow-through with software support as the PSX, the Wii wouldn't have had a chance. (it might have been forced into a budget/specialized niche)
Actually, it adds more than 2x the RAM: it replaces the 16 MB general-purpose/"buffer" DRAM with 64 MB of GDDR3, while the 24 MB of 1T-SRAM and 3 MB of video RAM stayed the same. (still 88 vs 43 MB -see the breakdown below)
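To spell out the math (assuming the 88 figure simply leaves the 3 MB of embedded video memory out of the Wii total, since counting it would give 91):

GC:  24 MB 1T-SRAM + 16 MB DRAM + 3 MB embedded = 43 MB
Wii: 24 MB 1T-SRAM + 64 MB GDDR3                = 88 MB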
The CPU and GPU are only 50% faster, though there's also the added 243 MHz ARM926EJ coprocessor.
The closest thing to the Wii is the Game Boy Color, in the sense of being a straightforward, directly "boosted" successor -except the GBC added proportionally more. (if the Wii had had fully double the GC's clock speeds, plus some other tweaks -especially to video RAM and the GPU- it would have been closer to the GBC-vs-GB comparison) The Wii should also be capable of HD resolutions (not sure if the GC's GPU allowed it, but unless they physically locked the framebuffer size and digital video output, there's no obvious reason it couldn't either -though see the framebuffer math below), but HD resolutions and "HD gaming" are not really the same thing as far as the mass market is concerned. (given PC games from 10+ years ago were HD in resolution)
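For what it's worth, some quick framebuffer math suggests the embedded framebuffer would actually be the sticking point (assuming the Wii kept Flipper's roughly 2 MB embedded framebuffer, which it reportedly did -treat this as back-of-envelope):

640 x 480 x 3 bytes  = ~0.9 MB -> fits in a ~2 MB eFB
1280 x 720 x 3 bytes = ~2.6 MB -> doesn't fit, so true 720p would have meant rendering to external memory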
Doing that to the Genesis would be more like the SuperGrafx, or a direct update to the MD VDP plus a faster CPU, a sound upgrade, and more RAM. (maybe more like the original Mars/Super Mega Drive concept, depending on just what that really was)
The Saturn probably would have been a better candidate for a Wii-like design as such, though. (cut way back, with minimal RAM and CPU resources, to be reasonably competitive but far cheaper -and push heavily for high-level tools as efficient as possible within the limits of the hardware)
It was also overly conservative and limited by using 1 micron chips to minimize risk and ensure development stayed on schedule (albeit that would also allow dramatic consolidation later on). They did go for a very low-cost CPU and slow/cheap 80 ns DRAM for main memory, but the VRAM being used only as a framebuffer and the lack of a GPU or CPU cache really limited things. (presumably the huge 1 MB dual-port VRAM framebuffer block was there for the intended multimedia purposes -and perhaps high-res 24-bit images- but all the system really needed as a game/VCD platform was framebuffers the size of the Saturn's, quite possibly using a simple page-flipping/bus-steering bank arrangement for two 256 kB banks of 32-bit DRAM -ie four 64Kx16-bit DRAM chips -something like the sketch below)
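For illustration, here's a minimal sketch of what that page flipping amounts to, in generic C with made-up addresses and a made-up display-base register (not actual 3DO hardware, just the concept):

#include <stdint.h>

#define BANK0 ((uint16_t *)0x00200000)  /* hypothetical addresses for the */
#define BANK1 ((uint16_t *)0x00240000)  /* two 256 kB framebuffer banks   */
#define DISPLAY_BASE (*(volatile uint32_t *)0x00300000) /* hypothetical video register */

static uint16_t *draw_buf = BANK0;  /* bank the CPU/GPU renders into */
static uint16_t *scan_buf = BANK1;  /* bank the video output scans   */

/* Called during vblank once rendering completes: swap the banks' roles
   and point the video hardware at the freshly finished frame, so the
   displayed bank is never written mid-scan. */
void flip_pages(void)
{
    uint16_t *tmp = draw_buf;
    draw_buf = scan_buf;
    scan_buf = tmp;
    DISPLAY_BASE = (uint32_t)(uintptr_t)scan_buf;
}

The bus-steering angle would just mean the video side only ever owns the bank being scanned, leaving the other bank free on the CPU/GPU side of the bus.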
Then they needed to avoid the bus contention, and short of a GPU cache (which the Saturn's VDP1 also lacked) or even Jaguar-style line buffers (only practical due to the ambitious .5 micron process the Jag used), adding another block of RAM for textures and/or a CPU with a cache (like an ARM600 -adding an FPU and dropping the fixed-point matrix coprocessor would have been nice too, since it was quad-based) would have addressed that. (1 MB main DRAM, 1 MB texture RAM, and 2x256 kB framebuffers might have been practical, and topping that off with a CPU with a cache -let alone an FPU- would have made it a really competitive system in the long term -and cheaper in several areas; if they skimped on some things like the CPU/FPU upgrade, it very well may have been cheaper than the real 3DO due to the removal of the expensive VRAM and the use of a smaller chunk of DRAM buffers in its place -with the added buses mitigating that to some extent; keeping the shared 2 MB main RAM/texture block but adding a faster CPU with a cache would have been another trade-off to maintain lower cost)
Hell, a 25 MHz 68EC020 could have been a better alternative to the ARM60 and still much cheaper than any RISC CPU with a cache. (only a small I-cache, but enough to take a major bite out of the bus contention issues)
On top of that, they weren't pushing it in a low-cost form factor until the FZ-10, years into the system's life.
And yes, the market model killed it too; if they were going to push something like that, they needed something much lower-cost in general. (sort of like the above, except with just the framebuffer banks, main RAM limited to 1 MB, and the same low-cost CPU and chipset -or perhaps swap the CPU for a 68EC020 to reduce contention issues while providing a well-known/common assembly architecture with good C support -or go further and push for a fully consolidated graphics+I/O+audio+coprocessor ASIC on a smaller process like .8 micron or below -more so if they pushed even harder like the Jag and crammed it down to .5 micron and added line buffers for much more efficient bus sharing)
Hell, the Jaguar might have been cheap enough to push into that market model with some tweaks to the design. (namely the funds needed to work out the bugs, and swapping the 68k for an EC020, or maybe an ARM60 like the 3DO's -no cache but good performance and C-friendly- or a lower-cost x86 chip with a cache -the Jag supported a variety of architectures by design, and x86 might have eased PC ports too, especially with the rising PC market)
But that market model was an experiment doomed to fail: razor-and-blades was tried and true, and a tight partnership with Panasonic might have provided the means for it. (maybe not on Sony's level of spending, but pretty capable)
> The Jaguar was a real bitch to develop for, mainly because the tools were immature due to rushing to the market (very much like the 32X).

And the bugs, the bus contention, and the weak/slow CPU with no cache. The lack of caching (though there is heavy buffering for some graphics operations) and the single bus -which the 68k and JERRY access very slowly- heavily exacerbated the contention issues. (hence why it's difficult to code for even in assembly, why it was difficult to make any decent compilers for it, and why the first thing many programmers suggest should have been changed is the 68k -either replaced with a more useful CPU with a cache, or given a separate bus -actually, a separate slow 16-bit bus would have been great to offload both the CPU and JERRY/DSP onto -the DSP is 32-bit and has 32 data lines, but was configured to match the bus width of the CPU -plus connecting only 16 data lines cut PCB cost, and for the intended sound-synthesis role bandwidth wasn't a prime factor -it's using it as a general coprocessor that chokes the bus)
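Some ballpark numbers to illustrate the contention problem (assuming the usual 68000 behavior of one 16-bit bus access per 4 CPU clocks -rough figures only):

64-bit bus at ~26.6 MHz: 8 bytes/cycle = ~213 MB/s peak
13.3 MHz 68000, no cache: 2 bytes per 4 clocks = ~6.6 MB/s of fetches

So the 68k only ever pulls a few MB/s, but each of those accesses can hold the wide bus for the whole cycle, locking the GPU and blitter out of a wildly disproportionate share of that peak bandwidth.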
Performance could have been boosted significantly with faster RAM, but they wanted to keep cost low. (hence why they also only used 375 ns ROM)
There's a nice summary here:
http://www.atariage.com/forums/topic...1#entry1751151
(and that's not even taking the tools into account -or the difficulty of using the GPU as a CPU)
> The N64 was a real bitch, but had excellent support from Nintendo.

Except for RSP microcoding tools. (albeit that was in part due to SGI not developing them, but also from Nintendo not investing in them) Plus restricting use of the more PSX-like "Turbo3D" microcode, though that may have been less used anyway due to the ROM space limits making unfiltered textures highly undesirable. (and there was no middle ground between the slower "Fast3D" and the "Turbo3D" code as far as official SGI microcodes go -ie other than the handful of developers who pushed custom microcode, like Factor 5/LucasArts and Rare)
> The PS2 and PS3 have the same issue - they're easy to program for unless you need to use the vector accelerators, then they are only as easy or hard as the libraries available for the vector accelerators. Sony hasn't been as good at providing libraries as Nintendo, but are much better than Atari or SEGA were.

Hmm, I got the impression that Sony's APIs for the PS2 were cripplingly weak and that 3rd parties had to resort to assembly language programming and/or developing their own APIs/tools for the system.
The Dreamcast's vector accelerator was well supported in libraries, wasn't it?
That, and isn't the PS3 also limited by CPU resources in many cases? (the single PPE being a bottleneck for games pushing "conventional" CPU cores as on PCs and the 360 -let alone games with fundamentally heavy logic/AI-related work that can't be offloaded to vector units/FPUs)
It's a masterpiece in printing money!
Or more seriously, a masterpiece of good marketing: finding a gimmick to push into an increasingly neglected segment of the market and milking it for all it's worth. (and also realizing when the general public will find the graphical capabilities "good enough" and the price point a major mitigating factor)
It's nowhere near as innovative/well designed as the GameCube, but they managed to realize that good hardware doesn't matter: gimmicks and marketing do.
Nothing remotely like what made the Famicom in Japan (very good hardware at the right time, with no real previous competition and weak contemporary competition -like the VCS in the US), but I will give you that Nintendo's marketing and gimmicks with the NES are hugely tied to its success in the US. (plus their follow-up anti-competitive tactics and preexisting Japanese support on a monopolistic level)
> Sega was incapable of anything like that, as they have proven many many times - aside from the early consoles maybe. Even the Dreamcast did not go online until months after it was officially discontinued.

Yes, Sega doesn't seem to have had the business sense and stress on profitability that has kept Nintendo alive. They never learned to milk products for the max profit and not go overboard on the hardware side. (let alone recognizing how to make compromises optimized for different market regions)
What Sega did from the SG-1000 up through the MCD made sense as attempts to stay competitive (though their US marketing sucked up until Katz came onboard -Tonka was a lot better than Sega, but far from great).
SoJ and SoA made a combination of mistakes with the Saturn/Mars/32X/MD/CD/GG/Dreamcast in various areas from 1993 onward (or earlier on SoJ's side, with the inconsistent support of the MCD that lingered through its entire life).
Not sure what you mean about the DC going online in 2001. (though I agree mistakes were made with the DC as well, especially if they were trying to compete on the market in a sustainable manner given their financial situation -the pack-in modem, various extensive rebates, and price drops were not smart moves in that regard -and they cocked up marketing in Europe too; not sure Japan could have been helped with Sony pushing like they did, though maybe they could have milked the Saturn a bit longer)
They didn't seem to push nearly as far as they could have into the PC market either. (tons of great/marketable Sega-published Genesis, CD, 32X, Saturn, DC, and arcade games that should have been released but weren't -they could have provided critical revenue and possibly strengthened the brand name as well -potential PC sales are one reason they probably should have completed Sonic X-treme in spite of missing the Christmas '96 date)
See below for the other consoles, but the PSX wasn't inexpensive compared to anything but the Saturn; it was only priced as it was because of Sony's vertical integration (more so with the ownership of CD-ROM tech patents and already producing them in quantity, plus already owning a license for the MIPS R3000A) and the deep pockets to sell well below cost -that's what gave them the advantage. (they could have made it lower cost by doing things like cutting out some of the RAM and/or consolidating the buses -putting video, CPU, and audio all on a shared 4 MB block, perhaps with a bit more buffering, could have been significantly cheaper, but would have weakened peak performance -memory use would have been more flexible though, just as on other single-bus designs)
Sony won the market in the 5th gen because they had the money/position/inherent advantages and the right management/marketing to pull it off. If you take any other hardware from the generation and trade that chipset for Sony's, things wouldn't have been much different. (Sony would have pushed stronger marketing, stronger software and tools, and modified hardware -due to their funding, position, and ability to absorb cost- for a final result that was rather similar. With the 3DO they'd either be on the market sooner, or hold off release until '94 with anything from tweaks to significant modifications to the architecture -tweaks as in a faster CPU with a cache, more significant as in a GPU with a cache. The Jaguar was almost completely limited by funding and the low-cost emphasis -Sony easily could have provided the funds to eliminate the bugs and push it to a configuration with near-PSX capabilities -with advantages and trade-offs- plus a CD drive, and STILL have lower production costs than the PSX. And the N64 would have been switched to a dual-bus design -probably 2 MB video, 2 MB CPU+audio, maybe using plain PC-66 SDRAM on 32-bit buses rather than the 9-bit RDRAM configuration, or limiting the RDRAM to video- with a CD-ROM drive added, and it still might have been cheaper to manufacture than the PSX as well as more powerful in pretty much every respect -and with better tools it could have really kicked ass)
The PSX's friendly architecture and good software support (the latter a critical move also related to buying Psygnosis and having them build the western SDKs) did not follow through for any later Sony consoles:
they horribly botched the PS2 in hardware design (the Dreamcast, OTOH, took every element that made the PSX hardware great and improved upon it while being truly lower cost as well), and the PS3 wasn't as bad on the development side (still nowhere near as friendly as it should have been), but they totally f*cked up the price point and cost/performance, let alone efficiently integrating backwards compatibility. (granted, the PS2 hardware design would have complicated that, but a strong focus on efficiently embedding PS2 hardware compatibility and making integral use of the old hardware -or enhanced derivatives thereof- should have allowed it without sacrificing price point or performance unacceptably -it may have meant dropping an off-the-shelf GPU unless it provided close enough compatibility/features to emulate the PS2 GPU with a bit of added CPU grunt -cramming the EE in there efficiently would be another story, but it might have been possible to employ it as a useful coprocessor)
There are so many other areas where the PS3 added unnecessary cost and inflated the design (and it obviously wasn't aiming at a conservative price point with high cost/performance) that it ended up an utter mess not unlike the Saturn in terms of cost/performance. (they could have pushed for a single-bus design, maybe DDR2 given the cost advantages -allowing significantly more RAM while still saving money, on top of the savings of a shared bus; the Cell-derived CPU should have had fewer SPEs, maybe just 2, and at least 2 PPEs to better match contemporary standards -still cutting the die size down considerably from the full 1 PPE + 7/8 SPE Cell used, etc, etc)
NEC had a chance to pull the same thing as the PSX with the PCE/TG-16, but they screwed that up. (didn't invest heavily enough, push aggressively enough, take advantage of their deep pockets and corporate clout, or get the management/marketing right -tying back into investment, for building up a comprehensive western marketing/distribution branch among other things) They even screwed up with the SGX in Japan and progressive enhancements to the CD to some degree (albeit those 2 issues could be considered one and the same -basically the SGX should have been an add-on, aimed at coupling with the CD), and obviously the PC-FX.