
Thread: Thoughts on Sega's "clean" hardware design aspects . . . and the Saturn.

  1. #121 - Chilly Willy (ESWAT Veteran)

    Quote Originally Posted by LastBronx View Post
    The Xbox One (which you said was a unified memory system) also has eDRAM (32 MB); so do the Xbox 360 (10 MB), the Wii (3 MB), the GameCube (3 MB, the same 1T-SRAM as the Wii) and the PS2 (4 MB). All of them let both their CPU and their GPU access the main pool of "DRAM". Which of these systems have unified memory, according to you?
    Only the Xbox One. The others all use dedicated memory for video, even if the CPU can access it; you don't put code in that memory. The Xbox One, like the PS4, uses an AMD APU (CPU+GPU in one chip), so the CPU and GPU share the exact same bus - there's no dedicated RAM in a unified architecture. The ESRAM in the Xbox One is NOT dedicated video RAM. Because it's using an APU, MS added 32 MB of ESRAM (embedded STATIC RAM) to give the system a block of super-fast memory it can read and write to accelerate whatever needs the most speed. It's basically a manually controlled level 3 cache. The PS4 does without the ESRAM because it uses faster system memory (GDDR5 vs. the DDR3 in the Xbox One).

    Remember that unified architecture is like politics: nothing is 100% democratic vs. 100% socialist - it's a spectrum, with some consoles closer to one end than the other. The ones I called unified are close to the unified end, and the others are closer to non-unified. For example, everyone "knows" the N64 uses unified memory... except, of course, for the IMEM and DMEM inside the RSP. If the N64 were 100% unified, it wouldn't have either of those, only the RDRAM. The Jaguar is close as well, but has 4KB for the GPU and 8KB for the DSP. Again, very close to unified, but not quite. Most folks consider it unified if the code and video buffers coexist in a single block of system RAM. Most non-unified architectures have allowed putting textures into system RAM, but they still need the video buffers in video RAM. That pushes them away from 100% non-unified towards unified, but only a little.
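
    If it helps, the "manually controlled cache" idea looks roughly like this in C. This is only a sketch of the pattern, not the real Xbox One API: the scratchpad base address is a hypothetical stand-in for the 32MB ESRAM window, and plain memcpy stands in for the hardware copy/DMA engines.

    Code:
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical fast scratchpad window standing in for the ESRAM region. */
    #define SCRATCH_BASE ((uint8_t *)0x80000000u)
    #define SCRATCH_SIZE (32u * 1024u * 1024u)

    /* Stage a hot buffer (e.g. a render target) into fast memory, run the
     * bandwidth-hungry pass there, then copy the result back out to DRAM. */
    void process_in_scratchpad(uint8_t *dram_buf, size_t len,
                               void (*work)(uint8_t *, size_t))
    {
        if (len > SCRATCH_SIZE)
            len = SCRATCH_SIZE;              /* scratchpad is small; tile larger jobs */
        memcpy(SCRATCH_BASE, dram_buf, len); /* copy in (real hardware would DMA) */
        work(SCRATCH_BASE, len);             /* do the heavy work in fast RAM */
        memcpy(dram_buf, SCRATCH_BASE, len); /* copy the result back to main memory */
    }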

  2. #122 - Banned by Administrators

    Thank you zyrobs and Chilly Willy.

  3. #123 - Banned by Administrators

    Please, Chilly Willy, in your opinion: was 28.64 MHz the SH2's max speed in 1993/1994 because it was the best Hitachi could reach with their 0.8 micron process, or because they were content to match Sega's specs and didn't need to go further to address their other markets?

  4. #124 - Chilly Willy (ESWAT Veteran)

    Quote Originally Posted by LastBronx View Post
    Please, Chilly Willy, in your opinion: was 28.64 MHz the SH2's max speed in 1993/1994 because it was the best Hitachi could reach with their 0.8 micron process, or because they were content to match Sega's specs and didn't need to go further to address their other markets?
    Given the "odd" speed, I'd guess it was process limited. In fact, I'd hazard a guess that Sega added the speed switch to the cpu clock (to switch between 26 and 28 MHz) after seeing the max speed of the SH2 to take advantage of it.

  5. #125 - Banned by Administrators

    Thank you Chilly Willy.

  6. #126 - Barone (Hero of Algol)

    LastBronx, please, could you share with us links to the docs/sources you have used to build your arguments on the subject?

  7. #127 - Chilly Willy (ESWAT Veteran)

    Quote Originally Posted by Barone View Post
    LastBronx, please, could you share with us links to the docs/sources you have used to build your arguments on the subject?
    The only things he didn't take from common user manuals on the net were the bit about Atari making the JRISC for cheap (not sure anyone knows what they really spent on it), and that Sega could have made their own RISC even better given they had more money.

    Personally, I think that when SGI approached Sega with what became the N64, if Sega had accepted with the caveat that they make it CD based, that would have put them in a much better position. The N64 was far and away the most powerful console of its generation, but the lack of a CD drive and too much control on Nintendo's part over what counted as acceptable games kept it from killing the competition.

  8. #128 - Chilly Willy (ESWAT Veteran)

    In case you didn't notice, I was defending your post. You don't have to tell me about reading old manuals - I've got them all. And designing your own processor is not something everyone can do. Atari bought a preexisting design (Flare did all the work). Sega never made their own processors, and they weren't about to start then. The SuperH was a great choice for a general purpose processor; it had a great balance between power and price. The SuperH was not the problem with the Saturn at all. If they hadn't gone with the SH2, I imagine they might have gone with a MIPS R3000 - just a single one, since they couldn't afford to put two on a console like they could with the SH2. The funny thing is, if they had, they would have gotten a better reception from devs, since single-CPU designs were all anyone worked on at the time. Now it's flipped - everyone is gaga over multi-core systems. It's all about threading these days.
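
    A rough picture of what that threading looks like, in plain C: nothing Saturn-specific here, just POSIX threads splitting one loop in half the way a dual-CPU game might split work between a master and a slave SH2. The scale_slice transform is a made-up stand-in for real per-vertex work.

    Code:
    #include <pthread.h>
    #include <stddef.h>

    struct slice { float *v; size_t start, end; };

    static void *scale_slice(void *arg)
    {
        struct slice *s = arg;
        for (size_t i = s->start; i < s->end; i++)
            s->v[i] *= 2.0f;          /* stand-in for a per-vertex transform */
        return NULL;
    }

    void scale_all(float *v, size_t n)
    {
        struct slice halves[2] = {
            { v, 0, n / 2 },          /* "master" keeps the first half */
            { v, n / 2, n },          /* "slave" gets the second half */
        };
        pthread_t worker;
        pthread_create(&worker, NULL, scale_slice, &halves[1]);
        scale_slice(&halves[0]);      /* do our own share on this CPU */
        pthread_join(&worker, NULL);  /* wait for the other half to finish */
    }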

  9. #129 - Flygon (Master of Shinobi)

    What, the Mega Drive having two CPUs by default is single CPU design now?

  10. #130 - Kamahl (Hero of Algol)

    Quote Originally Posted by Flygon View Post
    What, the Mega Drive having two CPUs by default is single CPU design now?
    One of them is used exclusively for audio, which is much simpler.

  11. #131 - Chilly Willy (ESWAT Veteran)

    Quote Originally Posted by Flygon View Post
    What, the Mega Drive having two CPUs by default is single CPU design now?
    While the Z80 could be used for game logic in an MD game, I can't think of any games that actually did that. Many don't even use it as a sound processor. I know most of my homebrew doesn't use it (other than my experiments with MD Z80 compressed audio).
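
    For anyone curious, the 68000 side of handing the Z80 a command boils down to something like this. The bus request register and the Z80 RAM window are the standard MD addresses, but the mailbox offset and the command byte are made up for the example - whatever your Z80 driver is written to poll for.

    Code:
    #include <stdint.h>

    #define Z80_BUSREQ  (*(volatile uint16_t *)0xA11100)  /* Z80 bus request */
    #define Z80_RAM     ((volatile uint8_t *)0xA00000)    /* 8KB Z80 RAM window */
    #define CMD_MAILBOX 0x1FF0   /* hypothetical mailbox the Z80 driver polls */

    static void z80_send_command(uint8_t cmd)
    {
        Z80_BUSREQ = 0x0100;            /* ask the Z80 to give up its bus */
        while (Z80_BUSREQ & 0x0100)     /* bit 8 clears once the bus is granted */
            ;
        Z80_RAM[CMD_MAILBOX] = cmd;     /* drop the command where the driver looks */
        Z80_BUSREQ = 0x0000;            /* hand the bus back so the Z80 resumes */
    }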

  12. #132 - SonicTheHedgehog (Nameless One)

    After reading through this thread, there are a few things that weren't discussed.

    One is the feasibility of going with a single Hitachi SH3 CPU and a release date about a year later: around the end of 1995 in Japan and the end of 1996 in the US and Europe at the latest. Would the extra 12 months have made the SH3 affordable to use instead of twin SH2s? If so, could and should they have included a floating point unit with the CPU, like the Dreamcast's SH4 had, whilst still keeping the machine itself affordable (a 300 dollar/pound launch price)?

    Another thing I was wondering: if they were going for a release around the end of 1995 and could use the SH3, would they have been able to go with a single, beefed-up, 64-bit-wide VDP1, or would that have made the machine too expensive?

    Would the extra 12 months have allowed Sega to make the DSP faster, assuming they went with an SH3 CPU design?

    Assuming a late-1995 release with a single CPU, a single 32/64-bit-wide VDP, a faster DSP, plus a more simplified design in other areas, do you think Sega could have gotten away with the machine still being quad-based rather than triangle-based?

  13. #133 - Master of Shinobi

    I don't think an extra year would have been enough to significantly beef up the VDP1. Unless they could find a way to push 2-3x performance out of it (node change, faster ram, whatever), they would've needed to build a completely different chip from scratch. The design was just not meant for the task it was forced to do.

  14. #134 - SonicTheHedgehog (Nameless One)

    I mean, assuming that when the specs of the PS came out (September '93-ish) they had decided to push the release back to November '95, which would have given them roughly two years from the September '93 reveal to remake their machine?

  15. #135 - Hedgehog-in-Training

    Quote Originally Posted by kool kitty89 View Post
    Or yester-year's SLI. (ie 1998)

    But as far as "waste" goes, it's not about the fact that you've got dual CPUs or dual VDPs, but what they can actually pull off relative to cost . . . and TBH, there's a REASON dual CPU set-ups weren't really seen in consoles or normal consumer level PCs, and why multi-GPU solutions aren't seen in modern consoles or cost-optimized PC builds. (aside from upgrades where you add the 2nd card later on)

    The PC Engine is a great example to look at: it's got a dual-chip VDP setup that allows for features not really practical on one chip in 1987, but it's still effectively used as a single VDP too. The MD VDP makes sacrifices (in CRAM and VDAC resolution) to allow everything to fit on one chip and keep costs down (which they also had to do to help make sure SMS compatibility didn't inflate manufacturing costs too much).
    I've always wondered why Sega chose to support a resolution of only 9 bits per color. The colors are coded on 12 bits (probably inherited from the System 16), yet the low bit of each component is ignored. Besides that, the DAC needs to accept a 12-bit input anyway, because shadow/highlight produces a different set of colors, right?

    So why that choice? And why such a small CRAM (only 64 colors total, which was a constraint compared to the PC Engine released earlier)?
    How did these two choices help keep costs down? Or what was the technical limit that made the chip so much easier to fit on one die? I'm asking because I can't understand how 128 bytes of CRAM can be so significant compared to the 64K of VRAM overall, which needs to be just as fast as the CRAM, doesn't it?
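
    To be concrete about what I mean by 9 bits, here's my understanding of how a CRAM entry is packed, as a rough C sketch (please correct me if the layout below is wrong): each entry is one 16-bit word laid out as ----BBB-GGG-RRR-, so only 9 of the 16 bits carry color, and 64 entries of 2 bytes each is where the 128 bytes of CRAM comes from.

    Code:
    #include <stdint.h>

    /* Pack 4-bit components (0-15, System 16 style) into an MD CRAM word.
     * The low bit of each component is simply dropped. */
    static uint16_t md_pack_color(uint8_t r4, uint8_t g4, uint8_t b4)
    {
        return (uint16_t)(((b4 & 0x0E) << 8) |  /* blue  -> bits 9-11 */
                          ((g4 & 0x0E) << 4) |  /* green -> bits 5-7  */
                           (r4 & 0x0E));        /* red   -> bits 1-3  */
    }

    enum { MD_CRAM_ENTRIES = 64, MD_CRAM_BYTES = MD_CRAM_ENTRIES * 2 };  /* 128 bytes */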

    Thanks to anybody who can enlighten me
