PS3 Gets Physical with Nvidia
This afternoon Nvidia said that it has signed a "tools and middleware" license agreement with SCEI, bringing PhysX back to the PlayStation 3 console.
If there was one thing Sony did right with the PlayStation 3, it was to develop the console with the PC in mind. While many console owners and PC enthusiasts will undoubtedly flame that very comment, the console is certainly more PC-like than its counterparts, the Nintendo Wii and Microsoft's Xbox 360. With Sony's Cell multiprocessor and Nvidia's G70-based GPU (the RSX) under the hood, plus a removable hard drive and support for mouse and keyboard input, the console has an overall PC quality. Heck, gamers can even install Linux on it.
With that in mind, Nvidia announced today that it has signed a deal with Sony Computer Entertainment Inc that gives PlayStation 3 developers access to Nvidia's PhysX software development kit (SDK). According to the company, the kit is now available as a free download on the SCEI Developer Network and consists of a full-featured API and "robust" physics engine. Now developers, level designers, and artists have complete creative control over character and object physical interactions, as the SDK allows them to author and preview the physics effects in real time.
“NVIDIA is proud to support PLAYSTATION 3 as an approved middleware provider,” said Tony Tamasi, senior vice president of content and technology at NVIDIA. “Games developed for the PLAYSTATION 3 using PhysX technology offer a more realistic and lifelike interaction between the games characters and other objects within the game. We look forward to the new games that will redefine reality for a new generation of gamers.”
Originally developed by Ageia as a Physics Processing Unit paired with the NovodeX SDK, the physics middleware became part of Nvidia's overall product offering when the company acquired Ageia back in February 2008. Games supporting hardware-accelerated PhysX use either a PhysX PPU or a CUDA-enabled GeForce GPU, shifting physics processing away from the CPU and allowing for faster frame rates and more realistic interaction with environments.
With that said, there was one thing about today's announcement that left us a little confused. According to Nvidia, the company released drivers that allowed the GeForce 8 series and higher to implement PhysX processing back in August 2008. However, because the PlayStation 3's RSX GPU is based on the G70 architecture (GeForce 7800), GPU support for PhysX isn't even possible on the console. So, Nvidia, what gives? How will PhysX work on the PlayStation 3?
The answer dates back to 2006, when Ageia originally released version 2.4 of the PhysX SDK for the PlayStation 3, specifically optimized for the Cell processor. The company said it offloaded several components of the PhysX pipeline from the PlayStation's PPU (Power Processor Unit) to its SPUs (Synergistic Processing Units), generating a 50 percent reduction in maximum PPU load. That is probably what's going on with the new Nvidia PhysX SDK release: the middleware is utilizing the Cell processor, not the RSX GPU.
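The actual SDK dispatches work through Sony's SPU job system, which isn't publicly documented here; as a rough, hypothetical illustration of the same idea in portable C++ (with `std::thread` standing in for SPU workers and the main thread playing the PPU), offloading a batch of physics integrations might look like this:

```cpp
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical sketch only: split a batch of rigid-body position updates
// across worker threads, analogous to how PhysX on PS3 offloads pipeline
// stages from the PPU to the SPUs. (The real SDK uses SPU jobs and DMA,
// not std::thread.)
void integrate_chunk(std::vector<float>& pos, const std::vector<float>& vel,
                     std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        pos[i] += vel[i] * dt;  // simple Euler step per body
}

void integrate_parallel(std::vector<float>& pos, const std::vector<float>& vel,
                        float dt, unsigned workers) {
    std::vector<std::thread> pool;
    std::size_t chunk = pos.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? pos.size() : begin + chunk;
        pool.emplace_back(integrate_chunk, std::ref(pos), std::cref(vel),
                          begin, end, dt);
    }
    for (auto& t : pool) t.join();  // the "PPU" waits for its workers
}
```

The payoff is the same as the one Ageia described: the coordinating core spends its time orchestrating rather than integrating, cutting its peak load.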
"PhysX on PS3 uses the CPU in PS3 and SPU which are the cores of the cell. We do not use the NVIDIA GPU in the PS3 for PhysX acceleration," said a spokesman from Nvidia in an email to Tom's. "PhysX is also supported on many platforms which do not use GeForce GPUs for acceleration. For example, PhysX is available on the iphone--running on the arm processor core. This versatility is what is driving PhysX adoption across multiple platforms, including consoles and PCs."
So with all this techno-babble, what does this mean for PlayStation 3 gamers? It means virtual game worlds come to life in a very realistic way: trees bend in the wind, water flows with body and force, spent shells roll across the floor as players move over them in a frantic run. For developers taking advantage of the PhysX SDK, it's all created in the name of realism, to pull gamers into a suspended reality where anything is possible, only limited by the imagination of the developers.
BackBreaker might finally be getting released. Real Time Physics, FTW!
Physics, on the other hand, is right up the Cell's alley... the PS3 SHOULD finally be able to leverage some differential advantage from the Cell investment if this takes off (and if they can keep the Cell's pipeline full... which is tough since THERE'S NOT ENOUGH MAIN MEMORY!).
That's by far not a Windows specific issue.
Given that most console games are written in very low-level languages to get the most out of the hardware, the memory you would need to run a game on the PS3 is far less than the amount you would need in Windows.
For example, in Windows (or most any computer OS), if you perform an arithmetic function, that naturally takes up memory. And I know for a fact that even when a variable is no longer needed, few bother to actually deallocate it from RAM (part of the reason being how few languages in use support manual memory deallocation). For consoles (or any lightweight embedded software), you make sure to clean up after yourself. If it's not being used, it's cleaned up to free up resources.
In short, what you need 2GB for in Windows could easily be accomplished with half that amount, if the OS were optimized and programmers made sure to clean up after themselves when done. I'd argue that Far Cry 2 could be modified to run on 512MB of RAM without any major performance loss, if anyone ever wanted to put that much effort into the work.
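The "clean up after yourself" discipline is easiest to show in a language with manual memory management; here's a minimal C++ sketch (illustrative only, not from any actual console codebase) where a scratch buffer is freed the moment its result has been extracted, rather than lingering for the life of the program:

```cpp
#include <cstddef>
#include <cstdlib>

// Illustrative only: allocate a scratch buffer for one computation and
// release it as soon as the result is extracted -- console-style memory
// hygiene, as opposed to leaving the allocation live indefinitely.
double sum_of_squares(const double* data, std::size_t n) {
    double* scratch = static_cast<double*>(std::malloc(n * sizeof(double)));
    if (!scratch) return 0.0;  // allocation failed; nothing to clean up
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        scratch[i] = data[i] * data[i];  // staged intermediate values
        total += scratch[i];
    }
    std::free(scratch);  // deallocate immediately; no dead memory remains
    return total;
}
```

Every allocation is matched by a free on the same code path, so the working set stays as small as the computation actually requires.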
Back on topic, I'm fully expecting an announcement by the Backbreaker team by month's end. That will be the game that determines what goes on with PhysX, hence the partnership with NVIDIA.
The point is that with 256 meg of CPU memory, the PS3 is ill suited for general processing tasks, which is a shame because in every other respect it is reasonably capable. With even 1GB of main memory the PS3 would have made an excellent Linux box (well... it would have if Sony hadn't chosen to block access to most of the GPU). Furthermore, the Cell processor's ability to pipeline huge quantities of data is going to be hampered by that lack of memory... the system is likely to have problems keeping it fed.
The result is that I will be surprised if any game producer is ever able to fully leverage the Cell's potential... which is a shame.
Umm... no.
Modern development tools have very detailed memory management approaches. Develop in .NET and you get variable scoping at a very granular level (within virtually ANY element you can have a memory scope); when a variable goes out of scope it is collected and its memory is freed. If your program is properly structured, it won't have any dead variables lying around taking up space. Decisions can be made to scope variables globally, but those decisions tend to be made for performance reasons, so those variables should be increasing code efficiency, not hurting it.
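The scoping behavior described above has a close analogue in C++'s block scoping (used here only as an illustration, since the post is about .NET): an object's storage is reclaimed the moment its enclosing block ends, so a well-structured program holds no dead allocations.

```cpp
#include <cstddef>
#include <vector>

// Sketch of scope-limited allocation: the temporary vector exists only
// inside the inner block, and its heap storage is released the instant
// that block ends -- no "dead variables lying around taking up space."
std::size_t use_and_release() {
    std::size_t observed = 0;
    {
        std::vector<int> temp(1024, 7);  // scoped to this inner block only
        observed = temp.size();          // use it while it is live
    }                                    // destructor frees the storage here
    return observed;                     // only a size_t survives the scope
}
```

In C++ the reclamation is immediate and deterministic, whereas a .NET garbage collector frees the memory at some point after the variable becomes unreachable; the structural point about scoping is the same either way.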
Furthermore... the problem the PS3 has with lack of memory is less to do with optimization and more to do with the nature of the beast. Games tend to deal with processing large blocks of data, large blocks of data take up a LOT of memory.
The throughput of a Cell processor (or worse yet... a group of them) working on a large array operation is pretty phenomenal... and all that data has to come FROM somewhere and go TO somewhere. That somewhere is memory... and the PS3 ain't got enough to allow game makers to allocate large buffers to keep the Cells fed. What that means is a limit on the volume of data, or lots of paging to disk. Limit the volume of data and you limit your options: simpler textures, less complex physics. Page to disk and the paging operation becomes your bottleneck, leaving the processing system waiting for work.
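The buffering tradeoff described above can be sketched in a few lines of C++ (a toy illustration, not Cell code): instead of staging an entire dataset, the work is streamed through one small fixed buffer, bounding the working set at the cost of repeated refills — and on real hardware each refill is a DMA transfer or a page-in that can leave the processor idle.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy sketch of processing a large array through a small fixed staging
// buffer: memory use is bounded by buffer_len, but the data must be
// refilled in many passes. On a memory-starved system each refill is
// where the stall happens -- the compute units sit waiting for work.
double sum_through_buffer(const std::vector<double>& data,
                          std::size_t buffer_len) {
    std::vector<double> buffer(buffer_len);  // fixed-size staging buffer
    double total = 0.0;
    for (std::size_t off = 0; off < data.size(); off += buffer_len) {
        std::size_t n = std::min(buffer_len, data.size() - off);
        std::copy(data.begin() + off, data.begin() + off + n,
                  buffer.begin());           // the "refill" step
        for (std::size_t i = 0; i < n; ++i)  // compute on staged data only
            total += buffer[i];
    }
    return total;
}
```

Shrink `buffer_len` and the refill count grows; grow it and you need the very memory the post says the PS3 lacks.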
Additionally... there have been plenty of studies around the idea that people can 'hand optimize' to improve the efficiency of code produced by modern compilers. The result: in VERY SELECT cases, hand-optimized code can perform better than compiler-optimized code... but in general it does not. Modern processors are simply too complex for most programmers to effectively optimize. Programmers COULD have optimized Far Cry 2 to run in 512 meg of memory, but not without making tradeoffs in performance, and why would they want to? When memory costs a few bucks a gig, programmers are (as they should be) working to most effectively leverage the amount that the target system has (which for a new gaming rig these days will be 4-8 gig)... there is no free lunch in programming, and forcing the entire Far Cry 2 package to run in 512 meg would be nonsensical. Is Far Cry 2 the most efficient code out there? Doubtful... but unless the technical direction on the project was totally incompetent, there were design decisions behind every choice that increased the memory footprint.
It's popular for the uninformed to complain about the memory requirements of modern operating systems (they tend to pick on Microsoft because it's easy I think). The fact of the matter is that modern operating systems are designed to run on modern hardware... Vista 64 runs like gangbusters on my new quad box with 8 gig of ram, I don't have it installed on my old Athlon 64 machine.
Why would Microsoft want to design their OS to run on 512meg? That would just mean that their system would be sub-optimized on my 8 gig machine. Design the OS for a window of system capabilities... optimize it there. ROUGHLY, Vista is happiest on 4-8gig, XP was happy with 1-4gig, 2000 was happy with 256meg to 1gig, NT ran quite nicely on 128meg to 256meg. Every time a new OS is released, a few people complain about how it takes up 'SO MUCH' memory.
You want an OS that runs well in 256meg of memory, install NT or 2000... why would you EVER expect Microsoft to work to make their new flagship OS run in that little memory when it would clearly compromise the efficiency and power of the OS when running on new machines?
Shouldn't you also heed your own advice, and not try to run an OS in a memory footprint that it won't fit in?