id Software has been around for two decades, and 2011 is shaping up to be a banner year for the Dallas-based game developer. The release of Rage is but a few months away, as is the debut of id Tech 5, the latest game engine to be hatched by John Carmack. Both are the first truly significant products (mobile offerings aside) to come out of id since Doom 3 in mid-2004. Chances are you know who Carmack is and what he's accomplished during his tenure with id Software, a company he co-founded in 1991. The Wolfenstein, Doom and Quake franchises are each a cornerstone of PC gaming and of the first-person shooter genre, and the technology behind those games has always been ground-breaking at the very least, if not totally revolutionary.
At this point it's become painfully obvious that I am indeed an id Software fanboy, and by proxy a big fan of John Carmack's work, so interviewing the Master of Doom at Quakecon 2011 a few weeks ago was a privilege, to say the least. Since virtually everything you need to know about Rage is already floating around online, our discussion focused more on computer hardware, consoles and the mobile space. Enjoy!
John Carmack: It’s almost impossible to go wrong with high-end video cards now, since the hardware is just so incredibly powerful. The exciting, or interesting, segment really is at the low end. Intel integrated graphics parts have been the butt of a joke for many years – they’re just not something that you could consider using for games. But right now, with Intel’s current generation of integrated graphics parts, they have very good feature parity and they’re fully programmable. They’re not bandwidth monsters or anything, but the fact that the performance and feature set is there means you can scale resolution in a lot of ways. Hopefully we’ll wind up in a world where, on a high-end graphics card, you’re running a 2.5K resolution screen with 16x MSAA, but you should still be able to crank all that down to the point that you’re running some resolution sub-sample of 720p on integrated graphics. That will be a very good target for game developers if you can scale bandwidth. Now if you’re an incredibly geometry-heavy game, that makes it much more difficult, because it’s harder to scale geometry the way you can scale fragment work.
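To put a number on that scaling argument: fragment work grows with pixel count and MSAA sample count, while vertex work does not. A minimal sketch of the arithmetic (our own illustration, not id's code; the function name and numbers are ours):

```c
#include <stdint.h>

/* Hypothetical helper: per-frame fragment-sample count for a given
   resolution and MSAA level. Fragment (shading/resolve) cost scales
   with this product; geometry cost does not. */
static uint64_t fragment_samples(uint32_t width, uint32_t height,
                                 uint32_t msaa_samples)
{
    return (uint64_t)width * height * msaa_samples;
}
```

Going from a 720p, no-MSAA target (1280 x 720 x 1 = 921,600 samples) to 2560 x 1440 with 16x MSAA (58,982,400 samples) is a 64x swing in fragment workload, which is why resolution and sample count are the natural knobs for scaling the same game from integrated parts up to high-end cards.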
DC: You said during your Quakecon keynote address that Rage is running on the latest generation of Intel IGP parts.
JC: Rage is running on Intel's IGP parts, but it’s not yet at what we would consider a valid target rate. We’re at 30-ish frames per second, and it’s supposed to be a 60 Hz game. Now there’s no reason why it shouldn’t be something that can run at 60 frames per second at a low enough resolution, but Intel hasn’t invested the effort in tuning the driver paths the way Nvidia and AMD have.
DC: Is driver support one of Intel’s biggest struggling points?
JC: They’re working on it, but they don’t have as many people or as much in the way of resources. Then again, they also don’t have as broad a platform to support. I’m very excited by the fact that they’re open to exposing more of the low-level details than AMD and Nvidia. It’s also great just seeing that they’ve released their chip specs for open source drivers. As a developer that’s a wonderful resource, to be able to go in and say, “Well, I don’t know how the driver is screwing this up, but this is what the actual hardware is supposed to be doing.” And then you can say, “Well, it should be doing this,” and you talk to the driver people and ask, “What is in our way from having this happen?” They have a lot of work to do there, but if they give us the ability to just map all of our textures into user address space, and not go through texture update routines when we properly fence everything, that will be a significant win. And there’s the possibility that a crazy expensive graphics card could be more bound up in driver overhead for certain things – like all these texture transcodes you might be doing at incredible resolutions – and actually have a harder time holding a 60 Hz or higher frame rate than an integrated graphics part with direct access.
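The "map textures into user address space and fence everything properly" idea can be sketched schematically. This is our own toy construction, not id's code or any real driver API: the mapped storage is plain memory, and the fence is a flag the "GPU" side sets when it has finished reading a region, so the CPU can write new texel data directly instead of copying through a driver-side update routine.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define REGIONS      4     /* ring of upload regions (arbitrary) */
#define REGION_BYTES 4096

typedef struct {
    uint8_t storage[REGIONS][REGION_BYTES]; /* "mapped" texture memory */
    int     fence_passed[REGIONS];          /* 1 = GPU done with region */
} mapped_texture_t;

/* CPU side: only write into a region once its fence has signalled,
   then mark it in flight again. Returns 0 if the region is still busy
   (a real engine would stall or pick another region). */
static int upload_region(mapped_texture_t *t, int region,
                         const uint8_t *src, size_t len)
{
    if (!t->fence_passed[region])
        return 0;
    memcpy(t->storage[region], src, len); /* direct write, no driver copy */
    t->fence_passed[region] = 0;
    return 1;
}

/* "GPU" side: consuming a region signals its fence. */
static void gpu_consume_region(mapped_texture_t *t, int region)
{
    t->fence_passed[region] = 1;
}
```

In real OpenGL this shape later arrived as persistently mapped buffers plus sync objects (glFenceSync/glClientWaitSync); at the time of this interview it was exactly the kind of direct access Carmack was asking vendors for.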
DC: What do you think about the newest CPUs? What are your thoughts on having the graphics baked into the same piece of silicon as the CPU, like AMD’s new Fusion processors?
JC: So the current Fusion chips are a GPU packaged with a CPU on the same die. Yes, it’s two things fused into one, but the vision is having them integrated almost at a functional level, where they’re at least sharing cache bandwidth, and possibly even sharing some functional units.
DC: Does that design give you any advantages? Or are there any drawbacks?
JC: There are a few aspects to that. At its base level, if you’re still programming it through DirectX or OpenGL, it’s going to be a very cost-effective target for the given amount of performance. Having that sharing of bandwidth is always a good thing, rather than having the dedicated buses. But where it becomes really interesting is if they are able to nail down the actual hardware interfaces, and you can treat it like an instruction set, just like we can write SSE code. If we can start writing inline Fusion code, it opens up a lot of doors for doing things in a much more tightly controlled way. It’s tough when you’re looking at our kind of tripartite world here, where we’ve got Intel, AMD, and Nvidia. We are willing to do specific extension-level work for each vendor. We’re trying to get this direct access from Intel, which would allow us to poke at the textures; we have multithreaded sub-image updates from ATI; and we've worked on and with Nvidia basically forever. If the next-gen Fusion stuff offers us some really interesting possibilities, how much are we willing to specialize towards that, given it’s going to be this fraction of a fraction of the market? Unless something like that winds up being targeted for console use.
JC: When it comes to decisions on provisioning CPU resources, there’s not much that we would do as a developer to target directly for that. I’ve mentioned before how CPUs were interesting feature-wise only in the really early days when we went to things like the first 32-bit address spaces or the first FPUs, where it really made a difference. Since then it’s kind of fallen off, as Intel’s gone between P4 and Core architectures, not much has changed for us because the CPU guys have always done a great job. It’s always going to be at least as good as what we had before and it will go faster than what it's replacing. We do have little bits of SSE3-optimized code and we look at AVX and all that, but the differences in the total performance of different CPUs aren’t that big.
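For readers unfamiliar with the "little bits of SSE-optimized code" Carmack mentions, here is a tiny example of the genre: a four-wide fused multiply-add done with SSE intrinsics. This is purely illustrative (our code, not anything from Rage), using only baseline SSE instructions that exist on every x86 CPU discussed above:

```c
#include <xmmintrin.h>  /* baseline SSE intrinsics */

/* out[i] = a[i] * b[i] + c[i] for i in 0..3, computed in one SIMD pass
   instead of four scalar multiply-adds. Unaligned loads/stores keep the
   example simple; hot engine code would prefer aligned data. */
static void madd4(const float *a, const float *b, const float *c, float *out)
{
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_loadu_ps(c);
    _mm_storeu_ps(out, _mm_add_ps(_mm_mul_ps(va, vb), vc));
}
```

The same pattern scales up to AVX's eight-wide registers with a near-mechanical rewrite, which is part of why, as he says, CPU generations mostly just get faster rather than changing how engines are written.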