1996-2007: All plans and interviews from John Carmack
Every John Carmack plan from 1996 to 2007: here.
Every John Carmack interview from 1996 to 2007: here.
2012 Q&A with John Carmack
A few questions asked while reading the source code.
Fabien Sanglard - What motivated the move to C++ for idTech4?
John Carmack - There was a sense of inevitability to it at that point, but only about half the programmers really had C++ background in the beginning. I had C and Objective-C background, and I sort of "slid into C++" by just looking at the code that the C++ guys were writing. In hindsight, I wish I had budgeted the time to thoroughly research and explore the language before just starting to use it.
You may still be able to tell that the renderer code was largely developed in C, then sort of skinned into C++.
Today, I do firmly believe that C++ is the right language for large, multi-developer projects with critical performance requirements, and Tech 5 is a lot better off for the Doom 3 experience.
Fabien Sanglard - Until idTech4, only .map files were text-based, but now everything is: binary seems to have been abandoned. This slows down loading significantly, since everything has to go through idLexer... and I am not sure what you got in return. Was it to make things easier for the mod community?
John Carmack - In hindsight, this was a mistake. There are benefits during development for text based formats, but it isn't worth the load time costs. It might have been justified for the animation system, which went through a significant development process during D3 and had to interact with an exporter from Maya, but it certainly wasn't for general static models.
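As a rough illustration of the load-time cost he concedes here: with a text format, every number goes through the lexer and a string-to-float conversion, while a binary format is a single bulk read. A minimal C++ sketch with a made-up vertex layout, not id's actual loading code:

```cpp
#include <cstddef>
#include <cstdio>

struct Vertex { float x, y, z; };

// Text path: every number is tokenized and converted from a string,
// which is the per-token work idLexer has to do.
bool readVertexText(FILE* f, Vertex& v) {
    return std::fscanf(f, "%f %f %f", &v.x, &v.y, &v.z) == 3;
}

// Binary path: one bulk read for the whole array, no per-token work.
bool readVerticesBinary(FILE* f, Vertex* v, std::size_t count) {
    return std::fread(v, sizeof(Vertex), count, f) == count;
}
```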
Fabien Sanglard - The rendering system is now broken down into a frontend/backend: it reminds me of the design of a compiler, which usually has a frontend->IR->backend pipeline. Was this inspired by the design of LCC, which was used for Quake3 bytecode generation? I wonder what the advantages are over a monolithic renderer like Doom, idTech1 and idTech2.
John Carmack - This was explicitly to support dual processor systems. It worked well on my dev system, but it never seemed stable enough in broad use, so we backed off from it. Interestingly, we only just found out last year why it was problematic (the same thing applied to Rage's r_useSMP option, which we had to disable on the PC) – on Windows, OpenGL can only safely draw to a window that was created by the same thread. We created the window on the launch thread, but then did all the rendering on a separate render thread. It would be nice if doing this just failed with a clear error, but instead it works on some systems and randomly fails on others for no apparent reason.
The Doom 4 codebase now jumps through hoops to create the game window from the render thread and pump messages on it, but the better solution, which I have implemented in another project under development, is to leave the rendering on the launch thread, and run the game logic in the spawned thread.
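A minimal sketch of the arrangement he describes as the better solution: the launch thread keeps the window and the GL context and issues all draw calls, while the game logic runs in the spawned thread. All names here, and the command queue the comments allude to, are assumptions for illustration; this is not code from the id codebase.

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> quit{false};

// Game logic runs on the spawned thread and hands render commands to the
// launch thread through a queue (omitted here).
void gameLogicLoop() {
    while (!quit) {
        // update entities, physics, AI; produce render commands
    }
}

int main() {
    // createWindowAndGLContext();  // hypothetical: window + GL context
    //                              // owned by the launch thread
    std::thread logic(gameLogicLoop);
    for (int frame = 0; frame < 1000; ++frame) {
        // drain the command queue and issue OpenGL calls here, on the
        // same thread that created the window -- the constraint that
        // made r_useSMP unreliable when drawing happened on another thread
    }
    quit = true;
    logic.join();
    return 0;
}
```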
Fabien Sanglard - The Quake3 VM converted the bytecode to x86 instructions at load time, combining the security of Quake1 with the speed of Quake2. In idTech4 the bytecode is always interpreted: why not have an "on load" bytecode-to-x86 compiler? Did you decide the speed gain was not worth the development time?
John Carmack - Q1 and Q3 implemented all of the “game code” in the (potentially) interpreted language. D3 was only supposed to use the interpreted code for “scripting” events. It still got overused, and we did have performance issues related to it. Our takeaway was to severely deprecate its use for Rage – there is still a scripting engine there, but it is really only used for commanding things to happen in the levels, not anything resembling enemy or weapon behavior. We still believe this is the correct call – real programming should be done in real programming languages, with proper debugging and tool support.
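As an aside, the cost of interpretation comes from the dispatch loop: every bytecode operation pays for an opcode fetch and a branch that compiled x86 would not. A toy stack-machine sketch, purely illustrative (idTech4's actual interpreter is far more elaborate):

```cpp
#include <cstdint>
#include <vector>

enum Op : uint8_t { OP_PUSH, OP_ADD, OP_HALT };

int run(const std::vector<uint8_t>& code) {
    std::vector<int> stack;
    size_t pc = 0;
    while (true) {
        switch (code[pc++]) {        // per-instruction fetch + dispatch
        case OP_PUSH:                // operand byte follows the opcode
            stack.push_back(code[pc++]);
            break;
        case OP_ADD: {               // pop two values, push the sum
            int b = stack.back(); stack.pop_back();
            stack.back() += b;
            break;
        }
        case OP_HALT:
            return stack.back();
        }
    }
}
// run({OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT}) == 5
```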
Fabien Sanglard - The frontend/backend forms a pipeline which is very friendly to SMP systems/functional programming: is this an approach that proved satisfactory and was then generalized in idTech5 to every subsystem (physics, renderer, network, etc.)?
John Carmack - D3 was set up to have game code and the rendering front end run on one core, and the rendering back end that actually issued OpenGL calls on another. This provided good balance on the PC, where OpenGL driver overhead is high. For Rage, we optimized more for the consoles where graphics API overhead is very low, running all rendering on one thread and just the game code on another thread. In most performance limited areas, the game code still dominated.
More CPU cycles in Rage are spent in a general “job system” that takes lists of relatively fine grained work and parcels them out between all available cores. This was pretty much required for taking good advantage of the cell processors on the PS3, but it is generally a better direction than manual thread scheduling once you are above two or three cores.
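To make the "job system" idea concrete, here is a minimal sketch of the general pattern: a shared queue of fine-grained work items drained by one worker thread per core. It assumes standard C++ threading and is only an illustration of the direction he describes; Rage's actual job system is not public in this form.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class JobSystem {
public:
    explicit JobSystem(unsigned cores = std::thread::hardware_concurrency()) {
        for (unsigned i = 0; i < cores; ++i)
            workers.emplace_back([this] { workLoop(); });
    }
    ~JobSystem() {
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
        for (auto& w : workers) w.join();
    }
    // Submit one fine-grained work item; any idle core will pick it up.
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m); jobs.push(std::move(job)); }
        cv.notify_one();
    }
private:
    void workLoop() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return done || !jobs.empty(); });
                if (done && jobs.empty()) return;  // drain before exiting
                job = std::move(jobs.front());
                jobs.pop();
            }
            job();  // run the work item on whichever core grabbed it
        }
    }
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> jobs;
    std::vector<std::thread> workers;
    bool done = false;
};
```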
Fabien Sanglard - Is there any aspect of the design and architecture of the code you were particularly proud of in idTech4?
John Carmack - I think the in-game GUI system is also worth mentioning – it added a lot to the character of the game.
2004 (October): Interview for "The Making of Doom 3" book
In October 2004, Steven L. Kent released a pretty good book, "The Making of Doom 3", whose last chapter contains a very insightful interview with John Carmack. Since the book is out of print, I don't think it is an issue to transcribe part of the interview here:
Steven Kent - Was the Doom 3 graphics engine harder to create than past engines?
John Carmack - That side of the development went really nicely. The features were all pretty much ready years ago, and I spent a year or so tuning it up and adding the different options and parameterizations that people needed to get exactly the effects that they wanted. All of that went kind of according to our original schedule.
Steven Kent - I understand you created Doom 3 in C++.
John Carmack - DOOM 3 is our first game programmed in C++. I actually did the original renderer in straight C working inside the QUAKE III Arena framework, but most of it has been objectified since then, and all of our new code is set up that way.
It's been a mixed bag. There have been some bad things from going about it that way; but in general, it's been moderately positive for the development stuff. Having as much as we do now, having the large objects, it's been a useful thing.
Steven Kent - Just how much code went into creating the Doom 3 rendering engine?
John Carmack - The actual rendering side of things, the core of it, is not all that big. It's not that much larger than the previous stuff we have done.
That's actually something that causes me a fair amount of concern. We have more programmers working on DOOM 3... We've had five programmers at a time, which is much more than we have had on previous projects. Perhaps even more significantly, individual programmers have been creating subsystems effectively from scratch, where in the previous games I wrote the base of everything. I produced a functional system, and we would usually have a secondary programmer kind of flesh out the stuff that I wrote; but I wrote the entire basic framework, and it fit together nicely. I had a consistent vision throughout everything.
With DOOM 3, we started off with multiple programmers writing large subsystems from scratch, which means that things don't fit together as nicely as when we started off with one person setting everything up. There are always little inefficiencies you get when you have different people [who] don't [always] think in exactly the same way. You look for the synergies between the different areas and the ways you can simplify things down.
There's always a strong desire with functionality to kind of pile things on. Historically, I always resisted this. I've been one of the big believers in keeping it as simple and clear as possible. It pays off in maintenance and flexibility and stuff.
As we have more people working on things, a lot of features get added. That is definitely a two-edged sword. Sure, new features are great; but what's not always obvious is that every time you add a feature, you are at the very least increasing entropy in the code, if not actually breaking things. You are adding bugs and inefficiencies. That's one of my larger concerns with increasing the feature count and the number of developers. In previous games, when it all came from me, any time there was any problem I could go in very rapidly and find what the source of the problem was. Now, we've got lots of situations where if something is not right, it could be like, "Oh, that's Jan Paul's code, or Jim's code, or Robert's code." It's not so much a case where one person can just go in and immediately diagnose and fix things. So, there is a level of inefficiency. It's certainly manageable. Lots of projects that are managed in the world today require huge numbers of resources and complexity, but you add this additional layer of oversight and accept this additional level of inefficiency.
Steven Kent - What new features have you added to the Doom 3 engine?
John Carmack - Well, the fundamental thing about it on the rendering side is that it completely, properly unifies the lighting of surfaces. With previous games, you always had to use a collection of tricks and hacks to do your lighting. We would do light maps, blurring, ray-casted light maps for the static lighting, and static lights on static surfaces in the games. We used a different level-point Gouraud thing doing the static lights on dynamic surfaces moving around, and then mushed together all of the dynamic lights onto the dynamic surfaces to modify the Gouraud shading.
There was this matrix of four things: you would have static surfaces, dynamic surfaces, static lights, and dynamic lights, and there were four different sorts of ways that things got rendered. You might have lights this way and that way for one, and you might have shadows a different way and lighting a different way for another thing.
That was more or less forced because of the limitations that we had to work with in terms of what the hardware and processors could do. I always thought that was a negative thing. Things shaded differently depending on whether they were going to move or not. I referred to it as the 'Hanna-Barbera effect': you could always tell these rocks were going to fall away because they looked a little different than the cel painting behind them.
The big goal for DOOM 3 was to unify the lighting and shading so that everything behaved the same no matter where it came from, whether it's moving around or a fixed part of the world.
Lots of effort still goes into optimizing things when they are static, but the resulting pixels are exactly the same. Now, somewhat tied in with that, lighting becomes this first-class object rather than lots of lights mushed around with the world, kind of painted with light like the QUAKE series would do. The bump mapping ties in with the lighting and the shadowing to produce the DOOM 3 look and visuals.
There is the standard sort of QUAKE level of rendering in graphics, where you have light-mapped worlds and Gouraud-shaded characters. That is pretty much where the industry standard is right now. The industry standard will be basically bump-mapped surfaces and proper shadowing for the next five years or so. That's what defines the graphics side fundamentally on a technical level. Now, what you do with that light-surface interaction crosses the bridge between what the game does, what the scripting does, the interactions with the renderer, and how models are built; it ties in with lots of areas that you have as technical data points.
There are two different particle systems in DOOM 3. One can be [used for] moving things around and affecting things dynamically, like the smoke drifting out of guns. The other is more of a static effect type of thing, like smoke and bubbles coming in the world. You'll differentiate those two for performance reasons, because the things that are just effects in the world... you don't want to mess with them if they are not in view. So, particle effects are just dynamic models that get tossed in when necessary.
The animation subsystem is a big part of the coding in DOOM 3, where the motion of the characters determines a lot of things that happen. All this requires a very complicated set of interactions, and that is a lot of what Jim [Dosé] has been working on. People look at that and think of it as sort of a render feature, but the way DOOM 3 is set up, it's not really part of the renderer. It's mostly part of the game code. The renderer just looks at it as, "Okay, here is something that is generated as part of a model surface. Now I need to make lighting and shadowing for it."
And that also then ties in with the animation system and how it interacts with the rag doll system that Jan Paul wrote, which interacts with the physics and the precise collision detection, which does feedback with the renderer and model data structure. All of that kind of goes back and forth a lot.
The scripting system that we have, to let the level designers add more complex stuff, is something that is actually an outgrowth of fairly old technology. That took an interesting developmental path. Back in the original QUAKE days, the game code was done in the QuakeC interpreted language.
One of the licensees, Ritual, where Jim Dosé used to work, evolved and expanded that [the scripting system] in a lot of ways for their game Sin. Then that technology was licensed for Alice and used in Heavy Metal. They had been developing this branched path while we had gone with QUAKE II back to the in-code DLLs and stayed that way with QUAKE III.
We actually brought most of that evolution back in with DOOM 3. There was a rewrite where we restructured and cleaned up and got to apply the lessons learned. But that is not a good way to write game code. I actually think we made some mistakes by doing more stuff in script than we should have for development reasons. It [scripting] is a convenient thing for level designers to be able to make more interesting things happen than they could with just tying things together in the level editor.
One of our big, not so much technological improvements, but structural, architectural improvements, is the integration of all of our utilities into the executable. And that was something that actually saved a bunch of code. I moaned and complained about the code size for everything, but integrating the utilities saved probably some tens of thousands of lines of code that we used to have duplicated in slightly modified forms.
We had three places that code could live: the game itself, the level editor, and the off-line utilities. All of them had similar sets of things that were not quite similar enough that they could share a library or something. Pulling all this together was a nice way to unify all of that, and one of the strong reasons for unifying them was also to allow the editor to use the renderer exactly as the game uses it, which is something we have never done in our previous titles. It allows designers to see exactly what their level is going to look like with all the lighting, shadowing, and bump mapping, animated textures, animated particles, and all of that stuff, without having to actually load it up into the game. One of the real gating factors to creativity in the QUAKE generation of games was these significant preprocess times that you had to go through to get your simple, shaded view in the editor into the game with all of the [lighting].
In small areas, it might only have been several minutes, but in the full-size levels, the times were too long. Even when we were using these big, expensive multiprocessor machines, there were a lot of levels that would take over 30 minutes to process. Some of the licensees did not make as effective decisions on the complexity issues of the maps, they did not have the big expensive processing machines, and they would have levels that would take up to eight hours to process.
There's an interesting slope of interactivity that you get, where the most creative aspect is when you are messing with something interactively, where you are actually twiddling a knob and seeing something changing. You can only do that type of thing when you've got sub-second or hopefully sub-tenth-of-a-second feedback.
When you've got something that stretches up into a couple of minutes between tries, like reloading a level or something, you've got another level of things that you are willing to attempt to do. But when you start going down to hours at a try, you just make a rough cut and don't tweak and tune it nearly as much. That's one of the things we have with DOOM 3: the ability to work interactively with things inside the editor and inside the game until you get to the level of quality that you are specifically [after].
Steven Kent - Unified lighting, two particle systems, and on-the-fly rendering in the editor... is that everything?
John Carmack - If we just walk down through the code, there is just a ton of new things, but again... People want bullet points and they want to be able to categorize things. Reality is more complicated than that.
If you give up bullet points for a lot of stuff, you can say... a bunch of things about the physics systems and exact hit detection, and about the animation systems, and the game scripting system, and the game editing system, and the integration of the editor, you know, and the integration of the rest of the tool chains, and the ability to have video textures on surfaces, and the remote camera views on surfaces, and the synchronous-tick player stuff that avoids some of the frame-rate-dependent issues that we had with previous games.
There is just a ton of little things that I just look at as partial aspects. Step back and look at all the different things that you can do to give feedback to the player on a basic action like shooting a gun or hitting a monster. I went through and counted, and there are 30 different effects that we can use. There's the sound that it makes when the gun fires, and there's the animation of the gun. There's a potential kick of the hands, and there's a muzzle flash, and there's a smoke trail that comes out.
There's a projectile that comes out that may make a sound. It may leave a particle trail behind it. It will probably have a very dynamic light with it. It may impact a character. They [the designers] can put a blood decal on it. They can effect a pain animation on the character. They can turn it into a rag doll. They can have a blood particle stream ejecting from it. They can have fragments ejecting from it. They can have brass [jackets] ejecting from the gun. They do have impacts on the wall. They can leave a decal that can stay there. They can spawn an FX system to have additional particles coming off of that. They can have physics effects on the target that is hit. There's just this huge list of all of the stuff that goes down.
Steven Kent - Can you walk me through the evolution of your game engines?
John Carmack - Catacombs 3D was the very first commercial game that we did that had 3D aspects to it. That was limited in that the entire map was made out of nothing but tile blocks. You could put textures on the blocks. There were limits like, there were no doors. You just had blocks that would disappear. It had scaled, bit-mapped creatures. So that was the first 3D-action shooter.
The next step was Wolfenstein 3D, which was still a block-based map, but it had a few minor, new features in there. We had doors that slid side-to-side and push-walls and a few interactive features; but the characters and items were still basically the same.
The internal rendering was very different from Catacombs 3D. Catacombs used basically a line-rasterization approach, while Wolfenstein used a much more robust ray-casting approach. But the end result was that they rendered the [same sort of scenes].
Steven Kent - Was Wolfenstein the first game to use ray casting?
John Carmack - I don't know. Things like that aren't that important. I know people like to try and focus in on a specific technique, and where is the magic... hunt for the magic in something like that. That's really not the way things are, you know. It's never been the crucial part of the...
It's not the first use of this technique that makes it important. For any given way of doing things, there are many ways of approaching it. The visuals from Catacombs and Wolfenstein were very similar; the process was completely different. That's a good example of the fact that there is never a critical algorithm; there are always multiple ways of approaching things.
Another good example: the next major step for us was the original DOOM engine, which brought in some light-diminishing effects and took us away from block-based maps. Everything now could be arbitrarily designed and have different heights. DOOM was known for using BSP trees, more so than rasterization, but that was a small part of what made up the rendering. There was another competitive engine at the time, the Build engine, that was used for Duke Nukem and Shadow Warrior, which used a completely different rendering architecture internally and produced pictures that were basically the same effects. That's just another example of how the exact particulars of implementation are not that important on the larger-scale side of things.
Then, after DOOM, we made QUAKE. QUAKE was our first arbitrary 3D system where you could have a full, complete look up and down. Some people had extended the DOOM and the Build engines so you could look up and down, but that's not really what you wanted there. With QUAKE you had the ability to do much better lighting, where the lighting could cast fuzzy, blurry shadows.
The characters were no longer scaled bitmaps, they were actually 3D models. At that point in time, they were really, truly crude 3D models. They'd have less than 200 polygons, or triangles in most cases, compared to the thousands [of polygons] that we have now. But that was a really important step for us. After that, we evolved to start taking advantage of [hardware acceleration].
Since then, the moves from QUAKE to QUAKE II to QUAKE III have been much more evolutionary than the earlier steps. Lots of the code stayed common between all of those systems. That meant that the developers and the rest of the team did not have to go through a major relearning process from one step to another. When we added the hardware acceleration, it did not radically change the way they were doing things. It just made things look smoother and more colorful.
The new DOOM 3 engine was another radical change, where it fundamentally changed the way you build the textures, the way you light the levels and animate the characters, and the way that things move around and interact inside the world. It's been a really big change, very much similar to what we had to go through with earlier generations, but magnified by the fact that there is just so much more now that the games are expected to do. We're expected to have a better sound system, a better physics system, a better networking system, more things going on in the world. All of this stuff is expected to be [better].
Steven Kent - Does the Doom 3 engine lend itself to creating organic environments?
John Carmack - We can obviously make some really, really cool organic environments, and we have a few of those in the Hell scenes. The rendering of those is spectacularly cool. In fact, I worry a lot that our best foot is halfway through the game. You always want to put your best foot forward and get the great impression at the beginning. I think that the Hell stuff that we have, with the more organic settings, is much more visually stunning than the base level stuff that we have throughout the early part of the game. So yes, I think that it is extremely well suited for that.
Where it is not necessarily as well suited for organic environments is things like dense foliage. There is a clear set of things that you need to do [in order] to do that. If you want to have a ton of foliage, you need to make it non-shadow-casting. You want the leaves to be non-light-interacting. You throw away a lot of the cool stuff that the engine does if you do that; but if you want to render 500,000 leaves, you are going to want to make them single-texture, without affecting light, so that you can have them wave around without having to regenerate the shadow from the sun coming off of them every time they move.
If I were to write an engine specifically for rendering forest scenes, I would use a completely different algorithm than what I am using now. Some of the licensees have already done some of this. You can do a great job with DOOM 3 technology by manually applying a lot of the hacks. If you want a forest scene, you render tons and tons of little leaves. Instead of letting the leaves cast shadows, you just make a texture of a leafy shadow and project [it].
You do the same thing with motion blurs. It would be nice if this were [automatic]. It probably will be in the next generation. But if you wanted to do a propeller, you don't just make a propeller and spin it really fast. You make a blurred texture and you rotate that at a slower rate. You do the same thing for a shadow. You make a blurry shadow and project that.
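A hedged sketch of the kind of per-surface trade-off this foliage trick implies: leaves give up shadow casting and per-pixel light interaction in exchange for being cheap enough to animate in huge numbers. All names here are illustrative assumptions, not identifiers from the idTech4 source.

```cpp
struct SurfaceFlags {
    bool castsShadows       = true;  // rebuild shadow volumes when lit
    bool interactsWithLight = true;  // full bump-mapped per-light passes
};

struct Surface {
    SurfaceFlags flags;
    // geometry, material, ...
};

void drawSurface(const Surface& s) {
    if (s.flags.interactsWithLight) {
        // unified lighting path: one pass per light touching the surface
    } else {
        // cheap path: a single textured pass; the surface can wave around
        // every frame since no lighting data depends on it
    }
    if (s.flags.castsShadows) {
        // regenerate the shadow volume whenever the surface or light moves
    }
    // a forest scene instead projects a precomputed "leafy shadow"
    // texture, as described in the interview
}
```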
Steven Kent - Of the games you have worked on, which is your favorite?
John Carmack - QUAKE III Arena. I've always felt that there is a battle between what you want to do with a single-player game versus the multiplayer side of things. Making QUAKE III was being able to say, "This is an activity. It has no sequence or a story. It's a simple, straightforward activity for fun."
You sit down and can play it for a little while. There's not a moving story with deep characters or anything behind it. It knows what it is. It's really simple and it's good at that. It wasn't our most successful game, although it probably was our most successful engine license. I think that it was probably one of the more pure experiences. The original DOOM was a really big game, but in recent times, QUAKE III Arena was the game that I was really happy with.
Steven Kent - Have you started a laundry list of new features for your next game engine?
John Carmack - There are a bunch of things that are analytically intractable problems... things like soft shadows, proper anti-aliasing, motion blur, order-independent translucency. These are things for which there aren't closed-form solutions for arbitrary environments.
You can do tricks to address any one of them; but I pretty strongly believe that with all of these things that are troublesome in graphics, rather than throwing really complex algorithms at them, they will eventually fall to raw processing power. People write research papers about, "Here's a really tricky algorithm to do that," but it doesn't work in all cases. [The people] who are synthesists and think of those complex algorithms, they really pooh-pooh that. They don't like to hear that because they want it to fall to cleverness rather than raw power; but the way things have consistently, undeniably fallen over the years is to raw power.
We do very little procedural texturing; we just use hundreds of megs of textures. We don't use scan-line subdivision sorting for depth occlusion; we use depth buffers that take megabytes. Right now we're using supersampling for anti-aliasing. The same thing will happen for soft shadows, better translucency, motion blur... all of these things. I think the "clever" part will be figuring out ways that we can solve all of these at once. We'll still take a ton of samples, but we'll also be a little bit more intelligent about how we [take them].
The next generation engine should be suitable for a lot of things that are done in off-line rendering right now. I don't expect that you will see it used for motion pictures; but I do think that the next generation game engine technology will be completely capable of doing a lot of the off-line rendering that goes into TV and commercial-level production.
Some of that [graphics rendering] will run in real time and that will be great. Some of that stuff will be slower. Instead of running 30 frames per second, it may run at 1 second per frame. That's still a monumental improvement over the 30 minutes per frame that you get with off-line rendering [today].
People are coming at this from different [directions]. I think one of the things that Nvidia did that was really, really smart was that they bought a company called Exluna, which had just been formed by a bunch of people from Pixar. They have people who worked on Final Fantasy and Shrek; all of these people are big off-line renderer guys. They are coming together to write a brand-new off-line renderer. But they are writing it to be able to take some advantage of hardware acceleration, so they can start speeding things up.
Now, they are still planning on doing everything that all the previous renderers did, and better. They're not built around 3D acceleration. They're using the 3D acceleration for the low-hanging fruit to try to speed some things up. They're going to be a few times faster, which is still a big deal.
The game companies are coming from the other end. They're built around doing things really fast, and the quality is improving. We're going to be approaching the production process from two different sides. They might be ten times faster than the old software renderers used to be, and we might work up to having graphics with ten times higher quality.
Eventually you will reach a point where you might say, "If I can spend ten minutes per frame, I will use this type of renderer. If I can only spend one minute or ten seconds per frame, I will use this type of thing." The content and the way you go about [creating] it will be different, but it will be an interesting kind of...
You are always working within your hardware restraints and figuring out the technology. Then, when you make a second game with mature technology, you've got a lot more elbow room and the quality improves.
So it's not that far off. In fact, even the resolution at which they render on film, which is very high, between one and two megapixels depending on what they are doing, even that quality of resolution is not that far off.
We are still a long ways from photo-realistic. Things look a lot better, they have a lot more detail, but they are still clearly synthetic images. I think we have a ways to go yet before we'll be past that.
If you throw enough texture detail at some things, you can make images that look photo-realistic from a little bit of distance with current technology. It's just a matter of throwing very large textures that are digitized on there. The synthesis of those images is certainly possible with off-line rendering today. Next generation rendering technology will have a lot of scenes that look effectively photo-realistic if you have a little bit of distance from it.
You see that in games where they have more limited constraints today, like the sports games where they have got a very finite amount of issues that they have to deal with. Those are pretty damned close to photo-realistic right now. When you are across the room, sometimes you can't tell if you're looking at a game or a broadcast with certain...
We've got a harder job to do in the more general, first-person environment, but it's still not that far away. In the next generation of first-person games, if somebody walks by the outside of your office looking in, there will be a lot of scenes where it's not clear whether it was computer simulated or used a digital image.
Steven Kent - How long before we see games with Lord of the Rings-quality graphics?
John Carmack - It's a pretty clear path that we've got. There is no fundamental magic that needs to be pulled in. For individual scenes of that level of quality, the next generation engine is going to be an important thing. You will be able to throw larger and larger amounts of stuff at it without having to write a new engine to take advantage of it.
Once you cross the threshold of programmability... RenderMan, for instance, as a functional interface, has not changed much in ten years. The next game engine is going to be something like that. Once you have that programmability, making fundamental changes inside there does not really change your data. There still may be generations of improvements in the plumbing underneath things, but the interface may stay constant for ten years. It's hard to say.
There are a lot of things that add momentum to strategic decisions like this. With all of the work going on in hardware accelerators, there is a style of programming that they [the latest graphics chips] encourage if you want to take advantage of them. And, of course, they have so much power that you really want to take advantage of them.
When you were just doing things on the CPU, you could have lots of divergent ways of doing things. Some people were doing voxel stuff, and some people were doing splatting stuff, and some people were doing ray tracing and ray casting versus triangle rasterizers. You don't really have that many paths open with the hardware. A lot of things got more flexible just in this last generation with programmability, but there is still sort of a fundamental style of how the hardware wants to work. I think there are better odds in terms of something having long-term stability because of these other forces out there.
Steven Kent - Do you foresee this as your last engine?
John Carmack - Again, the next engine is going to be of a different character than the previous ones. Even DOOM 3 is set up by knowing that you have these limited hardware features available and you need to do the best you can with them, while the next one is going to be much closer to defining a programming language.
You reach a point where you don't need to keep inventing new programming [languages]. There may be reasons to invent new ones. One may be much easier to construct certain problems in. There really is not a reason to reinvent the C language every year. It may turn out to be very much like the evolution of programming languages, where every four or five years there is a bump to the standard. You include this bunch of things that people have tried and it's worked out well. But it's not a deep, fundamental change.
While there will continue to be new engines, just like there are new programming languages and off-line renderers, there is going to be much less of a pressing need for them. Some people will still do it because some people like making new things like that. But the chances of it becoming a landmark event on a broader, industrial scale, I think, are less likely.
The author, Steven L. Kent, was also interviewed by the now-defunct website "homelanfed.com":
HomeLAN - Many people think of John Carmack when they think of id and Doom. What was it like to talk with him for "The Making of Doom 3" book?
Steven Kent - Talking to John Carmack was a "wham, bam, thank you ma'am" experience. He was at work on his computer. I stepped in. He turned, smiled, and said nothing. I asked my first question, and he launched into a very specific and detailed answer with no warm-up. He was very generous with his time and answered every question I could think of. When I said, "I think that takes care of me," he swung back to work without a goodbye. It was sort of, "task complete." In truth, I really like interviewing John Carmack because he answers questions very thoroughly. He does not withhold information in my experience. He works hard to be cooperative.
I can relate to Steven's experience: every time I emailed John Carmack, the response had no header or footer (like when I reported a bug in Wolfenstein 3D for iPhone): pure information with no noise.
2004 QuakeCon keynote
Pre-recorded and played at QuakeCon 2004
John Carmack: 2003 at id Software studio
A 20-minute interview for GameSpot, broken down into four parts: