Eyes-on With CryENGINE 3

At E3, South Hall and West Hall are the main attractions. Before they open each morning, people accumulate like moisture on the outside of a steaming kettle in what are less lines and more video-game-hungry mobs that view those doors as the single ingress to nirvana. Once past security and the black lights, you're inside the boiling kettle, assaulted from all sides by lights, sound, and swag-pushers. Maintaining sanity in this environment is an exercise in futility.

And then there’s Concourse Hall. It’s marked by a little door located about halfway down a side hallway between the IndieCade area and South Hall, and it’s about as far from those zoos as you can get in every way, shape, and form. It’s well lit, globs of people aren’t yelling, and there’s a distinct lack of people toting around those orange G4/X-Play swag bags that simply and efficiently mark non-press (those and the god damn Oswald the Rabbit ears).

And it’s here that I sit down for one of the most graphically impressive and definitely the most technically impressive displays of the show: Crytek’s CryENGINE 3 demonstration. With animation technologist Kirthy Iyer at the helm, we get a peek at what the newest version of the engine can offer. Overall, it’s not too dissimilar to what was shown at GDC this year (minus the sizzle reel and mermen), but with a very (very) passionate CryENGINE engineer driving, we’re shown a bit more.

We start with a simple rundown of the interface and how the editor works (it’s “What You See Is What You Play,” a riff on the old What You See Is What You Get editor mantra) and jump into the new parallax occlusion mapping. It’s applicable to any surface, from walls to the ground, and, in our case, provides significant depth to the dirt road we’re walking around on (you can jump from editing to playing at any moment).
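For the curious, the core trick behind parallax occlusion mapping is simple to sketch: step along the view ray through a heightfield until it dips below the surface, then sample the texture there instead of at the original coordinate. This is a toy 1-D version of the idea (real engines do it per pixel, in 2-D, in a shader), with every name and number made up for illustration:

```python
def parallax_offset(height, u, view_slope, steps=32, depth=0.1):
    """Return the texture coordinate where the view ray hits the surface.

    height:     function mapping u in [0, 1] to height in [0, 1]
    u:          starting texture coordinate
    view_slope: horizontal distance travelled per unit of depth
    depth:      maximum displacement depth of the surface
    """
    ray_h = 1.0                          # ray starts at the top of the volume
    step_h = 1.0 / steps                 # height lost per step
    step_u = view_slope * depth / steps  # coordinate shift per step
    for _ in range(steps):
        if ray_h <= height(u):           # ray has gone beneath the surface
            return u
        ray_h -= step_h
        u += step_u
    return u

# A single bump in the middle of the texture:
bump = lambda u: 0.8 if 0.4 <= u <= 0.6 else 0.0

# Looking straight down, the sample is unchanged; at a grazing angle
# it shifts toward the bump's near face, which is what sells the depth.
straight = parallax_offset(bump, 0.35, view_slope=0.0)
grazing = parallax_offset(bump, 0.35, view_slope=5.0)
```

The view-dependent shift is why a flat dirt road reads as rutted and three-dimensional without any extra geometry.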

This leads us into checking out self-shadowing displacement, highlighted by the way facial tessellation takes normally flat ears and extrudes them into more realistic curves and contours, and, unlike in most engines, the displaced geometry can cast shadows on itself. This means those moments when you notice that a character’s teeth aren’t shaded by the lips, or when light seems to effortlessly pass through noses, don’t happen anymore.
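The self-shadowing test itself is conceptually cheap: a surface point is in shadow if, marching from it toward the light, the displaced surface ever rises above the shadow ray. A toy heightfield version, assuming made-up helper names and numbers:

```python
def in_shadow(height, x, light_slope, steps=64, step_x=0.01):
    """True if the heightfield occludes the light at position x.

    height:      function mapping x to terrain height
    light_slope: rise of the light ray per unit of horizontal travel
    """
    ray_h = height(x)
    for _ in range(steps):
        x += step_x
        ray_h += light_slope * step_x  # shadow ray climbs toward the light
        if height(x) > ray_h:          # displaced geometry blocks the ray
            return True
    return False

# A tall ridge sitting between our sample point and the light:
ridge = lambda x: 1.0 if 0.3 <= x <= 0.4 else 0.0

low_sun = in_shadow(ridge, 0.1, light_slope=0.5)    # grazing light
high_sun = in_shadow(ridge, 0.1, light_slope=10.0)  # light nearly overhead
```

It's this occlusion check against the displaced surface, rather than the original flat one, that keeps lips shading teeth and noses blocking light.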

Next we jump into the three options for hardware-based tessellation. The two less intensive options are Phong tessellation and PN triangles. Both, for the most part, tend to smooth out edges (desired) and inflate geometry (not desired). Displacement mapping, however, offers straight-up extrusion, directly recreating the detail encoded in a displacement map, and the degree of displacement can be modified dynamically.
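The smoothing behavior of Phong tessellation is easy to see in miniature: each new vertex at barycentric coordinates (u, v, w) is projected onto the tangent plane of every corner vertex, and the projections are blended with the same weights. Edges round off with no displacement map at all. A sketch using plain tuples (real engines run this in the hull/domain shader stages):

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(*vs):  return tuple(sum(c) for c in zip(*vs))
def scale(v, s): return tuple(x * s for x in v)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def project(q, p, n):
    """Project point q onto the plane through p with unit normal n."""
    return sub(q, scale(n, dot(sub(q, p), n)))

def phong_point(verts, normals, u, v, w):
    """Phong-tessellated position at barycentric (u, v, w)."""
    flat = add(scale(verts[0], u), scale(verts[1], v), scale(verts[2], w))
    return add(*(scale(project(flat, verts[i], normals[i]), b)
                 for i, b in enumerate((u, v, w))))

# A flat triangle whose corner normals fan outward: the midpoint of
# the bottom edge bulges upward instead of staying on the flat edge.
verts = ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0))
normals = ((-0.6, 0.0, 0.8), (0.6, 0.0, 0.8), (0.0, 0.0, 1.0))
mid = phong_point(verts, normals, 0.5, 0.5, 0.0)
```

This also makes the "inflate geometry" complaint visible: the blend always pushes interior points outward along the normal fan, which is exactly the ballooning the demo called undesirable.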

Then I see probably the single most impressive technological feat: dynamic texture application to tessellated geometry. Iyer shows it to us via a stone wall tessellated into realistic rocks, then applies a grassy moss texture to it. The texture warps itself to bend in and out of the crevices of the stone and changes based on how far into and where along the wall he moves it with his mouse. I don’t know if this was particularly difficult to implement, but it certainly made an impression on me.
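One plausible way to think about how a painted texture can follow displaced geometry (this is my guess at the idea, not CryENGINE's actual implementation): project each displaced surface point back onto the wall plane to get brush coordinates, so vertices deep in a crevice and vertices on a rock face get consistent moss coverage. All names and numbers here are hypothetical:

```python
import math

def decal_weight(point, brush_center, radius):
    """Blend weight for the painted texture at a surface point.

    point:        (x, y, z) displaced vertex position; z is depth off the wall
    brush_center: (x, y) position of the brush on the wall plane
    radius:       brush radius; weight falls to 0 at the edge
    """
    dx = point[0] - brush_center[0]
    dy = point[1] - brush_center[1]
    d = math.hypot(dx, dy)             # distance measured in the wall plane,
    return max(0.0, 1.0 - d / radius)  # so displacement depth is ignored

# Vertices at different displacement depths get the same weight as long
# as they project to the same spot on the wall, which is what makes the
# texture appear to bend in and out of the crevices:
shallow = decal_weight((0.1, 0.0, 0.02), (0.0, 0.0), 1.0)
deep    = decal_weight((0.1, 0.0, 0.30), (0.0, 0.0), 1.0)
far     = decal_weight((2.0, 0.0, 0.00), (0.0, 0.0), 1.0)
```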

Iyer then shows us a bit of the particle shadowing, self-shadowing, and blurring effects, but that’s covered to about the same extent in the aforementioned GDC video. We also see something else we’ve seen before, real-time reflections, but we manage to get some additional technical insight. It turns out that unlike some of the global real-time reflections in the new Unreal Engine, CryENGINE uses screen-space reflections, meaning if you don’t see it on screen, it doesn’t get reflected. This works out fairly well for the most part, but on the occasion when you’re standing in front of a mirror with a fire going behind you, you don’t see the fire in the mirror.
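The screen-space limitation boils down to one check: a reflection can only be drawn if the reflected object's color already exists somewhere in the rendered frame. A minimal sketch of that constraint, with made-up function names:

```python
def can_reflect(screen_uv):
    """True if a point's screen-space projection is inside the frame."""
    u, v = screen_uv
    return 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0

def reflection_color(frame, screen_uv, fallback=(0, 0, 0)):
    """Sample the rendered frame if possible, else use a fallback."""
    if not can_reflect(screen_uv):
        return fallback                # off-screen: nothing to sample
    w, h = len(frame[0]), len(frame)
    x = min(int(screen_uv[0] * w), w - 1)
    y = min(int(screen_uv[1] * h), h - 1)
    return frame[y][x]

frame = [[(255, 120, 0)] * 4 for _ in range(4)]    # a frame full of fire
on_screen  = reflection_color(frame, (0.5, 0.5))   # fire visible in frame
off_screen = reflection_color(frame, (0.5, -0.3))  # fire behind the camera
```

The fire-behind-you case is exactly the second call: the reflected point projects outside the viewport, so the mirror falls back to no reflection at all.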

We go on to see the “dynamic AI navigation” in the editor, which now works with “multi-layer navigation meshes.” You simply define a bounding region where you want an AI to meander about, and the engine will calculate navigation meshes, even over buildings with multiple floors. This is the only pre-calculation I see; from there on out, you can dynamically alter the pathways with extensions and obstructions and watch the paths update in real time.
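The dynamic part can be illustrated with a much cruder stand-in for a navigation mesh, a walkable grid: precompute only the walkable region, then re-run pathfinding whenever an obstruction appears. A BFS sketch (nothing here is CryENGINE code):

```python
from collections import deque

def find_path(walkable, start, goal):
    """Shortest path on a grid of walkable cells, or None."""
    queue, came = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk parent links back to start
                path.append(cell)
                cell = came[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in came:
                came[nxt] = cell
                queue.append(nxt)
    return None

# A 5x5 open region: the path runs straight across.
region = {(x, y) for x in range(5) for y in range(5)}
before = find_path(region, (0, 2), (4, 2))

# Drop an obstruction across the middle column, leaving one gap at the
# top; the recomputed path detours through it, no precomputation needed.
region -= {(2, y) for y in range(4)}
after = find_path(region, (0, 2), (4, 2))
```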

The demo closes out with ocean and lighting effects. Instead of using sprites for the ocean, the new CryENGINE actually uses geometry and tessellation. We first see the water swelling and moving around, but then Iyer switches to a wireframe view and we can still see the polygons undulating away. This means we get far more accurate visual representations of physical interactions with the ocean, shown to us by dropping things into the water, turning on the rain, and fiddling with weather conditions.
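A common way to drive an ocean mesh like this, and a reasonable mental model for what the wireframe view shows, is displacing each vertex by a sum of travelling sine waves, so the surface (and anything physics touches) genuinely moves. The wave parameters below are invented for illustration:

```python
import math

WAVES = [
    # (amplitude, wavelength, speed, direction_x, direction_y)
    (0.50, 8.0, 1.0, 1.0, 0.0),
    (0.25, 3.0, 1.6, 0.7, 0.7),
    (0.10, 1.0, 2.2, 0.0, 1.0),
]

def ocean_height(x, y, t):
    """Vertical displacement of the ocean mesh at (x, y) and time t."""
    h = 0.0
    for amp, length, speed, dx, dy in WAVES:
        k = 2.0 * math.pi / length           # wave number
        phase = k * (dx * x + dy * y) - speed * t
        h += amp * math.sin(phase)
    return h

# The same vertex undulates over time, always within the summed
# amplitudes (0.5 + 0.25 + 0.1 = 0.85):
h0 = ocean_height(2.0, 3.0, 0.0)
h1 = ocean_height(2.0, 3.0, 1.5)
```

Because the displacement is real geometry rather than an animated sprite, a splash or raindrop can interact with the actual height of the surface at that instant.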

We also see a bit of the real time sun movement/time-of-day effects that interact with the environment. It doesn’t just appear to be light reflections off the surface; it seems like we get entirely different light profiles that affect subsurface properties of the water, but that’s just speculation.

And, just as an aside, it’s great listening to someone so passionate about their work talk about the things they produce. Iyer was pretty much a short John Carmack. You can check out some of the water stuff I was talking about in the video where CryENGINE director of business development Carl Jones walks us through some of the oceanic highlights.
