
Building a GPU fog of war system for Halls of Greed

01 Overview

Halls of Greed never shipped, but I still like the fog of war system I built for it.

The game needed dungeon visibility that cared about the actual rooms. A circle around the player was not enough. Walls, doorways, corners, and weird modular kit pieces all needed to block sight. I also wanted the result to be useful outside the fog material. If an enemy was hidden, its health bar and minimap icon should be able to know that too.

The system ended up as a small GPU pipeline: author blockers, bake them into a signed distance field, trace visibility from active casters, blur the mask, then expose one world-space texture for shaders and gameplay.

02 The visibility texture

`WarFogRenderer` treats the fog as a square volume in world space. The object's transform says where the volume lives and how much map area it covers. World positions get remapped into that texture, so everything samples fog in the same coordinate system.
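That remapping can be sketched in a few lines. The names and signature here are mine, not the shipped code: a world-space XZ position maps into [0, 1] fog texture coordinates using only the volume's center and size.

```python
def world_to_fog_uv(world_x, world_z, volume_center_x, volume_center_z, volume_size):
    """Remap a world-space XZ position into [0, 1] fog texture coordinates.

    Points outside the volume fall outside [0, 1], and the sampler can
    treat them as fully fogged or fully visible, whichever the game wants.
    """
    u = (world_x - volume_center_x) / volume_size + 0.5
    v = (world_z - volume_center_z) / volume_size + 0.5
    return u, v
```

Because every system goes through the same remap, the fog plane, the VFX, and the gameplay queries all agree on where the fog boundary actually is.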

The renderer owns three textures:

fog-textures.txt
m_SceneSDF  distance to nearest blocker
m_PrePass   raw visibility mask
m_BlurPass  softened mask for shaders

That separation kept the per-frame work easy to reason about. Static dungeon geometry goes into the SDF once. Each update only uploads active vision casters, traces the raw mask, blurs it, and binds the result globally.
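The enable/update split could be sketched like this, with hypothetical function names standing in for the real bake, trace, blur, and bind passes:

```python
class FogPipeline:
    """Structural sketch of the once-vs-per-frame split, not the real renderer."""

    def __init__(self, bake_sdf, trace, blur, bind):
        self.bake_sdf, self.trace, self.blur, self.bind = bake_sdf, trace, blur, bind
        self.scene_sdf = None

    def on_enable(self, blockers):
        # Static geometry is baked exactly once, not every frame.
        self.scene_sdf = self.bake_sdf(blockers)

    def update(self, casters):
        # Per frame: trace the raw mask, soften it, publish the result.
        pre_pass = self.trace(self.scene_sdf, casters)
        blur_pass = self.blur(pre_pass)
        self.bind(blur_pass)
        return blur_pass
```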

SDF DEBUG PLACEHOLDER

03 Authoring blockers

Vision blockers were normal Unity components. `WarFogCollider` supports rectangles, circles, and polygon data from `MapCollider`.

WarFogCollider.cs
public enum ColliderType
{
    Rectangle,
    Circle,
    MapCollider
}

The rectangle and circle modes worked well for modular wall chunks. Some wall prefabs carried their own fog collider cubes, so the blocking data traveled with the art. For larger room shapes, the map collider path packed polygon points into a GPU struct.

That mix was useful in production. Small kit pieces could handle their own visibility, but room-scale silhouettes could still be authored as polygons when that made more sense.

The easy bug here is transform drift. The renderer has to convert world position, scale, radius, rotation, and polygon points into fog texels before the compute pass sees them. If that math is off, the whole thing still runs, but the shadow edge slides away from the wall. Miserable bug. I kept that conversion centralized for that reason.
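A minimal sketch of that centralized conversion for the circle case (names and parameters are illustrative, not the actual `WarFogRenderer` API). The radius has to be scaled by texels-per-world-unit, which is exactly the term that drifts when the math gets duplicated:

```python
def circle_to_texels(center_x, center_z, radius,
                     volume_center_x, volume_center_z, volume_size, resolution):
    """Convert a world-space circle blocker into fog-texel space.

    Keeping position and radius conversion in one place means a unit
    mistake shows up everywhere at once, instead of as a subtle shadow
    edge sliding away from one kind of wall.
    """
    texels_per_unit = resolution / volume_size
    tx = ((center_x - volume_center_x) / volume_size + 0.5) * resolution
    tz = ((center_z - volume_center_z) / volume_size + 0.5) * resolution
    return tx, tz, radius * texels_per_unit
```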

COLLIDER AUTHORING PLACEHOLDER

04 Tracing through the SDF

The static compute pass writes distance to the nearest blocker. Circles, rectangles, and polygons all collapse into the same texture.

WarFogSDF.hlsl
// Union of two SDF shapes: the nearest blocker surface wins.
float merge(float shape1, float shape2)
{
    return min(shape1, shape2);
}
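A CPU-side sketch of the same bake, assuming standard distance functions for circles and axis-aligned rectangles (the actual pass runs in a compute shader, and the polygon path is omitted here):

```python
import math

def sdf_circle(px, py, cx, cy, r):
    # Signed distance from a point to a circle blocker (negative inside).
    return math.hypot(px - cx, py - cy) - r

def sdf_rect(px, py, cx, cy, half_w, half_h):
    # Signed distance to an axis-aligned rectangle blocker.
    dx = abs(px - cx) - half_w
    dy = abs(py - cy) - half_h
    outside = math.hypot(max(dx, 0.0), max(dy, 0.0))
    inside = min(max(dx, dy), 0.0)
    return outside + inside

def bake_sdf(width, height, blockers):
    # blockers: list of (sdf_fn, params); merging is just min over all shapes.
    return [[min(fn(x, y, *params) for fn, params in blockers)
             for x in range(width)]
            for y in range(height)]
```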

Once the dungeon is an SDF, visibility tracing gets simple. Empty space gives large ray steps. Tight areas near walls give small steps. If the trace crosses into blocker distance, visibility dies.

The dynamic pass loops over active `WarFogCaster` entries. Each caster is just texture-space position, radius, and a shadow flag. For each output pixel, the shader traces from that pixel toward the caster through `m_SceneSDF`.

WarFogComputeFog.compute
// March at most 16 steps from this pixel toward the caster.
for (int i = 0; i < 16 && rayProgress < light_distance; i++)
{
    float2 samplePoint = position + direction * rayProgress;
    float sceneDist = scene_input[samplePoint].r;

    // Inside or touching a blocker: this pixel cannot see the caster.
    if (sceneDist <= 1.1)
        return 0.0;

    // Rays that graze close to geometry dim the result for a soft edge.
    lightContribution = min(lightContribution, sceneDist / rayProgress);

    // Sphere tracing: advance by the safe distance, jittered to hide banding.
    rayProgress += sceneDist * (0.8 + 0.4 * random(samplePoint));
}

The random step jitter is not fancy, but it helps. Without it, the trace pattern starts to show up around reveal edges.

The pass writes visibility only. White is visible. Black is blocked or outside range. The material decides what that means visually.

05 Making it usable

The raw mask is too crunchy, so the renderer runs a small separable blur. I kept it restrained. It cleans up aliasing, but corners still read as corners. Pretty fog that makes a dungeon harder to read is a bad trade.
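The separable trick is just a 1D kernel run horizontally over rows, then vertically over columns. A reference sketch in Python (the clamp-to-edge sampling here is my assumption, not necessarily what the shader does):

```python
def blur_1d(row, kernel):
    # Clamp-to-edge 1D convolution; kernel weights should sum to 1.
    half = len(kernel) // 2
    n = len(row)
    return [sum(kernel[k] * row[min(max(i + k - half, 0), n - 1)]
                for k in range(len(kernel)))
            for i in range(n)]

def separable_blur(image, kernel):
    # Horizontal pass over rows, then vertical pass over columns.
    rows = [blur_1d(r, kernel) for r in image]
    cols = [blur_1d(c, kernel) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

Two 1D passes cost O(k) taps per pixel instead of O(k²) for a full 2D kernel, which is why even a restrained blur is cheap enough to run every update.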

After the blur pass, the renderer binds the result as a global shader texture:

WarFogRenderer.cs
// Bound once per update; any shader can rebuild fog UVs from these.
Shader.SetGlobalTexture(WarFogBufferSample, m_BlurPass);
Shader.SetGlobalFloat(WarFogScale, transform.lossyScale.x);
Shader.SetGlobalVector(WarFogPosition, new Vector4(position.x, 0, position.z, 0));

Any shader with world position can sample the same fog value. The main fog plane uses it, but custom lighting, VFX, decals, and object materials can read it too. That was the part I cared about most. The visibility answer exists once, then the rest of the game can reuse it.

There was also a CPU-side sampling path. `WarFogSampler` could ask the GPU whether a point was in fog, then update `InFog` through async readback. `WarFogHider` and UnityEvent hooks could use that for enemies, health bars, minimap icons, or reveal effects.

I would call that part a hook, not a finished production feature. In the checked-in renderer, the sampler update calls are commented out in the main loop. The shape is there, but it still needed decisions about sampling rate, latency, and which systems actually deserved CPU side fog state.
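The shape of that hook, sketched outside Unity (class names are mine; in the real thing the request would go through `AsyncGPUReadback` and the value would be a fog texel):

```python
class FogSampleRequest:
    # Stand-in for an async GPU readback: the answer lands a few frames late.
    def __init__(self, latency_frames, value):
        self.frames_left = latency_frames
        self.value = value

class WarFogSamplerSketch:
    def __init__(self):
        self.in_fog = True      # Conservative default until the first readback lands.
        self.pending = None

    def request(self, gpu_sample, latency_frames=2):
        # Kick off a readback only if the previous one has completed.
        if self.pending is None:
            self.pending = FogSampleRequest(latency_frames, gpu_sample())

    def tick(self):
        # Called once per frame; applies the result when the readback lands.
        if self.pending is None:
            return
        self.pending.frames_left -= 1
        if self.pending.frames_left <= 0:
            self.in_fog = self.pending.value < 0.5
            self.pending = None
```

The conservative default matters: a hidden enemy flickering visible for two frames at spawn is exactly the kind of latency decision this path still needed.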

06 What I would clean up now

The first thing I would fix is resource lifetime. Runtime graphics code always finds the one enable/disable path you forgot about, so buffers and render textures need boring defensive cleanup.

Dynamic blockers also need a real policy. The SDF is built on enable, which is fine for static dungeon walls. Doors, moving walls, and destructible props would need either a scheduled SDF rebuild or a separate dynamic blocker layer.

I would also move the debug helpers into editor tooling. Saving the SDF to PNG and previewing the fog texture are useful, but they should be explicit inspection tools, not random runtime leftovers. I would probably also try rendering the texture in screen space, culling the blockers I don't care about, and making everything dynamic. Maybe I'll revisit this someday.

07 Why I still like it

The part that holds up is the flow of data. Level-authored blockers become one world-space visibility texture. The GPU does the texture-sized work. Materials decide the final look. Gameplay can sample the same answer when it needs to.

It solved the actual dungeon problem without turning into a pile of CPU raycasts, hand-authored reveal zones, and one-off shader hacks. That is enough for me to still want it in the portfolio.
