
Ape GFX, the graphics layer I actually needed

01 Overview

Ape GFX started from a small frustration that gets bigger the longer you build rendering code: drawing a triangle is easy, but keeping the graphics contract honest after the triangle grows into render targets, depth textures, shader reload, storage buffers, and compute is where the API starts to matter.

The goal was never to build a full engine in one pass. That would force decisions about materials, cameras, scenes, assets, and renderer architecture before the lower layer had proved itself. The useful target was smaller and harder to fake: a low level graphics framework for Odin desktop games, with enough validation to catch bad usage before it turns into backend behavior.

That is the shape Ape GFX has now. It is D3D11 first, Slang first, Odin first, and deliberately narrow. Vulkan is still a pressure test for the public contract, not the thing driving every early decision.

02 The problem we are solving

Game rendering code wants control. It also wants guardrails.

Raw D3D11 gives you control, but it lets every caller rediscover the same sharp edges. Which resources need views? Which shader slots are required? Is this texture currently an attachment, a sampled input, or a storage target? Did this handle come from the current context? Did the shader layout match the vertex buffer layout, or did the driver reject it ten calls later?

Ape GFX takes those questions seriously at the API boundary. The framework keeps graphics programming visible, makes resources explicit, validates descriptors before backend calls, and turns Slang reflection into runtime checks and generated Odin helpers. Common mistakes should fail early, with messages that point back to the bad descriptor, binding, or pass.

The current public frame shape is intentionally familiar:

snippet-01.odin
gfx.begin_pass(&ctx, {
	label = "main pass",
	action = gfx.default_pass_action(),
})

gfx.apply_pipeline(&ctx, pipeline)
gfx.apply_bindings(&ctx, bindings)
triangle_shader.apply_uniform_FrameUniforms(&ctx, &frame_uniforms)
gfx.draw(&ctx, 0, vertex_count)

gfx.end_pass(&ctx)
gfx.commit(&ctx)

That flow borrows the good part of Sokol: a small state machine that is easy to read in a frame loop. Ape keeps that surface, then fills in the contracts needed for a native desktop renderer.

03 The line around the framework

The package boundary is plain on purpose.

`engine/gfx` is the low level graphics API. `engine/shader` loads `.ashader` packages and converts them into `gfx.Shader_Desc`. `engine/app` is sample grade windowing. The samples and tools prove the contract, but they are not the framework API.

That boundary keeps `gfx` below the renderer. There is no material system. There is no scene graph. There is no camera layer. There is no automatic resource manager. A game or renderer layer can build those pieces later, but `gfx` should not guess what they need before real callsites exist.

This also explains the v0.1 scope. D3D11 is the production backend because it is available, debuggable, and good enough to test the public API against real GPU behavior. The null backend exists for smoke tests. Vulkan exists as a visible future target, but not as a runtime contract yet.

04 Handles, contexts, and why lifetime stays explicit

Every GPU object in Ape GFX is a distinct opaque handle:

snippet-02.odin
Buffer :: distinct u64
Image :: distinct u64
View :: distinct u64
Sampler :: distinct u64
Shader :: distinct u64
Pipeline :: distinct u64
Compute_Pipeline :: distinct u64

Zero is always invalid. Live handles are generational IDs that include a slot, a generation, and the context that created them. That gives the framework enough information to reject three common classes of bugs: invalid handles, stale handles, and handles used with the wrong context.
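The exact bit layout is internal to the framework, but the idea can be sketched with Odin's `bit_field`. The field widths and names below are hypothetical, not Ape GFX's real encoding; the point is that one 64-bit value carries enough information to check all three bug classes:

```odin
// Hypothetical packing of a generational handle; the real widths and
// field names in Ape GFX may differ. Zero stays invalid because a
// live slot always carries a nonzero generation.
Handle_Bits :: bit_field u64 {
	slot:       u32 | 24, // index into the owning context's pool
	generation: u32 | 24, // bumped when the slot is destroyed and reused
	ctx_id:     u16 | 16, // identifies the context that created the handle
}

// One comparison per bug class: invalid (zero), stale (generation
// mismatch), wrong context (ctx_id mismatch).
handle_ok :: proc(h: Handle_Bits, ctx_id: u16, pool_generation: u32) -> bool {
	return (transmute(u64)h) != 0 &&
		h.ctx_id == ctx_id &&
		h.generation == pool_generation
}
```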

Resource creation follows the same pattern everywhere:

snippet-03.odin
vertex_buffer, ok := gfx.create_buffer(&ctx, {
	label = "triangle vertices",
	usage = {.Vertex, .Immutable},
	data = gfx.range(vertices[:]),
})
if !ok {
	fmt.eprintln("vertex buffer failed:", gfx.last_error(&ctx))
	return
}
defer gfx.destroy(&ctx, vertex_buffer)

The older Sokol style `make_*` helpers still exist as compatibility aliases, but new code should prefer `create_*`. Returning `(handle, ok)` makes failure visible at the callsite, which fits Odin better than returning only an invalid handle and hoping the caller checks it.

Above that, lifetime policy belongs to the renderer or asset layer. `gfx` reports leaked live resources at shutdown and rejects stale handles, but it does not decide when a texture or buffer should die.

05 Views are the center of the resource model

The view model carries most of the design. Images and buffers do not bind directly. A `View` is the public object that describes how a resource is used.

A single image can have one view as a color attachment and another view as a sampled texture:

snippet-04.odin
offscreen_image, image_ok := gfx.create_image(&ctx, {
	label = "offscreen color",
	usage = {.Texture, .Color_Attachment},
	width = 768,
	height = 768,
	format = .RGBA8,
})

color_view, color_ok := gfx.create_view(&ctx, {
	label = "offscreen color attachment",
	color_attachment = {
		image = offscreen_image,
		format = .RGBA8,
	},
})

sample_view, sample_ok := gfx.create_view(&ctx, {
	label = "offscreen sampled color",
	texture = {
		image = offscreen_image,
		format = .RGBA8,
	},
})

That may look like extra work in a tiny sample, but it is the right model once the same resource can move between passes. D3D11 already thinks this way with SRV, UAV, RTV, and DSV objects. Vulkan also wants image views and descriptor metadata. Ape exposes the idea early instead of hiding it behind backend code and then needing to add it later.

The current view flavors are sampled textures, storage images, storage buffers, color attachments, and depth stencil attachments. `apply_bindings` only accepts sampled or storage views. `begin_pass` only accepts attachment views. If a render pass tries to sample an active attachment, or if two bound views create a read and write hazard over the same resource, validation rejects it before the backend sees the command.
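As a concrete illustration of that rule, and reusing the views from the snippet above, this is the shape of usage the validation layer is described as rejecting. The generated setter name is hypothetical:

```odin
gfx.begin_pass(&ctx, {
	label = "hazard pass",
	color_attachments = {0 = color_view}, // pass writes offscreen_image
	action = gfx.default_pass_action(),
})

// sample_view reads the same offscreen_image the pass is writing, so
// validation should reject the bindings before any backend call runs.
some_shader.set_view_ape_texture(&bindings, sample_view) // hypothetical generated setter
gfx.apply_bindings(&ctx, bindings)
```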

That is a small rule with a large payoff. It makes pass usage readable in user code, and it gives a future Vulkan backend enough public information to insert resource transitions internally.

06 Descriptors are contracts, not loose bags of fields

The descriptors use Odin struct literals and `bit_set` usage flags because those callsites read naturally:

snippet-05.odin
texture, ok := gfx.create_image(&ctx, {
	label = "albedo",
	usage = {.Texture, .Immutable},
	width = image_width,
	height = image_height,
	format = .RGBA8,
	data = gfx.range(pixels[:]),
})

A zeroed descriptor is invalid unless its defaults are documented. `Buffer_Desc.size` can be inferred from initial data. `Image_Desc.kind` defaults to `.Image_2D`. `Sampler_Desc` defaults to nearest filtering and repeat wrap. Those defaults are useful because they are explicit in the contract, not because zero happens to slip through.
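A minimal sketch of leaning on those documented defaults; `create_sampler` is assumed here to follow the same `create_*` pattern as the other resources:

```odin
// kind is omitted and defaults to .Image_2D by documented contract.
image, image_ok := gfx.create_image(&ctx, {
	label  = "defaulted 2D image",
	usage  = {.Texture, .Immutable},
	width  = 256,
	height = 256,
	format = .RGBA8,
	data   = gfx.range(pixels[:]),
})

// Filter and wrap fields left zeroed: the documented defaults give
// nearest filtering and repeat wrap, so this sampler is legal by
// contract, not by accident.
sampler, sampler_ok := gfx.create_sampler(&ctx, {
	label = "defaulted sampler",
})
```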

The validation layer checks shapes before native calls. It rejects storage buffers with update flags, multisampled sampled views, missing shader stages, noncontiguous color formats, bad vertex layouts, invalid attachment dimensions, and a long list of smaller mistakes. Backend code still validates backend limits, but the public layer handles as much as it can without knowing native objects.

That line matters. A thin wrapper mostly changes spelling. Ape GFX makes the legal shapes of the API visible.

07 Shaders are part of the runtime contract

The shader pipeline sits inside the runtime contract.

The normal path is:

snippet-06.text
assets/shaders/*.slang
  -> tools/ape_shaderc
  -> build/shaders/*.ashader
  -> assets/shaders/generated/<shader>/bindings.odin
  -> engine/shader.load
  -> engine/shader.shader_desc
  -> gfx.create_shader

`ape_shaderc` uses the Slang API directly rather than shelling out to the command line `slangc`. It emits D3D11 DXBC for the current runtime backend and SPIR-V for future Vulkan work. It also writes an `.ashader` package with bytecode and compact reflection records, then generates Odin bindings for the shader.

Those generated bindings cover the boring but error prone parts:

snippet-07.odin
textured_quad_shader.set_view_ape_texture(&bindings, offscreen_sample_view)
textured_quad_shader.set_sampler_ape_sampler(&bindings, sampler)
cube_shader.apply_uniform_FrameUniforms(&ctx, &cube_uniforms)

They also generate simple vertex layout helpers and compute dispatch helpers. In the GFX Lab sample, the user side asserts that the Odin vertex structs match the reflected shader layout:

snippet-08.odin
#assert(u32(size_of(Cube_Vertex)) == cube_shader.VERTEX_STRIDE)
#assert(offset_of(Cube_Vertex, position) == cube_shader.ATTR_POSITION_OFFSET)
#assert(offset_of(Cube_Vertex, color) == cube_shader.ATTR_COLOR_OFFSET)

This keeps shader metadata connected to host code. The D3D11 backend then checks required uniforms, views, samplers, vertex buffers, uniform sizes, view kinds, storage access, and pipeline layout compatibility before draw or dispatch.

The generated path is intentionally narrow. It supports simple packed vertex inputs, representable uniform fields, sampled textures, samplers, storage images, storage buffers, and compute thread group sizes. Manual `Pipeline_Desc.layout` overrides still exist for compact formats, multiple streams, instancing, and engine specific layouts.
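For comparison, a manual layout override might look something like this. `create_pipeline` and every field name under `layout` are hypothetical here, since the real `Pipeline_Desc` fields are not shown above; the sketch only illustrates the kind of control the manual path keeps open:

```odin
// Sketch only: a hand-written two-stream layout for instanced drawing.
// Field names are illustrative, not Ape GFX's actual Pipeline_Desc.
pipeline, ok := gfx.create_pipeline(&ctx, {
	label  = "instanced sprites",
	shader = sprite_shader, // assumed shader handle
	layout = {
		attrs = {
			0 = {format = .Float2,  buffer_index = 0}, // per-vertex position
			1 = {format = .UByte4N, buffer_index = 0}, // packed per-vertex color
			2 = {format = .Float4,  buffer_index = 1}, // per-instance data
		},
	},
})
```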

That split feels right. The generated path should make normal shaders pleasant. The manual path should stay available when a renderer needs more control.

08 Why D3D11 comes first

D3D11 is the first hard backend for the abstraction. Vulkan can stress it later.

It gives Ape a real device, real swapchain, real views, real debug names, real compute, and real validation pressure without forcing Vulkan memory allocation and synchronization design into the first public API pass. The current backend proves swapchain rendering, resize handling, vertex and index buffers, immutable and dynamic textures, mip chains, sampled depth, render to texture, multiple render targets, MSAA resolve, storage images, storage buffers, compute passes, and buffer readback.

That is enough surface area to find bad API ideas. If the D3D11 sample code becomes awkward, the API needs work. If the D3D11 validation cannot express a resource rule cleanly, the public contract needs work. Vulkan should arrive after those problems are boring, because Vulkan will multiply unclear decisions.

The backend stays behind the public package boundary. Native objects live in D3D11 state maps. Labels become debug names. HRESULT failures map to typed error categories, including device loss where D3D11 can report it. Public query helpers expose backend features and limits without leaking native handles.

09 Why not the alternatives

Sokol is the strongest influence on the callsite. It is small, direct, descriptor based, and built around explicit handles. Ape keeps that part. The reason not to use Sokol directly is scope. Sokol has to care about a broad matrix of platforms and APIs. Ape is narrower: Odin, desktop, Slang, D3D11 now, Vulkan later. That narrower target lets the API lean harder into Odin struct literals, generated bindings, typed errors, and a view model designed around the renderer we want to build.

WebGPU has excellent ideas around binding layouts, resource usage, feature limits, and diagnostics. Ape should borrow those contracts over time. It should not copy the browser shaped parts: promises, WGSL as the center of the world, mandatory security constraints, or heavy device setup ceremony. The post v0.1 design notes already point toward optional binding groups and pipeline layouts, but the simple render loop should stay direct.

Raw D3D11 or raw Vulkan would give maximum control, but every sample would pay for it. A renderer should spend its complexity budget on render graph policy, materials, visibility, batching, streaming, and content. It should not repeat handle lifetime checks, shader slot mapping, offscreen pass validation, and descriptor defaults in every subsystem.

A full engine would move too fast in the other direction. It would answer questions that are still open. Ape GFX deliberately stops below that layer so the renderer can grow from real needs.

10 What the GFX Lab sample proves

The useful sample is `samples/d3d11_gfx_lab`, because it exercises composition instead of one isolated call path. It creates an offscreen color target and depth target, renders a rotating depth tested cube into them, samples the offscreen color in a swapchain pass, updates data on resize, uses generated Slang layouts and bindings, and keeps shader reload in the sample layer.

The frame has two passes:

snippet-09.odin
gfx.begin_pass(&ctx, {
	label = "lab offscreen cube pass",
	color_attachments = {0 = offscreen_color_view},
	depth_stencil_attachment = offscreen_depth_view,
	action = offscreen_action,
})
gfx.apply_pipeline(&ctx, ape_sample.reloadable_shader_program_pipeline(&cube_program))
gfx.apply_bindings(&ctx, cube_bindings)
cube_shader.apply_uniform_FrameUniforms(&ctx, &cube_uniforms)
gfx.draw(&ctx, 0, i32(len(cube_indices)))
gfx.end_pass(&ctx)

gfx.begin_pass(&ctx, {
	label = "lab swapchain display pass",
	action = swapchain_action,
})
gfx.apply_pipeline(&ctx, ape_sample.reloadable_shader_program_pipeline(&texture_program))
gfx.apply_bindings(&ctx, texture_bindings)
gfx.draw(&ctx, 0, i32(len(texture_indices)))
gfx.end_pass(&ctx)
gfx.commit(&ctx)

That sample stresses the intended API composition. The code leaves the graphics work visible while removing D3D11 boilerplate. It shows the current balance: explicit resources, generated shader help, visible pass boundaries, and clear failure checks.

11 Validation as part of the product

The validation script is part of the framework story:

snippet-10.powershell
.\tools\validate_all.ps1

It compiles shaders, runs public contract tests, checks generated API docs, tests descriptor and error behavior, builds every D3D11 sample, runs every D3D11 sample for a few frames, and finishes with `git diff --check`.

That is heavier than a smoke test, but it matches the risk. Graphics code fails in small mismatches: one wrong view kind, one stale handle, one missing uniform, one resource used for read and write in the same pass. The validation gate makes those contracts repeatable.

12 What is intentionally missing

The missing pieces are as important as the working ones.

The current contract leaves out Vulkan runtime, web and mobile, multiwindow swapchains, automatic GPU lifetime management, bindless resources, descriptor arrays, graphics pass storage writes, async shader compilation, KTX2 or Basis texture assets, materials, and renderer systems.

Some of those will come later. Some may stay out of `gfx` forever. The useful rule is simple: add the feature when a real renderer or sample exposes the API question it solves.

13 The next design pressure

The first post v0.1 design pass should focus on binding layouts and binding groups generated from Slang reflection. That is the WebGPU idea most likely to help Ape without turning the API into WebGPU. The framework already has flat transient `gfx.Bindings`, which is great for samples. A renderer will eventually want reusable groups for frame data, material resources, and draw data.

The trick is keeping both paths. Simple code should stay simple. Engine code should get stronger contracts when it needs them.

That is the direction worth keeping: small public surface, explicit ownership, Slang as the source of truth for shader contracts, D3D11 as the first proof, Vulkan as the later stress test, and validation close enough to the API that bad usage fails early.

14 Repo
