Background
I already know Swift, and recently began learning Metal (on macOS). The tutorials I found online were good for learning Metal, but they all tend to use NSView and friends, while I'd prefer to use SwiftUI.
The approach I found works well enough, but after watching Paul Hudson's tutorials on YouTube (in particular "SwiftUI + Metal – Create special effects by building your own shaders"), I now suspect my approach may be obsolete, and that I should always use the API Paul demonstrates in that video.
In the video, Paul calls modifiers like colorEffect and distortionEffect on a SwiftUI View, passing shaders such as ShaderLibrary.passthrough().
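For reference, that modifier-based API looks roughly like this (a minimal sketch, assuming macOS 14 / iOS 17 or later; the `passthrough` name assumes a matching `[[stitchable]]` function exists in the app's default Metal shader library):

```swift
import SwiftUI

struct EffectView: View {
    var body: some View {
        // colorEffect replaces each pixel's color with whatever the
        // named shader function returns; a passthrough shader would
        // return the color unchanged.
        Image(systemName: "star.fill")
            .colorEffect(ShaderLibrary.passthrough())
    }
}
```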
My approach is much more involved: I define a struct conforming to NSViewRepresentable that wraps an MTKView, with a coordinator conforming to MTKViewDelegate whose draw(in:) method creates a command encoder, encodes a render pass, and presents a drawable via a command buffer, etc.:
Was my approach made obsolete by the API Paul is using, or is that API only meant to apply filters to views?
Objective
I'm ultimately looking to make a 2D video game, with graphics like these (from ShenZhen I/O):
I'd like to write a shader that renders the bulk of the graphics as a classic tiled background, with separate shader programs for the screens on the in-game displays, the monospaced text on the programmable chips, and so on.
I could easily write each of the shaders I need as a standalone shader, but I cannot figure out the best way to combine them (compositing the output of one shader program onto the output of another) to produce the final image.
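For what it's worth, here is the pattern I've been considering (a sketch only; the pipeline states and geometry are assumed to exist already): render each layer into an offscreen MTLTexture with its own render pass, then composite those textures back-to-front into the drawable with alpha blending enabled.

```swift
import MetalKit

// 1. An offscreen texture a layer can render into and a later pass can sample.
func makeLayerTexture(device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
    desc.usage = [.renderTarget, .shaderRead]
    return device.makeTexture(descriptor: desc)
}

// 2. Run one shader program's render pass into that texture.
func renderLayer(into texture: MTLTexture,
                 commandBuffer: MTLCommandBuffer,
                 pipeline: MTLRenderPipelineState) {
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = texture
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .store
    pass.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
    guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass) else { return }
    encoder.setRenderPipelineState(pipeline)
    // ... draw this layer's geometry (tiles, display quads, text quads) ...
    encoder.endEncoding()
}

// 3. The composite pipeline draws each layer texture as a quad over the
//    previous ones, with source-over blending:
func makeCompositeDescriptor() -> MTLRenderPipelineDescriptor {
    let desc = MTLRenderPipelineDescriptor()
    desc.colorAttachments[0].pixelFormat = .bgra8Unorm
    desc.colorAttachments[0].isBlendingEnabled = true
    desc.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
    desc.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
    desc.colorAttachments[0].sourceAlphaBlendFactor = .one
    desc.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
    return desc
}
```

In the final pass targeting the view's currentDrawable, I'd bind each layer texture with setFragmentTexture(_:index:) and draw one quad per layer, back to front. Is something like this the right idea, or is there a better-established approach?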
I'd be very grateful for any advice anyone has on how to combine pixel shaders to composite 2D graphics. Thank you for taking the time.