Late Summer 2017 Conferences and Availability

A little over two weeks ago I finally was able to submit the final chapter for the rough draft of my book. I started the book back in October and it’s been a real trip. One thing I have not been able to do during the time I have been writing the book is have a stable full time job. Book writing is a full time job in and of itself, but it sadly doesn’t pay super well. One of my goals in the next few months is to line up a job so that I can start digging myself out of the hole I’m in.

I have two months to line something up. I am going to be speaking at a lot of conferences and doing a lot of traveling over the next two months. If you’re interested in seeing me, here are some of your options:

  • That Conference: That Conference is a spin off of Code Mash. It is a multi-platform conference that takes place at the Kalahari Resort in Wisconsin Dells. My talk on graphics programming will be on August 8th, which also happens to be the same day the final editing pass on all of my chapters is due. Busy times.
  • 360iDev: After missing the conference last year due to various work commitments, I am looking forward to coming back and seeing all my old indie friends. I will be doing a similar talk on graphics to the one I did at That Conference, but more tailored for iOS. That will be on August 16th in Denver, CO.
  • iOSDevUK: I get to take my second trip across the pond, both this year and in general, at the beginning of September. This conference is in Aberystwyth, Wales, and I will be presenting a two-hour pre-conference workshop on ARKit on Monday, September 4th.
  • Strangeloop: My final conference will be another multi-platform conference, but this time about cutting edge technology. I will be giving a talk on GPGPU programming on the iPhone using Metal and will try to talk a bit about CoreML. Strangeloop is in St. Louis, MO.

The book is set to release on December 4, 2017. I am working on the sample code that will accompany the book. My focus in writing the book was to provide more conceptual information about how Metal can be used rather than just cataloging the API. One of my frustrations in trying to learn OpenGL was the focus on the API with the assumption that everyone knows what a texture is and what Euler angles are. It is my intention that anyone buying the book use the sample code I am creating as canon since both Metal and Swift change so rapidly. I will maintain it and keep it up to date and I hope to add to it as new features become available.

I feel incredibly lucky to have had a chance to write this book. From the moment Metal was introduced in 2014 I felt like it was my thing. I worried I waited too long to get involved with it, but it seems like it’s been rather difficult for people to approach it due to the vast amounts of other concepts one must be familiar with before one can use Metal. I am hoping that this book helps open Metal up to other iOS developers.

I am planning my next steps right now. Beyond just finding a job and getting a paycheck, I have a few goals over the next few years that I would dearly love to fulfill. I will be sure to post more about them when they become more tangible. So far over the last ten years things have simply worked themselves out. I am hoping that this streak continues and that I know my next step when I see it. Until then, I am going to focus on the tasks ahead of me and do my best.

The Metal Programming Guide Pre-Order

After many long months of work, “The Metal Programming Guide” is available for pre-order. Many people have been asking me questions and here are the answers to the most frequent ones:

  • The book is in Swift.
  • I don’t know if this will be available in an eBook format. I would be greatly surprised if it wasn’t. Every other book in the Red Book series has a Kindle or a PDF version available. If the book is available as an eBook, I believe it will be accessible.
  • I will be taking into account what happens at WWDC. As of today, the rough draft of the book is 75% complete. That translates to 15 chapters completed and five to go. I have a placeholder chapter for whatever new and shiny thing may be introduced at WWDC.
  • There is going to be sample code. I had some really tight writing deadlines and it was not possible for me to write the code concurrently with writing the book. I intend to spend the time between when the book is completed and when it’s released to ensure there is good informative sample code. I hope to continue to add to this sample code and maintain it as Swift and Metal evolve.
  • The overall composition of the book is about 50% graphics and 50% GPGPU programming. There are a few chapters in the graphics section that you will need to read if you’re only interested in GPGPU programming. Those are detailed at the beginning of that section.

One thing that I have learned while working on this book is that it’s impossible for this to be everything to all people. There are chapters in this book that have entire books dedicated to them. It wasn’t possible to write all of the implementation details of complex operations such as facial detection. My hope with the book is to basically prime the mental pump. I hope that if you encounter a topic you find interesting that I am giving you just enough information about it that you can somewhat wrap your head around it and seek out dedicated resources for it.

One of the biggest questions I have gotten over the last year is “Why should I know Metal?” I am hoping that my conceptual chapters do a good job of answering that question for you.

I’m incredibly excited for this book. This is the book I have wanted to write since WWDC 2014. I thought that I waited too long and I missed out on being the person to write this book. I feel incredibly grateful for having the opportunity to take a year and really dive deeply into Metal. I have known since I started programming that I wanted to learn and understand graphics, and now I get to take that knowledge and apply it to things like data analysis and machine learning.

I loved math as a child. I felt like it was the language that helps us understand the Universe. I strayed away from it as a young adult because I had a bad experience with it and figured I was stupid and that it wasn’t for me. Bashing my head against vectors and matrices and seeing how you can use them to do amazing things has been a mental renaissance for me.

In life you don’t get a lot of opportunities to work on something you’re passionate about. I have been fortunate in my career to have several of these opportunities and I cherish every one of them.

Primitive Drawing and Assembly in Metal

One of the main reasons I got interested in iOS was because I wanted to learn graphics and audio programming.

I got really interested in OpenGL, but after a year of trying to learn it I wasn’t making any progress.

Anyone who has tried to learn OpenGL has gone through the same frustrating experience I have. You find a tutorial on Ray Wenderlich and you write a bunch of code and at the end you have a spinning multicolored cube. That’s really cool!!

But then you realize that you have absolutely no idea how to do anything else.

For a really long time, I thought that I had to enter all of my vertices by hand because every tutorial I saw had you write your vertices by hand. My brain was scrambled by the idea of trying to create a large, complex 3D model by hand coding the vertices. It’s hard enough coding things with auto complete, what if you miss a value?! Do you have to keep building it and keeping track of the vertices and try to figure out which one needs to be moved when there are a thousand of them?!

Eventually I was told that you import a pattern file into your application, but until the release of Model/IO in iOS 9, you had to write your own parser from scratch to import a file from Blender or Maya.

So I faced an incredible amount of frustration trying to figure out how you get vertices into an application and how they work together to create something. I kept being told that those projects of the spinning cube introduce you to all the things you need to know in order to make OpenGL work, but it was something that was not intuitive.

I am seeing people go through similar frustration while trying to learn Metal. I am hoping to do a series of blog posts over the next few months about aspects of Metal and 3D graphics programming that I don’t feel I have seen explained very well in other places. This stuff is complicated and difficult to explain, so no judgement on anyone who produces technical materials on Metal or OpenGL.

Star Project

I decided to do a project that was slightly more complicated than a triangle or a cube but not as complicated as a 3D pug model. I want to figure out how to explain this stuff in a way that the reader can extrapolate and scale the complexity while still understanding how the fundamental concepts work.

I decided to try and draw a two-dimensional star. It’s slightly more complicated than a triangle but it’s still simple enough for a person to sit down and conceptualize.

Yes, I screwed up the vertex labels at the bottom.


I really hoped that I could find a simple CAD program to generate a pattern file for my 2D star, but after multiple frustrating conversations with various people, I decided to bite the bullet and just plot out the vertices by hand.

The Metal rendering space is slightly different than what one expects coming from something like Core Graphics. I am used to the idea that the phone has a normalized coordinate space where the height is one unit and the width is one unit.

Metal still utilizes a normalized coordinate space, but it’s two units high and two units wide, so the center of the screen is coordinate (0,0,0). So the upper left corner of the screen is at coordinate (-1, 1, 0). The lower right corner is (1, -1, 0). It’s not complicated to understand, but it’s slightly counterintuitive for someone coming from the idea that everything is a value between 0 and 1.
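
If it helps to see that mapping written out, here is a tiny, hypothetical helper that converts a point from the familiar 0-to-1 space into Metal’s clip space. It is not part of Metal or of this project; it just restates the paragraph above as arithmetic.

/// Convert a point from a 0-to-1 normalized space (origin at the top left,
/// the way you might think of it coming from Core Graphics) into Metal's clip
/// space, which runs from -1 to 1 on each axis with (0, 0, 0) in the center.
/// Hypothetical helper for illustration only.
func metalClipSpacePoint(fromUnitX x: Float, unitY y: Float) -> (x: Float, y: Float, z: Float) {
    let clipX = x * 2.0 - 1.0   // 0...1 becomes -1...1, left to right
    let clipY = 1.0 - y * 2.0   // flip so the top of the screen is +1
    return (clipX, clipY, 0.0)
}

// metalClipSpacePoint(fromUnitX: 0.0, unitY: 0.0) is (-1.0, 1.0, 0.0), the upper left corner.
// metalClipSpacePoint(fromUnitX: 1.0, unitY: 1.0) is (1.0, -1.0, 0.0), the lower right corner.
// metalClipSpacePoint(fromUnitX: 0.5, unitY: 0.5) is (0.0, 0.0, 0.0), the center of the screen.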

I created my coordinate space using graph paper. I made each square on the paper worth 0.2 units and made the space ten squares by ten squares.

I understand that because the phone screen is not perfectly square that the star is not going to look like this when it’s finally rendered. One of the things I want to do later is figure out how to constrain the drawing area to be square so that the star renders properly, but that’s a task for a later time.

Writing out vertices by hand like a savage.


In Metal and OpenGL, shapes are composed of triangles. All the big scary 3D models that make up a Pixar movie are composed of meshes of triangles. Everything can be broken down into triangles.

So let’s think about a star. It’s obvious that the points are composed of triangles, but what about the middle? The middle is a pentagon. This pentagon can be composed of five triangles by drawing out from the center to each of the vertices between the points.

So if you think about how to describe the star to the renderer, you are going to describe ten triangles using eleven vertices. There are five vertices at the points, the five between the points, and finally one in the middle.
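
For what it’s worth, those eleven vertices don’t have to be plotted by hand at all; they can be generated with a bit of trigonometry. The sketch below is a hypothetical alternative to the hand-entered arrays further down, not code from the repo; the radii are guesses, and the four-floats-per-vertex layout is just matched to the data in this post.

import Foundation

/// Generate position data for a five-pointed star as ten triangles, four
/// floats (x, y, z, w) per vertex. Hypothetical sketch for illustration only.
func starVertexData(outerRadius: Float = 0.6, innerRadius: Float = 0.25) -> [Float] {
    let points = 5
    let center: [Float] = [0, 0, 0, 1]
    var outer: [[Float]] = []
    var inner: [[Float]] = []

    for i in 0..<points {
        // Outer points start straight up; each inner point sits halfway between two outer points.
        let outerAngle = Float.pi / 2 + Float(i) * 2 * Float.pi / Float(points)
        let innerAngle = outerAngle + Float.pi / Float(points)
        outer.append([outerRadius * cosf(outerAngle), outerRadius * sinf(outerAngle), 0, 1])
        inner.append([innerRadius * cosf(innerAngle), innerRadius * sinf(innerAngle), 0, 1])
    }

    var data: [Float] = []
    for i in 0..<points {
        let previousInner = inner[(i + points - 1) % points]
        // One slice of the central pentagon: the center plus the two inner vertices flanking this point.
        data += center + previousInner + inner[i]
        // One star point: the outer vertex plus the same two inner vertices.
        data += outer[i] + previousInner + inner[i]
    }
    return data
}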

Metal Primitives

When you package and pass your vertices to the vertex buffer, you need to describe to the vertex buffer what type of shape it’s drawing. I know I just went off on how everything can be broken down into triangles, but there are a few flavors of shapes you can draw with Metal.

Metal has an enum of Metal Primitive Types. There are five different primitives available to you:

  • Point: Rasterizes a point at each vertex. You have to define a point size in the vertex shader.
  • Line: Rasterizes a line between each pair of vertices. These lines are separate and not connected. If there is an odd number of vertices, the last vertex is ignored.
  • Line Strip: Rasterizes a line between a bunch of adjacent vertices, resulting in a connected line.
  • Triangle: For every separate set of three vertices, rasterize a triangle. If the number of vertices is not a multiple of three, the extra one or two vertices are ignored.
  • Triangle Strip: For every three adjacent vertices, rasterize a triangle.
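
For reference, here is how those five cases look from Swift; the draw call later in this post uses .triangle, and swapping in a different case is the only change needed to rasterize the same vertex data as points or lines instead. (The lowercase case names assume the Swift 3-style API, which matches the rest of the code in this post.)

import Metal

// The five MTLPrimitiveType cases, with what each one does to your vertex list.
let primitiveTypes: [MTLPrimitiveType] = [
    .point,         // one point per vertex (the point size is set in the vertex shader)
    .line,          // a separate segment for each pair of vertices
    .lineStrip,     // one connected polyline through all of the vertices
    .triangle,      // a separate triangle for each set of three vertices
    .triangleStrip  // a triangle for every three adjacent vertices
]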

So the easiest way to think about how to describe the star to the vertex shader is to hand enter ten sets of three vertices that describe a triangle.

let vertexData:[Float] =
    [
        // Internal Triangles
        0.0, 0.0, 0.0, 1.0,
        -0.2, 0.2, 0.0, 1.0,
        0.2, 0.2, 0.0, 1.0,
        
        0.0, 0.0, 0.0, 1.0,
        0.2, 0.2, 0.0, 1.0,
        0.3, 0.0, 0.0, 1.0,
        
        0.0, 0.0, 0.0, 1.0,
        0.3, 0.0, 0.0, 1.0,
        0.0, -0.2, 0.0, 1.0,
        
        0.0, 0.0, 0.0, 1.0,
        0.0, -0.2, 0.0, 1.0,
        -0.3, 0.0, 0.0, 1.0,
        
        0.0, 0.0, 0.0, 1.0,
        -0.3, 0.0, 0.0, 1.0,
        -0.2, 0.2, 0.0, 1.0,
        
        // External Triangles
        0.0, 0.6, 0.0, 1.0,
        -0.2, 0.2, 0.0, 1.0,
        0.2, 0.2, 0.0, 1.0,
        
        0.6, 0.2, 0.0, 1.0,
        0.2, 0.2, 0.0, 1.0,
        0.3, 0.0, 0.0, 1.0,
        
        0.6, -0.4, 0.0, 1.0,
        0.0, -0.2, 0.0, 1.0,
        0.3, 0.0, 0.0, 1.0,
        
        -0.6, -0.4, 0.0, 1.0,
        0.0, -0.2, 0.0, 1.0,
        -0.3, 0.0, 0.0, 1.0,
        
        -0.6, 0.2, 0.0, 1.0,
        -0.2, 0.2, 0.0, 1.0,
        -0.3, 0.0, 0.0, 1.0
]

I want my star to be kind of flashy. I would like to set a pseudo-radial gradient on the star where the tips are red, but the middle is white. So I need to set up another array of floats describing the color data as it correlates to the positional data.

let vertexColorData:[Float] =
    [
        // Internal Triangles
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        
        // External Triangles
        1.0, 0.0, 0.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        
        1.0, 0.0, 0.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        
        1.0, 0.0, 0.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        
        1.0, 0.0, 0.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        
        1.0, 0.0, 0.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0
]

Over in the view controller, you need to create a buffer to hold the position data and one for the color data for your vertices:

var vertexBuffer: MTLBuffer! = nil
var vertexColorBuffer: MTLBuffer! = nil

Now you need to connect those buffers to the arrays of vertex data. The buffers don’t know how much data they need to store, so you have to calculate how large these vertex arrays are so the buffers know how much space they need to allocate for the vertex data.

let dataSize = vertexData.count * MemoryLayout<Float>.size // number of Floats times the size of a Float in bytes
vertexBuffer = device.makeBuffer(bytes: vertexData, length: dataSize, options: [])
vertexBuffer.label = "vertices"

vertexColorBuffer = device.makeBuffer(bytes: vertexColorData, length: dataSize, options: [])
vertexColorBuffer.label = "colors"

Since the vertex position and color arrays are the same size, you can reuse the data size variable for both buffers.

At the end of the process, this data is scheduled by the render encoder to be sent to the vertex shader and be processed by the GPU.

renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, at: 0)
renderEncoder.setVertexBuffer(vertexColorBuffer, offset:0 , at: 1)
renderEncoder.drawPrimitives(type: .triangle,
                                   vertexStart: 0,
                                   vertexCount: vertexData.count / 4) // each vertex is four floats

Build and run the app on a phone and you get a nice star! Huzzah!

Wrap Up

I created a GitHub repo for this project here. Honestly, most of this project was basically me just using what comes in the template but changing the vertex data.

Looking this over, it seems kind of inefficient. The middle set of triangles feel like they could be a triangle strip. I would like to add a nice outline to the star to make it look nice and neat. I also would like the star to not be morphed by the screen.
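
I have not solved that last problem yet, but the rough idea I have in mind: since clip space always runs from -1 to 1 regardless of the screen’s shape, one hypothetical fix is to scale the y value of every vertex by the drawable’s width-to-height ratio before filling the buffer. Here is a sketch of that idea; it is not code from the repo, and it assumes the four-floats-per-vertex layout used above.

/// Scale the y component of every vertex so a shape that is "square" in clip
/// space stays square on a non-square screen. Hypothetical sketch only.
func aspectCorrected(_ vertexData: [Float], drawableWidth: Float, drawableHeight: Float) -> [Float] {
    let scale = drawableWidth / drawableHeight   // less than 1.0 in portrait
    var corrected = vertexData
    for i in stride(from: 1, to: corrected.count, by: 4) {   // index 1 of each vertex is y
        corrected[i] *= scale
    }
    return corrected
}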

I am planning to update this project periodically to make it more efficient and customizable.

At the very least, I hope this adds some understanding to how Metal breaks down larger shapes into triangles and how it’s able to go through and build the shapes back up again.

Beware of Publishers Bearing “Free” Gifts

I buy a lot of programming books. Like, a lot a lot. If you’re a publisher and producing any books on OpenGL, VR, robotics, etc… I am probably giving you money.

One place that I buy a lot of programming books is Packt Publishing. They were one of the first publishers to have books out on the Unreal 4 engine. They have a lot of graphics and game programming books and their prices are fairly reasonable.

Back in May they had a deal on a set of five books on game development. Two of the five were books I was planning to buy anyway, so I bought the set. I noticed at the end of my invoice that they gave me a 10-day free trial of their online library of books.

One of these things is not like the other...


I am already a Safari Books Online subscriber and have access to the Packt library, so I just ignored this add-on to my purchase.

(Yes, I do go out and buy books I am paying to have access to through Safari. I know I could save a lot of money by not buying a bunch of programming books I probably won’t read, especially when I am paying to have access to them. Don’t judge me.)

A week later I got an email from Packt telling me my trial was almost over and they hoped I was enjoying their books. I was kind of miffed. I never initialized the trial. I have gotten free trial offers for Safari that I have never been able to use because I wasn’t a new member, but they always had a code that you needed to use in order to start the trial. I didn’t know that the trial would start automatically.

I had somewhat forgotten about this until I got an email yesterday telling me that Packt had charged me $12.99. I went to check on what the charge was for and guess what? It was for a monthly subscription to their online library.

So, they signed me up for a service I didn’t want, gave it to me without my permission, and because I was unaware that they were doing this they started charging me for something I never authorized.

I was incredibly annoyed. I feel this is a really sleazy way to do business. I cancelled the subscription immediately and wrote an email to complain. Here is the response I got:
(Screenshot of the email response from Packt.)

They basically tell me that if I don’t want their subscription I have to cancel it. I grok that. Already done it.

There is no acknowledgement that what they did was underhanded or sneaky. Their response basically treats me like I am an idiot who didn’t know what I was doing.

I know that most services like Amazon and Apple Music and whatever offer you a free trial period after which your credit card gets charged. They hope that you forget that you signed up for a free trial period and they can charge you because you forgot to cancel when the trial was over. That’s kind of sneaky, but it’s still something where you are choosing to opt in. You are saying “I want this and I agree to pay money for this if I forget to cancel my subscription.” I have avoided free trials of things for this very reason.

It is not okay to “sell” someone something they didn’t choose and then charge them for something they didn’t opt into.

There is a bit of shady behavior on this site. They recently released a $50 OpenGL book that is so out of date that it does not mention shaders, which have been around since 2004. People have complained and gotten a “We’re sorry, we’ll pass your criticism on to the author.” This book is still available and does nothing to warn the reader about how out of date it is. Good publishers like the Pragmatic Programmers remove out of date books all the time.

Their royalty structure also leaves much to be desired. The 16% royalty is not necessarily bad, but considering how many times a year they sell every book on their site for $5, I find it incomprehensible that anyone working on a book ever outearns their advance.

It’s really too bad. They have a lot of books on rather obscure and esoteric topics that most people don’t cover. They have one of the few books on the OpenGL Shading Language on the market. As far as I know they are the only publisher producing any books on LLVM. I would like to think there are better ways of producing a broad range of interesting content without screwing over both the authors and the customers.

Streaming WWDC 2016

I have never had the privilege of attending WWDC. Most years (including this one) I never bothered to apply to the lottery because I couldn’t afford to go. The one year I could afford to go, I didn’t win a ticket and I decided I would rather have the money as a buffer than go out to WWDC. This was the correct decision.

I attend a lot of conferences. I speak at a lot of conferences. Unfortunately, I have had some difficulty actually attending sessions at conferences. I have panic attacks when I am trapped in a room full of people and I can’t get up and walk around. This was one reason I was never super disappointed about not going to WWDC the last few years. The idea of being stuck in a room for a whole week makes me feel like curling in a ball and crying. I go to conferences to network and drink with my friends. Now I am at the point where it’s just networking since I gave up drinking.

One thing I had forgotten about was discovering new things by attending sessions I hadn’t thought to go to. When I went to my first CocoaConf, I encountered a lot of interesting things because I wanted to watch Jonathan Penn and Josh Smith present.

When Swift was introduced two years ago, most of the conference sessions revolved around talking about Swift. I like Swift, it’s a neat language, but I am sick of talking about it. I am tired of hearing people talk about side effects and protocols and immutable state. I miss the first few years I was an iOS developer when people talked about frameworks and weird little nooks and crannies of the Cocoa architecture.

Taken together, this has created something of a perfect storm where I got burned out on iOS development. I got sick of talking to people about it because it always boiled down to Swift and arguing about code purity and a bunch of other bullshit.

I saw the Keynote this year and had absolutely no enthusiasm for anything. I was irritated and cranky and didn’t want to deal with anything. But I noticed that this year Apple decided to stream most of the sessions live. The sessions were always available online later, and last year they started showing select sessions. I watched the Swift ones because it was for my job and Swift was still new and exciting. But I rarely watch the sessions afterward, because when I do, I sit there and pause every few minutes to try to process the vast amount of information being presented. There is a massive backlog of sessions I think would be nice to watch, but I never get around to watching them. I did not think I would do anything this year.

I was wrong.

Streaming the sessions live has completely changed my life this week.

I work from home and so I just kind of threw the live stream on while I worked on stuff. I have it on in the background. I can’t pause the live stream, so I am not poring over every second of each video minutely. I am getting an overview of what they are talking about so I can go and research things later. I also have a team of people on various Slack channels who are watching it with me that I can chat with about the things we find new and exciting.

There were five whole sessions on Metal this year. The last two years I only got through the first Metal video because I felt like I didn’t understand it well enough to move on to the next video. This year, since they were just on, I could passively leave it on and get through all the videos. If this was a normal year, I would not have encountered the thing that has excited me the most this year, which is doing neural networks in Metal. That was introduced in “What’s New in Metal Part 2,” which was the fourth Metal video streamed. I did not need all the context from the first three videos to get excited about the new stuff in Metal.

I got to watch all the videos about GameplayKit, Photos, SpriteKit, etc… All of these technologies that I have been interested in but in a passive way were all just there for me to listen in on. I got introduced to so many things I didn’t know about in obscure frameworks that don’t get a lot of love because most people need to pay the bills and so they don’t do sessions on SceneKit.

This is what it was like at the beginning when I started going to conferences. I would discover so many new things that I would go home excited to get working on something. I haven’t felt this way for the last two years.

I worked for Brad Larson for a year. He told me that the reason he got into making Molecules and got into OpenGL and doing GPUImage was because he had a free period at WWDC and just decided, on a whim, to watch a session on OpenGL. It’s crazy to me about how things you do on a whim or by chance can completely change your life. By not being exposed to these sessions over the last few years, I have been cutting myself off from these chance encounters to find something truly special that I can learn and make my own.

It has been a great gift to get to participate with WWDC from home. Being able to get up and walk around during a session and cuddle with Delia while listening to people give their talks has helped me tremendously. I can talk to people on Slack from all over the world about the sessions as they happen so we can all be excited together. I know that people get something out of being there and getting to talk to the engineers, but for someone with mental health issues that prevent them from being able to be comfortable with massively large amounts of people, this has been a godsend.

I am planning in the future to go back and watch all the videos from previous years that I never watched because they took too long. I can have them on in the background while I work on other things. I can pick out the parts that interest me and look into them further.

For the first time in a really long time, I am excited about iOS development. Thank you Apple for giving that back to me.

How to Add A New Shader to GPUImage

Last year I wrote this article about image processing shaders.

I explain how several shaders in GPUImage work and what the point of a vertex and a fragment shader is. But since that article came out, I have realized there is a lot more to this than I was able to go into there.

In spite of the fact that I worked for Brad Larson for a year, I never wrote a shader for GPUImage. When I started trying to figure it out, it became incredibly intimidating. There was a lot of stuff going on in the framework that I was less than familiar with. I also knew there was a lot of stuff he and I spoke about that was not really made available to the general public in the form of documentation.

So, in this blog post, I would like to extend that post somewhat by talking specifically about GPUImage rather than shaders in the abstract. I want to talk about the internal workings of GPUImage to the point that you understand what components you need to connect your shader to the rest of the framework. I want to talk about some general conventions used by Brad. Lastly, I would like to talk about how to construct complex shaders by combining several more primitive shaders.

I am sending this blog post to Brad for tech reviewing to ensure that everything in this post is factually correct and to ensure that I am not propagating any false information. If there is something I am missing here, I would appreciate having someone reach out to me to ask about it so I can include it here.

Fork GPUImage

If you want to contribute to GPUImage, you must create a fork of the repository. I am actually using this post to recreate all of my work because I didn’t fork the repo, tried to create a branch, and it was turtles all the way down.

I had a fork that was two years old and 99 commits behind the main branch. I am only using my forks to create projects to contribute to the framework, so I destroyed my old fork and created a new one. If you’re better at Git than I am and can keep pulling in the changes, then awesome. My Git-fu is weak so I did this, which is probably making a bunch of people sob and scream “WHY?!?”

Adding Your New Filter to the Framework

There are two different versions of the GPUImage framework: Mac and iOS. They both share a lot of the same code files, but you need to add the new filter to both manually.

My Solarize filter is one that modifies the color, so I dragged and dropped my filter into the group of color modifying shaders. The source files go into the Source folder in the Framework folder of the repository you clone from GitHub.

Where the shader files go


While you are looking at your .h file, you need to make sure that it is set to Public rather than Project. If you don’t do that, it won’t be visible for the next step in this process.

This is the wrong privacy setting for the header file.


This is the correct privacy setting


Next, you need to go to the Build Phases of the GPUImageFramework target of your project. You need to add the .h file to the Headers in the framework and the .m file to the Compile Sources.

I had some trouble doing this. When I would click on the “+” button at the bottom of the filter list, I would only be able to find the opposite file. So for example, when I tried to add my header to the Headers list, I would only be able to find the solarization implementation file. The way I got around this was to drag the file from the list and drop it into the build phases target.

Drag the file from the left to the correct build phase on the right.


Lastly, you need to add your filter to the GPUImage.h header file so that the framework has access to your shader. (By the way, wasn’t C/Objective-C a pain??)

The GPUImage.h is different for the Mac and the iOS project, so if you want your filter to be available on both platforms, you need to remember to add it to both GPUImage.h header files.

Adding Your New Filter to the Filter Showcase

First off, I want to warn you to only use the Swift version of the Filter Showcase. I had a lot of stuff that I was adding in here to try and get the Objective-C version working. That was the main reason I split this into two blog posts. It is a pain in the ass and it was made far simpler in the Swift version of the filter showcase.

When you open the Filter Showcase, you will be asked if you want to update to the recommended project settings. Don’t do this!

The only place you need to make a change is in the file FilterOperations.swift. This file is shared between both the iOS and the Mac Filter Showcase apps, so you only have to change this once. Huzzah!

You need to add your new filter to the filterOperations array. There are a few things you need to set in your initialization:

FilterOperation (
        listName:"Solarize",
        titleName:"Solarize",
        sliderConfiguration:.Enabled(minimumValue:0.0, maximumValue:1.0, initialValue:0.5),
        sliderUpdateCallback: {(filter, sliderValue) in
            filter.threshold = sliderValue
        },
        filterOperationType:.SingleInput
    ),

I looked at the parameters used in the GPUImageLuminanceThresholdFilter initialization because my shader was primarily based on this code. As such, your filters might have different parameters than mine did. Look around at other instances to get an idea of how your filter should be initialized.

When you build and run the Filter Showcase, you might encounter an issue where the project builds but doesn’t run. You might see this pop-up:

Update the what now??


If this happens, don’t update anything. Xcode is trying to run the wrong scheme. Look up at the top of Xcode near the “Play” button and check the scheme. It should say Filter Showcase. If it says GPUImage instead, then you need to change the scheme and it should work okay.

Wrong schemes are full of fail.


GPUImage Style Guide

I am going to go through my Solarize filter to make note of some style things that you should keep in mind when you are writing your own filters to try and keep things consistent.

The entire implementation file for the shader is represented in this section, but I am showing a chunk at a time and explaining each part.

#import "GPUImageSolarizeFilter.h"

As with all C programs, you need to import the header for your shader in the implementation file.

#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageSolarizeFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 
 uniform sampler2D inputImageTexture;
 uniform highp float threshold;
 
 const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);
 
 void main()
 {
     highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     highp float luminance = dot(textureColor.rgb, W);
     highp float thresholdResult = step(luminance, threshold);
     highp vec3 finalColor = abs(thresholdResult - textureColor.rgb);
     
     gl_FragColor = vec4(finalColor, textureColor.w);
 }
);

Since GPUImage is cross platform, you need to check whether your shader is running on an iOS device or not. Since iOS uses OpenGL ES and not plain OpenGL, there are slight differences that you need to take into consideration.

Notice how a bunch of the variables are described as highp. In OpenGL ES, since you have more limited processing power, if you don’t need a lot of precision, you can optimize your code by lowering the necessary precision. You do not do this in the Mac version of the shader:

#else
NSString *const kGPUImageSolarizeFragmentShaderString = SHADER_STRING
(
 varying vec2 textureCoordinate;
 
 uniform sampler2D inputImageTexture;
 uniform float threshold;
 
 const vec3 W = vec3(0.2125, 0.7154, 0.0721);
 
 void main()
 {
     vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     float luminance = dot(textureColor.rgb, W);
     float thresholdResult = step(luminance, threshold);
     vec3 finalColor = abs(thresholdResult - textureColor.rgb);

     gl_FragColor = vec4(vec3(finalColor), textureColor.w);
 }
);
#endif

One last thing I want to point out is the use of NSString *const kGPUImageSolarizeFragmentShaderString = SHADER_STRING. It is a convention to take the name of your shader, add a “k” at the beginning (“k” for “constant”, get it?), and follow it with “FragmentShaderString.” Use this convention when you write your own shaders.

In each shader that you create for GPUImage, you need to remember to include the textureCoordinate and the inputImageTexture. These two variables receive the image and the exact pixel you are going to process, so make sure you have them in each of your shaders.

Now we move on to the implementation:

@implementation GPUImageSolarizeFilter;

@synthesize threshold = _threshold;

I have a public facing variable, threshold, that I need to get access to in order to implement the shader, so it needs to be synthesized. Again, this is a throwback from C/Objective-C that you might not be familiar with if you just started with Swift.

#pragma mark -
#pragma mark Initialization

- (id)init;
{
    if (!(self = [super initWithFragmentShaderFromString:kGPUImageSolarizeFragmentShaderString]))
    {
        return nil;
    }
    
    thresholdUniform = [filterProgram uniformIndex:@"threshold"];
    self.threshold = 0.5;
    
    return self;
}

Every filter’s initializer will look similar to this. They all attempt to initialize the filter with the fragment shader string you created back at the beginning of the if/else statements.

If you have a public facing variable like threshold is here, you need to set that up before you finish your initialization.

#pragma mark -
#pragma mark Accessors

- (void)setThreshold:(CGFloat)newValue;
{
    _threshold = newValue;
    
    [self setFloat:_threshold forUniform:thresholdUniform program:filterProgram];
}


@end

Lastly, if you have a public facing variable, like we do with the threshold, you need an accessor for it.

Copy/Paste Coding

I normally really do not condone “Copy/Paste” coding, but I did want to mention that there is a lot of repetitive stuff in all of the shader programs. This shader only has like five lines of code that I changed between the Solarize filter and the Luminance Threshold filter.

Generally speaking, copying and pasting code is bad, but looking through a piece of code and figuring out what everything does while you are learning and being able to process how to do this on your own in the future isn’t always the worst thing in the world. I could have written my shader without having an understanding of how the two contributing filters worked.

So, use these as learning materials and there’s nothing wrong with looking at how they’re put together until you feel comfortable with the process on your own.

Wrapping Up

I know that trying to start something unfamiliar can be really intimidating. Especially if there are a bunch of esoteric steps that you’ve never done before and you don’t want to have to ask about it because you feel stupid.

I went back and forth with Brad a dozen times while writing this blog post and none of my questions were about the actual shader code. It was all about how to add this to the framework, why my shader wasn’t showing up in the header, etc…

This stuff is not intuitive if you haven’t done it before. This might be familiar to older Mac programmers who had to do this stuff all the time, but if you primarily work with Swift where everything is just there and you don’t have to add things to build phases, then this can be extraordinarily frustrating.

I hope that this is helpful to those who have wanted to contribute to GPUImage but got frustrated by the hoops that were necessary to jump through to get something working. I hope that this means that people have a resource besides Brad to get answers to common questions that aren’t really available on Stack Overflow.

How to Write a Custom Shader Using GPUImage

One of my goals over the last year or so was to do more coding and to specifically do more GLSL programming. You learn best by actually working on projects. I know that, but I have had some trouble making time to do things on my own.

I want to go over my thought process in figuring out a filter I wanted to write, how I was able to utilize the resources available to me, and to hopefully give you an idea about how you can do the same.

You can clone GPUImage here.

Solarize Image Filter

One thing I had trouble with in GPUImage is the fact that it is too comprehensive. I couldn’t think of an image filter that wasn’t in the framework already.

So I started thinking about Photoshop. I remembered there was a goofy filter in Photoshop called Solarize.

Since I knew that Brad was more concerned with things like edge detection and machine vision, I figured that it would not have occurred to him to include a purely artistic filter like that. Sure enough, there was no solarization filter. Jackpot!

After figuring out something that wasn’t there, the next question was how to create one. I initially was going to use this Photoshop tutorial as a jumping off point, but I wondered if there was a better way. I Googled “Solarize Effect Algorithm” and I found this computer science class web page that gave an agnostic description of how the effect works.

What is Solarization?

Solarization is an effect from analog photography. In photography, one important component of a photograph is its exposure time. Photographers used to purposely overexpose their photos to generate this effect. When a negative or a print is overexposed, parts of the image will invert their color. Black will become white, green will become red.

There is a threshold where any part that passes that threshold will receive the effect.

One of the things Brad told me about GPUImage was that many of the filters in GPUImage are composed of many smaller filters. It’s like building blocks. You have a base number of simple effects. These effects can be combined together to generate more and more complex effects.

Looking at the algorithm for solarization, I noticed it requires two things:

  • An adaptive threshold to determine what parts of the image receive the effect
  • An inversion effect on the pixels

I opened up GPUImage to see if there were already filters that do those things. Both of these functions already exist in GPUImage.

Now I needed to figure out how they work so that I could combine them into one, complex filter.

GPUImageColorInvertFilter

Since the color invert filter is the simpler of the two filters, I will be looking at this one first.

Since this is a straightforward filter that does one thing without any variables, there are no publicly facing properties in its header file.

Here is the code for the fragment shader that we see in the implementation file:

NSString *const kGPUImageInvertFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 
 uniform sampler2D inputImageTexture;
 
 void main()
 {
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    
    gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.w);
 }
);

These two lines will exist in every fragment shader in GPUImage:

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

These lines are the lines where you are bringing in the image that is going to be filtered and the specific pixel that the fragment shader is going to work on.

Let’s look at the rest of this:

void main()
{
   lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
   gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.w);
}

The first line is simply obtaining the specific RGBA values at the specific pixel that you are currently processing. We need that for the second line of code.

The gl_FragColor is the “return” statement for the fragment shader. We are returning what color the pixel will be. Texture colors are part of a normalized coordinate system where everything exists on a continuum between 0.0 and 1.0. In order to invert the color, we are subtracting the current color from 1.0. If your red value is 0.8, the inverted value would be (1.0 - 0.8), which equals 0.2.
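
Spelled out in Swift with a made-up color, the same inversion looks like this:

// Inverting one RGB color the way the shader's gl_FragColor line does. All of
// the values live between 0.0 and 1.0, so inversion is just 1.0 minus the value.
let original: [Float] = [0.8, 0.3, 0.1]     // a made-up reddish pixel
let inverted = original.map { 1.0 - $0 }    // [0.2, 0.7, 0.9]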

This is a simple and straightforward shader. All of the actual, shader-specific logic is in the gl_FragColor line where we are doing the logic to invert the colors.

Now let’s move on to the more complicated shader.

GPUImageLuminanceThresholdFilter

The Luminance Threshold Filter is interesting. It checks the amount of light in the frame and if it is above a certain threshold, the pixel will be white. If it’s below that threshold, the pixel will be black. How do we do this? Let’s find out.

The Luminance Threshold Filter, unlike the color inversion filter, has public facing properties. This means that the header file has some actual code in it that we need to be aware of. Since this one is interactive and depends upon input from the user, we need a way for the shader to interface with the rest of the code:

@interface GPUImageLuminanceThresholdFilter : GPUImageFilter
{
    GLint thresholdUniform;
}

/** Anything above this luminance will be white, and anything below black. Ranges from 0.0 to 1.0, with 0.5 as the default
 */
@property(readwrite, nonatomic) CGFloat threshold; 

@end

Our threshold property will receive input from a slider in the UI, which sets the value that the shader needs for its calculation.

NSString *const kGPUImageLuminanceThresholdFragmentShaderString = SHADER_STRING
( 
 varying highp vec2 textureCoordinate;
 
 uniform sampler2D inputImageTexture;
 uniform highp float threshold;
 
 const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);

 void main()
 {
     highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     highp float luminance = dot(textureColor.rgb, W);
     highp float thresholdResult = step(threshold, luminance);
     
     gl_FragColor = vec4(vec3(thresholdResult), textureColor.w);
 }
);

This has a few more components than the color inversion code. Let’s take a look at the new code.

uniform highp float threshold;
const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);

The first variable is the value we are receiving from the user for our threshold. We want this to be very accurate because we’re trying to determine whether a pixel should receive an effect or not.

The second line has what looks like a set of “magic numbers.” These numbers are a formula for determining luminance. There are explanations for it here and in the iPhone 3D programming book by Philip Rideout. We will use this constant to map over the current value of the fragment in the next bit of code.

void main()
{
    highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    highp float luminance = dot(textureColor.rgb, W);
    highp float thresholdResult = step(threshold, luminance);
     
    gl_FragColor = vec4(vec3(thresholdResult), textureColor.w);
}

The first line does the same thing as it did in the color inversion filter, so we can safely move on to the next line.

For the luminance variable, we’re encountering our first function that doesn’t exist in C: dot(). dot() takes each red, green, and blue property in our current pixel and multiplies it by the corresponding property in our “magic” formula. These are then added together to generate a float.
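
In Swift terms, with a made-up pixel value, the same calculation looks like this (dot3 here is just a hand-rolled stand-in for GLSL's dot(), not anything from GPUImage):

// Multiply each channel by its weight and add the products together into a
// single brightness value between 0.0 and 1.0.
func dot3(_ a: [Float], _ b: [Float]) -> Float {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

let weights: [Float]  = [0.2125, 0.7154, 0.0721]   // the luminance constants from the shader
let pixelRGB: [Float] = [0.9, 0.4, 0.1]            // a hypothetical orange-ish pixel
let luminance = dot3(pixelRGB, weights)
// 0.9 * 0.2125 + 0.4 * 0.7154 + 0.1 * 0.0721 is roughly 0.485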

I have tried hard to find a good explanation of why the dot product exists and what it does. This is the closest thing I can find. One of my goals with this series of blog posts is to take things like the dot product, where it’s easy to explain what something does but not why you are using it, and talk about what functionality it fulfills. For now, hopefully this is enough.

Next up, we have the threshold result. This is where we are doing our only conditional logic in the shader. If you recall, this shader makes a determination if each pixel should be white or black. That determination is being made here.

The step() function compares two floats. If the second number (the luminance) is greater than or equal to the first (the threshold), the result is 1.0 and that particular pixel is bright enough to pass the threshold requirements. If the luminance falls below the threshold, the pixel is too dim and the result is 0.0.
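
Written as plain Swift, step() is just a comparison that hands back a float (this is a stand-in for the GLSL built-in, not code from GPUImage):

func step(_ edge: Float, _ x: Float) -> Float {
    return x < edge ? 0.0 : 1.0   // 0.0 when x falls below the edge, 1.0 otherwise
}

let brightEnough = step(0.5, 0.7)   // 1.0: clears a 0.5 threshold, so the pixel goes white
let tooDim       = step(0.5, 0.2)   // 0.0: below the threshold, so the pixel goes black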

Finally, we map this result to our “return” statement, the gl_FragColor. If the pixel passed the threshold, then our RGB values are (1.0,1.0,1.0), or white. If it was too dim, the then RGB values are (0.0,0.0,0.0), or black.

GPUImageSolarizeFilter

According to the algorithm that describes the solarization process, you use a luminance threshold to determine if a pixel receives an effect or not. Instead of the effect making the pixel either white or black, we want to know if the pixel should be left alone or if its color should be inverted.

The only line of code that the color inversion filter has that the threshold doesn’t have is a different gl_FragColor:

// Color inversion gl_FragColor
gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.w);

// Luminance Threshold gl_FragColor
gl_FragColor = vec4(vec3(thresholdResult), textureColor.w);

I am embarrassed to say how long it took me to figure out how to combine these two filters. I thought about this for a long time. I had to think through all of the logic of how the threshold filter works.

The threshold filter either colors a pixel black or white. That is determined in the thresholdResult variable. This means that we still need the result in order to figure out if a pixel receives an effect or not, but how do we modify it?

Look at this part of the gl_FragColor for the color inversion:

(1.0 - textureColor.rgb)

Where else do we see 1.0 being used in the threshold shader? It’s the result of a successful step() function. If it fails, you wind up with 0.0. We need to change the gl_FragColor from the color inversion filter to have the option to return a normal color or an inverted color. If we subtract 0.0 from the texture color, then nothing changes.
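
The trick is easier to see with a single channel and made-up numbers:

// One color channel with a value of 0.8. thresholdResult is 1.0 when the pixel
// should be inverted and 0.0 when it should be left alone.
let channel: Float = 0.8
let invertedChannel  = abs(1.0 - channel)   // 0.2, the same result as the color inversion filter
let untouchedChannel = abs(0.0 - channel)   // 0.8, because abs() undoes the sign flip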

This is the final code I came up with to implement my Solarize Shader:

NSString *const kGPUImageSolarizeFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 
 uniform sampler2D inputImageTexture;
 uniform highp float threshold;
 
 const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);
 
 void main()
 {
     highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     highp float luminance = dot(textureColor.rgb, W);
     highp float thresholdResult = step(luminance, threshold);
     highp vec3 finalColor = abs(thresholdResult - textureColor.rgb);
     
     gl_FragColor = vec4(finalColor, textureColor.w);
 }
);

This is primarily composed of the same code to create the luminance threshold shader, but instead of mapping each pixel to be either black or white, I am using that result to check to see if I am inverting my colors. If the colors need to be inverted, then the thresholdResult is 1.0 and our formula moves forward as usual. If thresholdResult is 0.0, then our texture color remains the same, except now it is negative, which is why I wrapped it in an abs() function.

Completely solarize shader. Will try to get a better picture later.


Takeaways

One of the big things I keep harping on with this blog the last year or so is that to be a good engineer, you must have an understanding about what you’re trying to accomplish.

Breaking down a shader that you want into an algorithm helps you to tease out what pieces of functionality you need to get the result you want. In a lot of cases, the smaller pieces of functionality are already out there in the world somewhere and you can reuse them to build more complex things.

You also need to be able to read through those shaders to figure out how they work so you can know where the pressure points are to change the code. Since shaders are so small, you can usually take one line at a time and break down what it does. If there is a function being used that you’re unfamiliar with, Google it. You don’t always need to understand all of the reasons something is being used in order to implement it, as was the case with the dot() function. I was able to have a good enough grasp of what it did to understand why it was needed in my shader, which was all I really needed.

This stuff can be intimidating, which is why it’s important to spend some time figuring out why something works and, more importantly, figuring out what YOU want to do with it.

I will be following this post up tomorrow with instructions for how to add a shader to the GPUImage framework, including some other parts of the shader code that I did not go over here.

Getting Metal Up and Running: Metal Template

Note: The code from this tutorial is based on this tutorial from Ray Wenderlich. That tutorial was written in 2014 and was not updated for Swift 2.0, so there are a few changes I have made to my template to bring it up to date. I am also writing my own explanations about what each part of this process does. If you just want something that works, you can download the template. If you want something easier to skim, I suggest looking through the Ray Wenderlich tutorial.

Metal was announced at WWDC 2014. It was the most exciting announcement of WWDC for approximately five minutes until Swift was announced.

I was doing a high level Swift talk in 2014 when I didn’t know the framework very well yet and I just wanted to give people an idea about why Metal was important. People were understandably unhappy that I didn’t show them how to code Metal. I am rectifying that mistake.

As was the case when Metal was first announced, you can’t test Metal code in the simulator. You have to build on a device with an A7 chip or later. The most primitive iOS device you can use for this is the iPhone 5S.

The goal of this project is to create a template that can be used as a base to just get Metal up and running. This doesn’t do anything other than render a color to the screen. I will take this template and add vertex buffers and geometry in a later post to explain how that process works. I didn’t include those in this project because I didn’t want to include anything that would need to be deleted by the programmer before it could be used.

Let’s get started!

Create Project and Import Frameworks

If you want to walk through the process of building the template rather than just downloading it from GitHub, you can follow along with the directions.

Open Xcode and create a Single View Application. You can name it anything you want. If you are using this as a basis for a project, go ahead and name it whatever it will eventually be. Choose Swift as the language. I made mine Universal, but if you know it will only be on an iPhone, go ahead and just choose iPhone.

There are a few frameworks that you will need to import before you can get Metal up and running.

Add these import statements at the top of your ViewController:

import Metal
import QuartzCore

Metal is obvious. You need to import the framework to do anything with Metal. QuartzCore is a little less obvious. We’ll get to that soon.

Most of the work we are doing will be in the ViewController class. Unless otherwise specified (as in the Shader code), add all code to the ViewController.

MTLDevice

First thing you need to set up for a Metal project is the MTLDevice. The MTLDevice is the software representation of the GPU. I go over the properties of MTLDevice in a previous blog post. We don’t need access to everything that MTLDevice does for this simple template.

At the top of the ViewController class, add the following property:

// Properties
var device: MTLDevice! = nil

You will be seeing and using the device property a lot in this project. The MTLDevice is the manager of everything going on with your Metal code. You will be instantiating this (and all other properties) in the viewDidLoad() method.

There is only one safe way to initialize your device property:

device = MTLCreateSystemDefaultDevice()

At this point in time, every Metal-capable device only has one GPU. This function returns a reference to that GPU.

CAMetalLayer

Note: You might see an error at some point in this code block that says CAMetalLayer not found. This drove me crazy for a really long time. I downloaded Apple’s template that utilized MetalKit that does not have an instance of CAMetalLayer because it is implemented behind the scenes. I thought it was something that was phased out.

This is an ambiguous compiler error. At some point Xcode had switched the build destination from a physical device to the simulator. Rather than saying the code won’t build on the simulator, it says that CAMetalLayer was not found.

If you ever get weird, ambiguous compiler errors while coding Metal that say something doesn’t exist, check the build target!

Remember back at the beginning of the blog post where I told you to import QuartzCore? You are importing that for one purpose: To get access to CAMetalLayer.

In iOS, everything you see on your screen is backed by a CALayer. Every view, every button, every cell is backed by a CALayer. CALayer is like the canvas you use to render things to the screen. If you want to know more about CALayer there are a few good tutorials on it here and here.

Since Metal is a different beast than a normal UIView, you need to create a special kind of CALayer: CAMetalLayer. This layer doesn’t live in the Metal framework, it lives in the QuartzCore framework, along with the special layer for rendering different flavors of OpenGL ES.

Create a CAMetalLayer property under your MTLDevice property:

var metalLayer: CAMetalLayer! = nil

Initializing the CAMetalLayer is a little more complicated than initializing the MTLDevice. There are four settable properties on the CAMetalLayer that we care about:

  • Device
  • Pixel Format
  • Framebuffer Only
  • Drawable Size

Device is self-explanatory. It is simply the MTLDevice we created in our last step.

Pixel format is your chosen MTLPixelFormat. There are over a hundred cases in the MTLPixelFormat enum, but only three of them can be used on the CAMetalLayer: MTLPixelFormatBGRA8Unorm, MTLPixelFormatBGRA8Unorm_sRGB, and MTLPixelFormatRGBA16Float. The default pixel format is MTLPixelFormatBGRA8Unorm, so I am just going to leave it there unless I have some reason to change it.

Framebuffer only is an optimization option. There are two kinds of MTLResource in Metal: MTLTexture and MTLBuffer. Textures that can be sampled or that allow pixel read/write operations are more expensive for the system to manage. If you know the layer’s textures will only ever be used as render targets, setting framebuffer only to true tells the system it never has to worry about those cases and lets it optimize accordingly.

Drawable size specifies how large your texture is. Since we are not currently using a texture, we don’t need to set this.

Add the following code to your viewDidLoad() method:

// Set the CAMetalLayer
metalLayer = CAMetalLayer()
metalLayer.device = device
metalLayer.pixelFormat = .BGRA8Unorm
metalLayer.framebufferOnly = true
metalLayer.frame = view.layer.frame
view.layer.addSublayer(metalLayer)

You’re initializing the metalLayer, then setting the properties on it that are relevant to this project. We’re leaving the pixel format at the default, and since we never sample from or write back to the drawable’s texture, we’re setting framebufferOnly to true.

As with all layers, we’re setting the frame and adding it as a sublayer. Huzzah!
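As an aside, another pattern you will see in the wild (not used in this template) is a small UIView subclass whose backing layer is a CAMetalLayer, rather than adding a sublayer and managing its frame by hand. A rough sketch of what that might look like, with a hypothetical class name:

import UIKit
import QuartzCore

class MetalView: UIView {
    // Tell UIKit to back this view with a CAMetalLayer instead of a plain CALayer.
    override class func layerClass() -> AnyClass {
        return CAMetalLayer.self
    }

    // Convenience accessor so the rest of the code can configure the Metal layer.
    var metalLayer: CAMetalLayer {
        return layer as! CAMetalLayer
    }
}

The nice thing about that approach is that the backing layer automatically tracks the view’s size, so you don’t have to update the frame yourself when the view changes.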

Command Queue

We will need an object that will organize the commands that we need to execute. In Metal, that object is the MTLCommandQueue. The MTLCommandQueue is our Space Weaver that keeps all of our command threads straight and running properly.

Add this property to the top of your View Controller class:

var commandQueue: MTLCommandQueue! = nil

This declares a command queue property that is available to all of our methods within our View Controller. It will be used in a couple of different places and we want it to remain consistent and not go out of scope.

Up next, we need to set the command queue. Add this line of code into your viewDidLoad() function at the bottom:

// Set the Command Queue
commandQueue = device.newCommandQueue()

We’re almost done implementing all the code we need in our viewDidLoad() method. We just need to set up the functionality to actually draw to the screen. For that, we need a display link.

DisplayLink

We now have our CAMetalLayer sublayer in place and we can draw to the screen. But how do we know when to trigger the screen to redraw?

It needs to redraw any time the screen refreshes. Since this is a common task in iOS, there is a built-in class available to do this: CADisplayLink.

CADisplayLink exists to sync your drawing to the refresh rate of the display.

Add a new property to your ViewController class:

var timer: CADisplayLink! = nil

In order to set up your display link, you need to tell it what the target is and what code needs to be run every time the link is triggered. Since we are creating this for the view controller, the target is self. We just need the program to render, so I called this selector renderloop:

// Set the Timer
timer = CADisplayLink(target: self, selector: Selector("renderloop"))
timer.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSDefaultRunLoopMode)

After you initialize the display link, you need to register it with a run loop. We are registering it with the main run loop on the default mode so it can coordinate the redrawing of the screen with the display’s refresh.
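One caveat the template glosses over: a CADisplayLink keeps firing (and keeps a strong reference to its target) until you pause or invalidate it. If this view controller can ever go off screen, something like the following sketch would stop it from rendering in the background:

override func viewWillDisappear(animated: Bool) {
    super.viewWillDisappear(animated)
    // Stop asking for frames while the view is off screen.
    timer.paused = true
}

override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    // viewDidLoad has already created the display link by the first time this runs.
    timer.paused = false
}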

You have specified renderloop as your selector, but it hasn’t been created yet. Go ahead and do that now at the bottom of the class:

func render() {
  // TODO
}
 
func renderloop() {
  autoreleasepool {
    self.render()
  }
}

So, we didn’t just create the renderloop method, we created another one as well. The renderloop method just calls a render method wrapped in an autoreleasepool. Wrapping each frame in an autoreleasepool ensures that autoreleased objects, like the drawable we grab every frame, are cleaned up promptly instead of piling up. We’ll be setting up that render method next.

Render Pipeline

Rendering is the process where the program takes all of the information it has about the scene and combines it to determine the color of each pixel on the screen.

This is a rich and immersive topic that I would like to explore more fully in time, but for now I am trying to make sure I include the most basic information necessary to understand the code needed to do the minimum here.

If you would like a better explanation of what rendering is and how it works, there is a great explanation of it and the math involved in Pixar in a Box on Khan Academy.

Now that we have our display link in place, we would like to be able to set up the code necessary to take a bunch of numbers and small shader programs and turn them into something cool we see on the screen.

To do that, we need to set up a rendering pipeline. A rendering pipeline takes all the inputs you have (vertices, shaders, optimizations, etc…) and coordinates them to make sure that each vertex and fragment is produced properly on the screen.

This will require us to put a number of pieces in place. I will go over each one and explain its role in the process.

Render Pass Descriptor

We are going to go into that empty render method and start to fill it out. The first thing we are going to create in that method is the render pass descriptor. The render pass descriptor is a collection of color, depth, and stencil information for your renderer. In this simple template we are not concerned with the depth or the stencil properties, so we are focusing on the color properties.

Begin filling out your render() method with the following code:

func render() {
    let renderPassDescriptor = MTLRenderPassDescriptor()
    let drawable = metalLayer.nextDrawable()
    renderPassDescriptor.colorAttachments[0].texture = drawable!.texture
    renderPassDescriptor.colorAttachments[0].loadAction = .Clear
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 1.0, green: 0.0, blue: 1.0, alpha: 1.0)

First off, you’re creating an MTLRenderPassDescriptor(). Before you can set up its color attachment, you need a texture to hook it up to, which is where the next line comes in.

The render pass descriptor needs a texture to render into. That texture is maintained by the instance of the CAMetalLayer. The CAMetalLayer has a method on it called nextDrawable(). Internally, CAMetalLayer maintains a small pool of textures for displaying content, and this method grabs the next one that is available for use.

Our render pass descriptor needs to know a few things about its color attachments: texture, load action, and clear color. Our texture is whichever texture is up next in the CAMetalLayer’s pool of drawable textures.

We have three options for our load action: Don’t Care, Load, and Clear. Don’t Care allows each pixel to take on any value at the start of the rendering pass. Load preserves the previous contents of the texture. Clear writes a clear value to every pixel. We want to use Clear here because we want to overwrite whatever color currently exists.

Lastly, we’re setting a clear color. I have arbitrarily set it to a nice magenta color. When you finally pull the switch on this application, you’ll know this succeeds when your screen turns a lovely shade of pink.
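One more caveat: nextDrawable() returns an optional and can occasionally come back nil (if the layer has temporarily run out of drawables, for instance), which is why the code above force unwraps it. If you would rather skip a frame than crash in that situation, the start of render() could be written with a guard instead. A sketch of that variation:

// Alternative opening for render(): bail out of this frame if no drawable is available.
guard let drawable = metalLayer.nextDrawable() else { return }

let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .Clear
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 1.0, green: 0.0, blue: 1.0, alpha: 1.0)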

Create the Command Buffer

Everything that we do deals with buffers. You need to send commands to a buffer in order to get them to be executed and to see something happen on your screen.

In order to send your commands to a buffer, you need to create one. Let’s do that now.

Add the following code to the bottom of your render() method:

// Create Command Buffer
let commandBuffer = commandQueue.commandBuffer()

The MTLCommandBuffer, like the MTLDevice, is a protocol. You don’t instantiate it directly; the only way to get a command buffer object is to call the commandBuffer() method on an instance of MTLCommandQueue.

Earlier in this code we made an MTLCommandQueue property that was available to the entire View Controller. This is why. We needed to be able to access the same command queue instance we created when the view loaded.

Render Pipeline State

We’re going to briefly jump back to our viewDidLoad() method to add one last piece that we need to set up our rendering pipeline. We need a render pipeline state.

Create a new property at the top of the class along with the others:

var metalPipeline: MTLRenderPipelineState! = nil

Our pipeline state exists to interface with our shaders. Don’t worry, we will get to those.

In order to access our shaders, we are going to need some code:

// Set the Rendering Pipeline
let defaultLibrary = device.newDefaultLibrary()
let fragmentProgram = defaultLibrary?.newFunctionWithName("basic_fragment")
let vertexProgram = defaultLibrary?.newFunctionWithName("basic_vertex")

let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.vertexFunction = vertexProgram
pipelineStateDescriptor.fragmentFunction = fragmentProgram
pipelineStateDescriptor.colorAttachments[0].pixelFormat = .BGRA8Unorm
        
do {
    try metalPipeline = device.newRenderPipelineStateWithDescriptor(pipelineStateDescriptor)
} catch let error {
    print("Failed to create pipeline state, error \(error)")
}

The first thing you will notice is that we are creating a default library.

In Metal, shaders are instances of MTLFunction objects. A Library is a collection of MTLFunction objects.

The purpose of your MTLRenderPipelineDescriptor is to look at your collection of shaders and tell the renderer which fragment and vertex shader your project is going to use.

We’re creating a new default library. After that, we are creating instances of MTLFunction objects by reaching into the default library and looking for them by name.

After we have our MTLFunction objects, we need to tell the pipeline state descriptor which vertex and fragment shaders we want to use. Notice that the descriptor’s color attachment pixel format matches the pixel format we set on the CAMetalLayer earlier; those two need to agree.

Since creating the pipeline state can throw an error, we need to wrap it in a do-try-catch block.
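If newFunctionWithName() ever hands you back nil, it usually means the name doesn’t match anything that was compiled into the library. A quick debugging trick (not part of the template) is to print everything the default library contains:

// Print every shader function the default library knows about.
if let names = defaultLibrary?.functionNames {
    print("Functions in the default library: \(names)")
}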

At this point you might be wondering where those vertex and fragment shader functions came from. Good question, you’re about to generate them.

Metal Shaders

Note: I am creating both a vertex and a fragment shader even though this template has no vertex buffer. I am not sure if you need a vertex shader if there is nothing for it to shade, but it builds and this is something you would be adding later when you did have a vertex buffer. It might not be strictly necessary, but I am leaving it here anyway.

In GLSL, you need both a vertex and a fragment shader because they are two halves of the same program. If the Metal Shading Language follows the same paradigm, then both a vertex and a fragment shader are necessary for every render pipeline you build.

This section is going to be disappointing. I haven’t had a chance to figure this out very well yet, so I apologize for the lack of detail about how I came up with these parts. This will be rectified later.

This is the one part of your code that is not going into the ViewController class. You get to create a new file now!

Create a new file and choose the “Metal File” template. Name it “Shaders.metal”.

The Metal Shading Language is based on C++. I am not as familiar with it as I would like, so sadly this section is going to be less in depth than I would like. I promise there will be many blog posts about this in the next few months.

You are going to create two different shaders: vertex and fragment. At this point you might be wondering how these get into your default library. The way the default library works is that it automatically includes any function that is included in a .metal file. You could create hundreds of vertex and fragment shaders in various files and all of them would be accessible from the default library. I believe that you can create multiple libraries, but that is just one of the things I am going to explore more fully in the future.

This is the code for the vertex shader:

vertex float4 basic_vertex(const device packed_float3* vertex_array [[ buffer(0) ]], unsigned int vid [[ vertex_id ]]) {
    // Read position number vid from the vertex buffer and pass it through as a 4D position.
    return float4(vertex_array[vid], 1.0);
}

This is the code for the fragment shader:

fragment half4 basic_fragment() {
    // Color every fragment opaque white.
    return half4(1.0);
}

This code comes directly from the Ray Wenderlich tutorial. I don’t know how it was generated. I don’t want to speculate about how this works because I don’t want to have something out there that I am completely wrong about, so just know this works and I will explain why in a later blog post.

Render Command Encoder

Now that we have a buffer, we need to add some commands to it.

We’ve been creating a bunch of different things that haven’t been applied anywhere yet. We’re ready to put those things to work.

Add this code to the bottom of the render() method:

// Create Render Command Encoder
let renderEncoder = commandBuffer.renderCommandEncoderWithDescriptor(renderPassDescriptor)
renderEncoder.setRenderPipelineState(metalPipeline)
renderEncoder.endEncoding()

The command encoder needs to know about the render pass and the pipeline state. We already created both of these things and went into some depth about what they do. We’re simply adding these specifications to the render encoder. After we add those to the encoder, we need to let the render encoder know we’re done by ending our encoding.
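This is also the spot where the actual draw calls will eventually go, between setting the pipeline state and ending encoding. As a preview of what a later post will add once there is a vertex buffer (the vertexBuffer constant here is hypothetical, since this template doesn’t create one), it would look roughly like this:

// Not part of this template: hand the encoder some geometry and ask it to draw.
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, atIndex: 0)
renderEncoder.drawPrimitives(.Triangle, vertexStart: 0, vertexCount: 3, instanceCount: 1)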

Commit the Command Buffer

Almost done! Just two more lines of code:

// Commit the Command Buffer
commandBuffer.presentDrawable(drawable!)
commandBuffer.commit()

We are telling the command buffer to present the drawable we grabbed from the CAMetalLayer once the GPU has finished rendering into it. Finally, we are committing the command buffer, which sends it off to be executed.

Bring it all Together

I have placed this code in a GitHub repository.

There are a lot of aspects of this project that I am looking forward to exploring in more depth over the next few months.

The purpose of this blog post was to try and flesh out some of the code a little bit. Most of the blog posts I have seen involving this kind of template don’t really get into the nuts and bolts about what each of these things does and why you need them in your project.

It would not be useful to delve into the nitty gritty of everything in the framework. This post was long enough as it is!

I hope that this was a little more helpful if you’re like me and you wonder what things do and why you need them instead of just being satisfied with having something that works.

This post took over a month for me to write. It’s possible there are aspects of this post that are disjointed or that I forgot to include. I would appreciate being made aware of any issues by being contacted on Twitter. I would also appreciate knowing if there are any specific topics in Metal that people are interested in hearing about.

Thanks for bearing with me through this incredibly long post!

The Space Weaver is busy rendering fragments and vertices.

Getting Metal Up and Running: MTLDevice

I was really hoping last week to get a post out about creating a default Metal template, but I have been sick for a few weeks. Last weekend I sat in my chair staring at the TV because that was all I could deal with.

I am also recovering from a migraine that I have had for the past four days.

I was hoping to be able to go through my Metal template and remove all of the bells and whistles that were added by the Apple engineers to make the template do something and not just be a blank slate. I don’t feel comfortable enough to go through this and delete things yet, so instead I would like to talk about the various Metal objects and components necessary to create a baseline Metal project and what purpose each of them serves.

I am posting this project on GitHub. I will be modifying it in the next few weeks to make this into a functional template anyone can grab if they want to start a Metal project but don’t want to deal with deleting boilerplate code. It’s not at that point yet, but hopefully before the end of the year!

The first, and most important, property I want to talk about is the MTLDevice. The documentation for MTLDevice is here.

Why Does Our Code Need an MTLDevice?

The MTLDevice is the software representation of the GPU. It is the way that you are able to interface with the hardware.

You might be wondering why this is something you would need. The iPhone is a cohesive unit that just kind of works when you program it. Why do you need to initialize a variable to represent part of your phone?

Whenever you create software that is going to interact with external hardware, you need to have a way to interface with it in your code.

When I was working at SonoPlot, we were writing control software for robots. We had classes for the robotics system and for the camera we were using.

The camera class had to have functions for every action we needed it to perform. It also had a property for each thing that we needed to either get or set. One example is resolution. Some of our cameras had variable resolution and others were fixed. We had to have a property that would either retrieve what that resolution was or set the resolution on the cameras where it could be changed.

Even though we don’t necessarily think of the iPhone in terms of “external” hardware, it is. There are different parts of the iPhone that you can interface through the Cocoa frameworks. Speaking of cameras, the iPhone has a camera.

The AVCaptureDevice in AVFoundation is the interfacing class that Apple provides to allow you to talk to the camera. It fulfills the same function that our Camera and Robotics classes had in the robotics software.

Just as we needed to create a camera and robotics class to talk to our hardware, you also need to create an object that talks directly to your GPU.

One of the promises of the Metal framework is that you, as the developer, would have far more control over allocating and deploying resources in your code. This is reflected in the methods associated with the MTLDevice protocol.

Protocol, Not Object

The first thing to point out is that MTLDevice is not a class, it’s a protocol. At this point I am not entirely certain why it was designed this way.

The code to create the device is as follows:

let device: MTLDevice = MTLCreateSystemDefaultDevice()!

If you hold Option and hover over the device variable, it indicates that it is an MTLDevice object. In order to be an object, it must be an instance of some class. My best guess is that this is an instance of a private class that conforms to the MTLDevice protocol.

I am fascinated as to why this was implemented in this manner. I plan to look into this further, but this is one of those things that having an understanding of why it was done this way doesn’t necessarily help me get things done, so I am going to try and leave it alone for now.

Functionality

The MTLDevice is responsible for the following areas of functionality:

Identifying Properties

There are many more Apple devices and chip types that support Metal programming now than there were when it was initially announced. Back when it came out in iOS 8, we just had the iPhone 5S and one of the iPad models with an A7 chip.

Now, we have a lot of devices with a lot of different chip sets that all support Metal programming, including the Mac.

We’re in a similar situation with these chips to the one I was in supporting the legacy robotics software at SonoPlot. We had three different types of cameras that all had different properties, and we couldn’t assume they all behaved the same way. Likewise, you can’t assume that the GPUs in all of these devices are the same.

So, just as we did with the robotics software, there are several properties that are retrievable from the GPU.

The properties you can access on each GPU are:

  • Maximum number of threads per thread group
  • Device Name
  • Whether this GPU supports a specific feature set
  • Whether this GPU supports a specific texture sample count

Looking over this list of properties, it looks like the chips have gained more and better capabilities as they have progressed. The A7 chip in my iPhone 5S is not as powerful as the A9X chip in my iPad Pro.

If you want to target less powerful devices, it might be useful to get familiar with the idiosyncrasies of the chips in these devices.
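Here is a rough sketch of what querying a few of these properties looks like, using the same Swift API names as the rest of these posts. The particular feature set and sample count I check here are just examples:

import Metal

let device = MTLCreateSystemDefaultDevice()!

// Which GPU are we talking to, and how big can a compute threadgroup be?
print("GPU: \(device.name)")
print("Max threads per threadgroup: \(device.maxThreadsPerThreadgroup)")

// Capability checks, useful when targeting older chips.
let supportsFamily1 = device.supportsFeatureSet(.iOS_GPUFamily1_v1)
let supportsFourSamples = device.supportsTextureSampleCount(4)
print("GPU Family 1 v1: \(supportsFamily1), 4x MSAA: \(supportsFourSamples)")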

Creating Metal Shader Libraries

As a protege of Brad Larson, I am very interested in exploring shaders in Metal.

One of the big, stupid questions I have with Metal is how the shaders are set up. Every project I have seen (including the template) has had one shader file named “Shaders.metal.” Does that mean that all shaders go into one file? Are these just projects that use one shader? Is this basically set up the same way that OpenGL ES 2.0 is set up?

The documentation says that all of the “.metal” files are compiled into a single default library, so that tells me that I can have more than one shader file in a project.

I am looking forward to exploring the shader libraries in the future, but right now just making a note that this is a responsibility of the MTLDevice.
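One thing worth noting from the documentation: the precompiled default library is not the only option. MTLDevice can also compile a library from a string of Metal source at runtime. A rough sketch, assuming the device constant from earlier in this post and a trivial fragment function written inline for illustration:

let shaderSource = "fragment half4 solid_white() { return half4(1.0); }"
do {
    // Compile a library from source at runtime instead of using the precompiled default library.
    let runtimeLibrary = try device.newLibraryWithSource(shaderSource, options: nil)
    print("Runtime library contains: \(runtimeLibrary.functionNames)")
} catch {
    print("Failed to compile shader source: \(error)")
}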

Creating Command Queues

It is interesting to see that Metal has built-in command queues.

I know that Brad utilizes Grand Central Dispatch in GPUImage to make it work as efficiently as possible. OpenGL ES doesn’t have a built-in command queue structure, to the best of my knowledge. I know that an OpenGL ES context can only be used from one thread at any given time and that one of the promises of Metal was multithreading. If you’re going to multithread something, you need some way of keeping your threads straight.

Looking forward to exploring the Metal Command Queues further.

Creating Resources

The resources you are creating with the Metal device are buffers, textures, and sampler states.

Buffers should be familiar to anyone who works with graphics or audio. To the best of my memory, in OpenGL ES you have three buffers: one for the frame that is currently being displayed, one for the next frame to be displayed, and one waiting in the wings to be reused once the front buffer is swapped out.

Textures are also a familiar concept. In GPUImage the photo or video you are filtering is your texture, so this is fairly straightforward.

I have never heard of a sampler state, so I am interested to find out what this does.

Not a lot new or exciting here if you have any familiarity with OpenGL ES.
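For concreteness, here is a rough sketch of creating one of each resource type, again assuming the device constant from earlier; the sizes, formats, and filter settings are arbitrary examples:

// A small position buffer, copied from CPU memory.
let vertexData: [Float] = [0.0, 1.0, 0.0,  -1.0, -1.0, 0.0,  1.0, -1.0, 0.0]
let vertexBuffer = device.newBufferWithBytes(vertexData, length: vertexData.count * sizeof(Float), options: [])

// An empty 256 x 256 BGRA texture.
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(.BGRA8Unorm, width: 256, height: 256, mipmapped: false)
let texture = device.newTextureWithDescriptor(textureDescriptor)

// A sampler state describing how shaders should read from that texture.
let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.minFilter = .Linear
samplerDescriptor.magFilter = .Linear
let samplerState = device.newSamplerStateWithDescriptor(samplerDescriptor)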

Creating Command Objects to Render Graphics

This is where you set up your pipeline to render your graphics. In OpenGL ES 1.0 this was a fixed-function pipeline without shaders. In OpenGL ES 2.0, the programmable pipeline was introduced along with the ability to use shaders. Shaders made it possible to produce effects like shadows and ambient occlusion in graphics on an iOS device. Since we’re not taking a step backwards, there are commands here to set up your programmable pipeline.

Creating Command Objects to Perform Computational Tasks

One of the things that really excited me about Metal was the ability to do general purpose GPU programming (GPGPU) in iOS. I had hoped in iOS 8 to hear that OpenCL would be made available on iOS, so I was rather pleased to hear that this functionality was made available.

Jeff Biggus has been speaking about OpenCL for a few years and using the GPU for something other than just graphics processing. It is one of those things on my giant list of things I am interested in but haven’t had a chance to explore yet. This excites me and I am looking forward to writing a Metal project that utilizes this extra functionality.
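I have not written any compute code myself yet, but from the documentation the shape of it looks roughly like the sketch below: you build a compute pipeline state from a kernel function, then use a compute command encoder to dispatch it. The kernel name and threadgroup sizes here are hypothetical placeholders, not working code from a real project:

// Build a compute pipeline from a kernel function in the default library.
let library = device.newDefaultLibrary()!
let kernel = library.newFunctionWithName("my_kernel")!   // hypothetical kernel name
let computePipeline = try! device.newComputePipelineStateWithFunction(kernel)

// Encode a dispatch of 8 x 8 threadgroups, each 16 x 16 threads.
let commandQueue = device.newCommandQueue()
let commandBuffer = commandQueue.commandBuffer()
let encoder = commandBuffer.computeCommandEncoder()
encoder.setComputePipelineState(computePipeline)
encoder.dispatchThreadgroups(MTLSize(width: 8, height: 8, depth: 1), threadsPerThreadgroup: MTLSize(width: 16, height: 16, depth: 1))
encoder.endEncoding()
commandBuffer.commit()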

Thoughts

I am really excited about all of the functionality I see exposed to me in this protocol.

I know I basically just went through the documentation and didn’t necessarily tell you anything you couldn’t look up on your own, but I do think it helps to have some context about WHY this stuff is in here, not just that it exists.

I will be getting into the command queues and buffers in more depth in my next post because those are absolutely necessary for a minimum viable application. It’s helpful to know that these things are created and controlled by this master object.

I hope that this was interesting and useful. I know most of the documentation I am reading just mentions that you need to create an MTLDevice without exploring what its role is in the application. It’s a really vital part of the application and I hope that you have a better understanding and appreciation of what it does within a Metal program.

Getting Metal Up and Running: Part One

My current side project is working on trying to write at least one graphics post each week. I noticed that people tend to be asked to speak about things they write about. I also noticed that the vast majority of my posts for the last few months have been about depression and cooking and cleaning my house. I would prefer not to be a lifestyle blogger, so I am going to make a better effort to write more about what I am interested in, namely graphics.

Over the next few months I would like to write about frameworks (like Metal and Scene Kit), 3D mathematics, and shader applications. I am going to try to write something about what I am doing. I might not be able to get a whole sample project up and running and it might take a few weeks to get something done. But I plan to try to write about what I have learned and to chart my progress through my various explorations.

Getting Started with Metal

For a while there was a really good OpenGL ES template available in Xcode. Or so I have been told. Now if you try to make an OpenGL ES template, first off it’s difficult to even find. Secondly, the template is full of a lot of garbage. It’s similar to the Sprite Kit template, which includes the rocket ship and various other assets. You can’t just open a template that renders a solid color on the screen.

My initial goal is to get a good Metal template without the garbage, one that I can post on GitHub and make available to people who just want to get up and running.

Metal By Example

I recently bought Warren Moore’s Metal by Example book. Warren was kind enough to complete this book before abandoning us to go and work for Apple on the Metal frameworks. I didn’t realize he was going back until he did, but he’s been kind enough to answer questions about the book and Metal.

My plan was to go through the first chapter of the book and set up a Metal project, but I ran into some issues.

The book is written in Objective-C. I do not want to be one of those people who can’t or won’t read a book in Objective-C, but it does make things a little difficult when you are not used to it. I have difficulty switching my brain from one language to another, so this was one difficulty for me personally.

I also could not get the code to build. The compiler could not find the CAMetalLayer. I realized I was supposed to import Metal, but I had forgotten how to do this. Either this is already done for you in Swift, or I have spent so much time on my robotics project with Brad Larson (where we wrote most of our own code) that I simply didn’t remember how to link anything, and the directions were not included in the book. I found the directions on Warren’s companion web site. So if you are going through the book, I highly recommend looking at the site because it has content that is not in the book.

(I don’t want this to come off as a complaint against Warren. I greatly appreciate his efforts with the book and as an author myself, I can totally see me just wanting to get the damn thing done and have it be gone. I am not saying this is what he did, but if this was my book I totally would have done that. I still highly recommend his book and hope I don’t hurt his feelings.)

Even with this additional help, I could not get my code to build. It had trouble finding the QuartzCore/CAMetalLayer.h file. I imported Quartz and Core Animation, but no dice.

At this point I was frustrated and thought about giving up, but I decided to try and load Apple’s base Metal template.

Apple’s Built-In Templates

The Metal template is hidden in the Game templates options. If you navigate through the game templates, there are four types of templates here: Sprite Kit, Scene Kit, OpenGL ES, and Metal. Just because these are in the Game templates doesn’t mean you can only make games with these!

(Screenshots: the Game template category, the Metal template, and the project options.)

So I navigated through and got the Metal base project in Swift. Yay! The project had a compiler warning! WTF?!

I was on my second glass of wine and was massively annoyed. Nothing I was doing would even build.

I looked up the compiler warning and realized that Metal still does not build in the iOS simulator. When Metal first came out it did not work in the simulator in Xcode 6. I had forgotten that and assumed it would be fixed by now, but apparently not. If you are working with Metal, bear that in mind when sample code you download comes preloaded with a compiler warning.

I needed to build on my phone, so I plugged it into the computer. It built, but then it would not run. I got a warning on both the phone and Xcode about there being a permissions/privacy issue. My phone was not set up to run my code because it was an unknown developer.

So that began the great search through the Settings on the phone to grant my developer account permission to load code on my phone.

Here is a series of screenshots of where I found the ability to do this in the Settings. I blocked out the number after my email address because I am not sure whether it is something I shouldn’t make public. I am sure someone will frantically tweet me about some proprietary information being in these screenshots that I should not share. BTW: I never check the email address on here, I just use it for my developer account, so please do not spam me there; I am plenty available on Twitter.

I don’t remember running code on a device being this difficult. It’s possible I have never tried running my own app on my phone with my own developer account. I had an educational account in school and then most of my other apps were for other companies. I think it might have something to do with TestFlight, maybe?? I would be interested to hear if things got more difficult in Xcode 7.

Finally, after all of this, I got the code to build and run on the phone! Success!!

The base template is the usual triangle with a different color at each vertex floating around in space.

At this point I could have just deleted that stuff and had the base template that was my goal for this week, but I don’t want to do that yet. I would like to look over this code as-is to try and figure out how the vertices are read into the program and how the rotation is applied.

I want to figure out how to import a 3D model from a program like Maya or Blender. One of the big things that freaked me out about 3D graphics programming was the idea that I would have to construct my shapes by hand in the code rather than importing an XML file representation of 3D objects.

Probably my goal for next week will be to go over the functionality of how this base generated template works before removing the floating triangle and uploading this to GitHub. I would like to use this template as my starting point for all of my future projects. I would also like to figure out what I forgot to import and connect in Warren’s project for the CAMetalLayer object because it is bothering me.

I didn’t get as much done this weekend as I wanted to. I talked in an earlier post about having a rough week, and just getting this written was a bit of a struggle. I am hoping future weeks will be more productive. But just doing something is better than not doing anything, I guess.