Streaming WWDC 2016

I have never had the privilege of attending WWDC. Most years (including this one) I never bothered to apply to the lottery because I couldn’t afford to go. The one year I could afford to go, I didn’t win a ticket and I decided I would rather have the money as a buffer than go out to WWDC. This was the correct decision.

I attend a lot of conferences. I speak at a lot of conferences. Unfortunately, I have had some difficulty actually attending sessions at conferences. I have panic attacks when I am trapped in a room full of people and I can’t get up and walk around. This was one reason I was never super disappointed about not going to WWDC the last few years. The idea of being stuck in a room for a whole week makes me feel like curling in a ball and crying. I go to conferences to network and drink with my friends. Now I am at the point where it’s just networking since I gave up drinking.

One thing I had forgotten about was discovering new things by attending sessions I hadn’t thought to go to. When I went to my first CocoaConf, I encountered a lot of interesting things because I wanted to watch Jonathan Penn and Josh Smith present.

When Swift was introduced two years ago, most of the conference sessions revolved around talking about Swift. I like Swift, it’s a neat language, but I am sick of talking about it. I am tired of hearing people talk about side effects and protocols and immutable state. I miss the first few years I was an iOS developer when people talked about frameworks and weird little nooks and crannies of the Cocoa architecture.

Taken together, this has created something of a perfect storm where I got burned out on iOS development. I got sick of talking to people about it because it always boiled down to Swift and arguing about code purity and a bunch of other bullshit.

I saw the Keynote this year and had absolutely no enthusiasm for anything. I was irritated and cranky and didn’t want to deal with anything. But I noticed that this year Apple decided to stream most of the sessions live. The sessions were always available online later, and last year they started streaming select sessions live. I watched the Swift ones because it was for my job and Swift was still new and exciting. But I rarely watch the sessions afterward, because when I do, I sit there and pause every few minutes to try to process the vast amount of information being presented. There is a massive backlog of sessions I think would be nice to watch but never get around to watching. I did not think I would do anything this year.

I was wrong.

Streaming the sessions live has completely changed my life this week.

I work from home and so I just kind of threw the live stream on while I worked on stuff. I have it on in the background. I can’t pause the live stream, so I am not poring over every second of each video minutely. I am getting an overview of what they are talking about so I can go and research things later. I also have a team of people on various Slack channels who are watching it with me that I can chat with about the things we find new and exciting.

There were five whole sessions on Metal this year. The last two years I only got through the first Metal video because I felt like I didn’t understand it well enough to move on to the next video. This year, since they were just on, I could passively leave it on and get through all the videos. If this was a normal year, I would not have encountered the thing that has excited me the most this year, which is doing neural networks in Metal. That was introduced in “What’s New in Metal Part 2,” which was the fourth Metal video streamed. I did not need all the context from the first three videos to get excited about the new stuff in Metal.

I got to watch all the videos about GameplayKit, Photos, SpriteKit, etc… All of these technologies that I have been interested in but in a passive way were all just there for me to listen in on. I got introduced to so many things I didn’t know about in obscure frameworks that don’t get a lot of love because most people need to pay the bills and so they don’t do sessions on SceneKit.

This is what it was like at the beginning when I started going to conferences. I would discover so many new things that I would go home excited to get working on something. I haven’t felt this way for the last two years.

I worked for Brad Larson for a year. He told me that the reason he got into making Molecules and got into OpenGL and doing GPUImage was because he had a free period at WWDC and just decided, on a whim, to watch a session on OpenGL. It’s crazy to me about how things you do on a whim or by chance can completely change your life. By not being exposed to these sessions over the last few years, I have been cutting myself off from these chance encounters to find something truly special that I can learn and make my own.

It has been a great gift to get to participate with WWDC from home. Being able to get up and walk around during a session and cuddle with Delia while listening to people give their talks has helped me tremendously. I can talk to people on Slack from all over the world about the sessions as they happen so we can all be excited together. I know that people get something out of being there and getting to talk to the engineers, but for someone with mental health issues that prevent them from being able to be comfortable with massively large amounts of people, this has been a godsend.

I am planning in the future to go back and watch all the videos from previous years that I never watched because they took too long. I can have them on in the background while I work on other things. I can pick out the parts that interest me and look into them further.

For the first time in a really long time, I am excited about iOS development. Thank you Apple for giving that back to me.

How to Write a Custom Shader Using GPUImage

One of my goals over the last year or so was to do more coding and to specifically do more GLSL programming. You learn best by actually working on projects. I know that, but I have had some trouble making time to do things on my own.

I want to go over my thought process in figuring out a filter I wanted to write, how I was able to utilize the resources available to me, and to hopefully give you an idea about how you can do the same.

You can clone GPUImage here.

Solarize Image Filter

One thing I had trouble with in GPUImage is the fact that it is too comprehensive. I couldn’t think of an image filter that wasn’t in the framework already.

So I started thinking about Photoshop. I remembered there was a goofy filter in Photoshop called Solarize.

Since I knew that Brad was more concerned with things like edge detection and machine vision, I figured that it would not have occurred to him to include a purely artistic filter like that. Sure enough, there was no solarization filter. Jackpot!

After figuring out something that wasn’t there, the next question was how to create one. I initially was going to use this Photoshop tutorial as a jumping-off point, but I wondered if there was a better way. I Googled “Solarize Effect Algorithm” and found this computer science class web page that gave a language-agnostic description of how the effect works.

What is Solarization?

Solarization is an effect from analog photography. In photography, one important component of a photograph is its exposure time. Photographers used to purposely overexpose their photos to generate this effect. When a negative or a print is overexposed, parts of the image will invert their color. Black will become white, green will become red.

There is a threshold where any part that passes that threshold will receive the effect.

One of the things Brad told me about GPUImage was that many of the filters in GPUImage are composed of many smaller filters. It’s like building blocks. You have a base number of simple effects. These effects can be combined together to generate more and more complex effects.
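
As a quick illustration of that building-block idea, here is a minimal sketch of chaining two of the stock filters together on a still image. This is my own example rather than anything from the GPUImage documentation; the helper function is hypothetical, but the filter classes and methods it uses are part of the framework:

#import <GPUImage/GPUImage.h>

// Hypothetical helper that runs a still image through two stock filters chained together.
UIImage *SepiaInvertedImage(UIImage *inputImage)
{
    GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
    GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
    GPUImageColorInvertFilter *invertFilter = [[GPUImageColorInvertFilter alloc] init];

    // Each filter becomes the input of the next one in the chain.
    [source addTarget:sepiaFilter];
    [sepiaFilter addTarget:invertFilter];

    // Render the chain and pull the final result back out as a UIImage.
    [invertFilter useNextFrameForImageCapture];
    [source processImage];
    return [invertFilter imageFromCurrentFramebuffer];
}

Every filter in the framework can be snapped together like this, which is why having a solid set of small, single-purpose filters is so valuable.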

Looking at the algorithm for solarization, I noticed it requires two things:

  • An adaptive threshold to determine what parts of the image receive the effect
  • An inversion effect on the pixels

I opened up GPUImage to see if there were already filters that do those things. Both of these functions already exist in GPUImage.

Now I needed to figure out how they work so that I could combine them into one, complex filter.

GPUImageColorInvertFilter

Since the color invert filter is the simpler of the two filters, I will be looking at this one first.

Since this is a straightforward filter that does one thing without any variables, there are no public-facing properties in its header file.

Here is the code for the fragment shader that we see in the implementation file:

NSString *const kGPUImageInvertFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 
 uniform sampler2D inputImageTexture;
 
 void main()
 {
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    
    gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.w);
 }
);

These two lines will exist in every fragment shader in GPUImage:

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

These are the lines where you bring in the image that is going to be filtered and the coordinate of the specific pixel that the fragment shader is going to work on.

Let’s look at the rest of this:

void main()
{
   lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
   gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.w);
}

The first line simply obtains the RGBA values at the specific pixel that you are currently processing. We need that for the second line of code.

The gl_FragColor is the “return” statement for the fragment shader. We are returning what color the pixel will be. Texture colors are normalized, meaning every channel exists on a continuum between 0.0 and 1.0. In order to invert a color, we subtract the current value from 1.0. If your red value is 0.8, the inverted value would be (1.0 - 0.8), which equals 0.2.

This is a simple and straightforward shader. All of the shader-specific logic is in the gl_FragColor line, where we invert the colors.

Now let’s move on to the more complicated shader.

GPUImageLuminanceThresholdFilter

The Luminance Threshold Filter is interesting. It checks the amount of light in the frame and if it is above a certain threshold, the pixel will be white. If it’s below that threshold, the pixel will be black. How do we do this? Let’s find out.

The Luminance Threshold Filter, unlike the color inversion filter, has public-facing properties. This means that the header file has some actual code in it that we need to be aware of. Since this filter is interactive and depends upon input from the user, we need a way for the shader to interface with the rest of the code:

@interface GPUImageLuminanceThresholdFilter : GPUImageFilter
{
    GLint thresholdUniform;
}

/** Anything above this luminance will be white, and anything below black. Ranges from 0.0 to 1.0, with 0.5 as the default
 */
@property(readwrite, nonatomic) CGFloat threshold; 

@end

Our threshold property will receive input from a slider in the UI, and that value then has to make its way into the shader as a uniform so the shader can use it in its calculation.
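
For context, here is roughly what that wiring looks like in the implementation file. This is a simplified sketch of the pattern GPUImage filters follow rather than a verbatim copy of the class: the init method looks up where the threshold uniform lives in the compiled shader program, and the overridden setter pushes new values down to it.

- (id)init;
{
    if (!(self = [super initWithFragmentShaderFromString:kGPUImageLuminanceThresholdFragmentShaderString]))
    {
        return nil;
    }

    // Find the "threshold" uniform in the compiled fragment shader.
    thresholdUniform = [filterProgram uniformIndex:@"threshold"];
    self.threshold = 0.5;

    return self;
}

- (void)setThreshold:(CGFloat)newValue;
{
    _threshold = newValue;

    // Push the new value into the shader so it is used for the next frame.
    [self setFloat:_threshold forUniform:thresholdUniform program:filterProgram];
}

With that wiring in place, here is the fragment shader itself: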

NSString *const kGPUImageLuminanceThresholdFragmentShaderString = SHADER_STRING
( 
 varying highp vec2 textureCoordinate;
 
 uniform sampler2D inputImageTexture;
 uniform highp float threshold;
 
 const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);

 void main()
 {
     highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     highp float luminance = dot(textureColor.rgb, W);
     highp float thresholdResult = step(threshold, luminance);
     
     gl_FragColor = vec4(vec3(thresholdResult), textureColor.w);
 }
);

This has a few more components than the color inversion code. Let’s take a look at the new code.

uniform highp float threshold;
const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);

The first variable is the threshold value we are receiving from the user. We want this to be precise because we’re using it to determine whether a pixel should receive the effect or not.

The second line has what looks like a set of “magic numbers.” These numbers are the standard weights for computing luminance from RGB. There are explanations for them here and in the iPhone 3D Programming book by Philip Rideout. We will use this constant to compute the luminance of the current fragment in the next bit of code.

void main()
{
    highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    highp float luminance = dot(textureColor.rgb, W);
    highp float thresholdResult = step(threshold, luminance);
     
    gl_FragColor = vec4(vec3(thresholdResult), textureColor.w);
}

The first line does the same thing as it did in the color inversion filter, so we can safely move on to the next line.

For the luminance variable, we’re encountering our first function that doesn’t exist in C: dot(). dot() takes each red, green, and blue component of our current pixel and multiplies it by the corresponding weight in our “magic” constant. Those products are then added together to produce a single float.

I have tried hard to find a good explanation of why the dot product exists and what it does. This is the closest thing I can find. One of my goals with this series of blog posts is to take things like the dot product, where you can explain what it does but not why you are using it or what functionality it fulfills, and dig into them. For now, hopefully this is enough.
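
To make that concrete, here is the dot product written out by hand. This is just my own expansion of the line above, not code from the framework:

// dot(textureColor.rgb, W) is a weighted sum of the three channels:
// luminance = (r * 0.2125) + (g * 0.7154) + (b * 0.0721)

// For a pure green pixel (0.0, 1.0, 0.0):
// luminance = (0.0 * 0.2125) + (1.0 * 0.7154) + (0.0 * 0.0721) = 0.7154

// For a pure blue pixel (0.0, 0.0, 1.0):
// luminance = 0.0721

The weights reflect how sensitive our eyes are to each channel: green contributes the most to perceived brightness and blue the least, which is why a bright green pixel passes a 0.5 threshold and a pure blue one does not.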

Next up, we have the threshold result. This is where we are doing our only conditional logic in the shader. If you recall, this shader makes a determination if each pixel should be white or black. That determination is being made here.

The step() function compares two floats. If the second number (our luminance) is greater than or equal to the first (our threshold), the result is 1.0 and that particular pixel is bright enough to pass the threshold requirements. If the luminance falls below the threshold, the pixel is too dim to pass and the result is 0.0.
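
Put another way, step() is just a branchless version of a comparison you could write yourself. The line in the shader behaves like this (my own restatement, not code from the filter):

// step(threshold, luminance) is equivalent to:
highp float thresholdResult;
if (luminance < threshold)
{
    thresholdResult = 0.0;  // too dim, fails the threshold
}
else
{
    thresholdResult = 1.0;  // bright enough, passes the threshold
}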

Finally, we map this result to our “return” statement, the gl_FragColor. If the pixel passed the threshold, then our RGB values are (1.0, 1.0, 1.0), or white. If it was too dim, then the RGB values are (0.0, 0.0, 0.0), or black.

GPUImageSolarizeFilter

According to the algorithm that describes the solarization process, you use a luminance threshold to determine if a pixel receives an effect or not. Instead of making the pixel either white or black, we want to know if the pixel should be left alone or if its color should be inverted.

The only line of code that the color inversion filter has and the threshold filter doesn’t is its gl_FragColor line:

// Color inversion gl_FragColor
gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.w);

// Luminance Threshold gl_FragColor
gl_FragColor = vec4(vec3(thresholdResult), textureColor.w);

I am embarrassed to say how long it took me to figure out how to combine these two filters. I thought about this for a long time. I had to think through all of the logic of how the threshold filter works.

The threshold filter either colors a pixel black or white. That is determined in the thresholdResult variable. This means that we still need the result in order to figure out if a pixel receives an effect or not, but how do we modify it?

Look at this part of the gl_FragColor for the color inversion:

(1.0 - textureColor.rgb)

Where else do we see 1.0 being used in the threshold shader? It’s the result of a successful step() function. If it fails, you wind up with 0.0. We need to change the gl_FragColor from the color inversion filter to have the option to return a normal color or an inverted color. If we subtract 0.0 from the texture color, then nothing changes.

This is the final code I came up with to implement my Solarize Shader:

NSString *const kGPUImageSolarizeFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 
 uniform sampler2D inputImageTexture;
 uniform highp float threshold;
 
 const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);
 
 void main()
 {
     highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     highp float luminance = dot(textureColor.rgb, W);
     highp float thresholdResult = step(luminance, threshold);
     highp vec3 finalColor = abs(thresholdResult - textureColor.rgb);
     
     gl_FragColor = vec4(finalColor, textureColor.w);
 }
);

This is primarily composed of the same code as the luminance threshold shader, but instead of mapping each pixel to black or white, I am using that result to check whether I am inverting my colors. If the colors need to be inverted, then thresholdResult is 1.0 and our formula moves forward as usual. If thresholdResult is 0.0, then our texture color remains the same, except now it is negative, which is why I wrapped it in an abs() function.
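
Plugging some numbers into the finalColor line makes the trick easier to see; this is just my own worked example:

// Red channel of 0.8 on a pixel flagged for inversion (thresholdResult = 1.0):
// abs(1.0 - 0.8) = 0.2              -> inverted, exactly what the invert filter would produce

// Red channel of 0.8 on a pixel left alone (thresholdResult = 0.0):
// abs(0.0 - 0.8) = abs(-0.8) = 0.8  -> unchanged, abs() just strips off the minus sign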

Completely solarize shader. Will try to get a better picture later.

Takeaways

One of the big things I keep harping on with this blog over the last year or so is that to be a good engineer, you must have an understanding of what you’re trying to accomplish.

Breaking down a shader that you want into an algorithm helps you to tease out what pieces of functionality you need to get the result you want. In a lot of cases, the smaller pieces of functionality are already out there in the world somewhere and you can reuse them to build more complex things.

You also need to be able to read through those shaders to figure out how they work so you can know where the pressure points are to change the code. Since shaders are so small, you can usually take one line at a time and break down what it does. If there is a function being used that you’re unfamiliar with, Google it. You don’t always need to understand all of the why behind something in order to implement it, as with the dot() function here: I had a good enough grasp of what it did to understand why it was needed in my shader, which was all I really needed.

This stuff can be intimidating, which is why it’s important to spend some time figuring out why something works and, more importantly, figuring out what YOU want to do with it.

I will be following this post up tomorrow with instructions for how to add a shader to the GPUImage framework, including some other parts of the shader code that I did not go over here.

We Didn’t Start the Fire(Wire)

I have written about this before, but at my current job my boss and I are rewriting our robotics control software in Swift. There is an excellent blog post here that explains why we are doing this.

This is the camera setup we have on our robotics systems. Cameras help with dispenser positioning and we support both video and image capture for our users.

We have open sourced a few components after implementing them for this rewrite. This blog post details the most recent one we have completed and open sourced: a wrapper class that lets us connect to an external camera conforming to the IIDC standard. The project can be found here.

Cameras are an important feature in our robotics systems. Users use the camera to help position their dispensers and to capture media. Videos and images of the dispensing process have been used in papers and documentation of scientific research, so continuing to support this functionality is vitally important.

What is the Goal?

Back when the code was initially written in 2007, AV Foundation and GPUImage did not exist. There was not really an easy way to hook up an external camera to an application. Additionally, the standard for rapid data transfer at the time was Firewire.

The fact that there were no easy solutions meant that our code was overly complex. There were much easier ways to connect to a camera and run the video through a filter that we simply couldn’t implement because our code touched too many other things. We set out to simplify the code in our rewrite.

One major goal of this project was to make it easier to add additional cameras while still supporting the legacy cameras out in the field.

Since this company has been around for over a decade, we do have legacy hardware out in the field that we still need to support. Currently we have three different kinds of cameras in the field associated with our robotics systems: Unibrain, Point Grey Flea2, and Point Grey BlackFly. At some point in the next year or so we will need to support a fourth camera because our current camera, the BlackFly, has been discontinued.

What is IIDC 1394?

IEEE 1394 is a serial bus standard for high-speed, real-time data transfer. USB is another serial bus standard, and it is more widely adopted, in part because IEEE 1394, aka FireWire, was developed and trademarked by Apple.

Our first camera type, the Unibrain camera

Even though FireWire ports are no longer available on Macs being sold today, there are still many cameras that conform to the IEEE 1394 standard. Our current Point Grey BlackFly cameras have a USB 3 plug but they conform to the IEEE 1394 standard.

IIDC is the FireWire data format for live video. In order to be able to interface with an IIDC compliant camera, we have to conform to their standard.

There is a library for interfacing with IEEE 1394 cameras, libdc1394. We have integrated that library into our project and adapted it in order to be able to communicate with our cameras. This library’s functionality is what we are wrapping in our GPUImageIIDCCamera class.

We did not integrate the GPUImageIIDCCamera class into the primary GPUImage framework. The libdc1394 library has a less permissive license than GPUImage does, so for legal reasons the class could not be merged into GPUImage proper and must remain a separate entity.

Objective-C? Why Not Swift?

Rewriting a legacy piece of software that integrates with hardware is something of a challenge. Since Objective-C is a superset of C, there was a lot of low-level C programming that could easily be integrated into the previous iteration of the control software but that now presents some challenges when we attempt to implement it in Swift.

One such challenge was figuring out how to interact with our hardware. Prior to attempting to connect to and control our camera, we had to determine how to talk to our microcontroller. We were able to do this within the current constraints of Swift, but there is one feature of the C language that Swift does not yet support, which is mutable function pointers.

Since this was an integral part of our process, it was necessary to write this class in Objective-C. This, for the record, is the first time in our six-month process that we encountered a problem we could not solve in Swift. It didn’t prevent us from implementing this feature; it simply meant that we had to finagle a few things to fully integrate the Objective-C class into our control software code.

What do we Need the Code to do?

These are the things we needed this class to accomplish:

  • Connect to the camera
  • Capture frames
  • Set up the proper video format for the camera type
  • Remap the YUV colorspace to RGB colorspace (see the sketch after this list)
  • Get and set camera settings for things like brightness and saturation
  • Handle camera disconnection
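
The colorspace remapping is worth a quick illustration. The camera hands us frames in a YUV colorspace and GPUImage works in RGB, so every pixel has to be converted somewhere in the pipeline. Here is a minimal, hypothetical GLSL sketch of that conversion using the standard BT.601 full-range coefficients; the texture names are placeholders, and the real class does this inside a shader Brad wrote rather than exactly like this:

varying highp vec2 textureCoordinate;

// Placeholder texture names; the real class feeds in planes captured from the camera.
uniform sampler2D luminanceTexture;    // Y plane
uniform sampler2D chrominanceTexture;  // interleaved U/V plane

void main()
{
    highp float y = texture2D(luminanceTexture, textureCoordinate).r;
    highp float u = texture2D(chrominanceTexture, textureCoordinate).r - 0.5;
    highp float v = texture2D(chrominanceTexture, textureCoordinate).g - 0.5;

    // Standard BT.601 full-range YUV -> RGB conversion.
    highp float r = y + 1.402 * v;
    highp float g = y - 0.344 * u - 0.714 * v;
    highp float b = y + 1.772 * u;

    gl_FragColor = vec4(r, g, b, 1.0);
}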

Challenges

One of my personal challenges was simply understanding the code. Since much of our functionality would be done differently in the new code, I couldn’t just port it over from the old version of the software. It was important to get a sense of how to wrap the IIDC functionality in such a way that it would be easy to implement new cameras into our process. It was also important to figure out what lifting would be done by GPUImage and what would be done by the IIDC camera class.

Our current camera, the Point Grey BlackFly

Additionally, Brad did some extra work on our version of libdc1394 and his changes had not been documented. I couldn’t use the general documentation, what little of it there was, for the code.

Initially we thought that we would not need to use any OpenGL to process the video frames. It was later determined that a shader would be necessary for finding the frame size. This was beyond my present OpenGL experience, so Brad needed to write the necessary shader to accomplish this.

We also had to deal with different video modes. There are about thirty types of video modes we have access to, but all of these boil down to one of two types: Format 7 or anything else.

Format 7 allows you to set the frame size and the colorspace. All of the other video modes specify those things in their mode name.

Point Grey Flea2 camera mounted on our Desktop system

Not all cameras support Format 7. Our first camera, the Unibrain, does not support Format 7. So we needed to make sure we were able to connect and use both Format 7 and non-Format 7 cameras.

We also had to deal with the fact that we were talking to a piece of hardware. Settings like the video mode, brightness, and saturation are all set on the physical piece of hardware. We can communicate with the hardware using C functions, but the point of wrapping this class is to avoid having to touch the messy underlying C library.

Each property associated with the camera that we can set has overridden getters and setters. We override them in order to make sure the camera and the application are on the same page about what each expects the settings to be. When you drop this class into another application, it appears to work the same way for the programmer with all the nasty bits tucked away in accessor methods.
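
To give a flavor of what that looks like, here is a simplified, hypothetical sketch of one accessor pair. It is not the actual code from our class; it assumes an instance variable called camera of type dc1394camera_t * that was set up when we connected, and it uses libdc1394’s feature functions:

#import <dc1394/dc1394.h>

// Hypothetical brightness accessors that hide the C library behind an ordinary property.
- (void)setBrightness:(NSInteger)newBrightness;
{
    // Push the new value down to the physical camera.
    dc1394error_t error = dc1394_feature_set_value(camera, DC1394_FEATURE_BRIGHTNESS, (uint32_t)newBrightness);
    if (error != DC1394_SUCCESS)
    {
        NSLog(@"Could not set brightness on the camera: %d", error);
    }
}

- (NSInteger)brightness;
{
    // Ask the hardware directly instead of trusting a cached copy of the value.
    uint32_t currentValue = 0;
    dc1394_feature_get_value(camera, DC1394_FEATURE_BRIGHTNESS, &currentValue);
    return (NSInteger)currentValue;
}

The real accessors presumably do more than this, but the shape is the same: from the outside the property looks ordinary, and the C library never leaks out of the accessor methods.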

Final Thoughts

When I worked on figuring out libxml2 at the beginning of the year I thought that was the hardest thing I would work on. That was just a warm-up for this project.

This was a huge challenge for me personally. I think trying to figure this out has been the hardest thing I have done in my career so far. On top of how difficult it was, not having worked with Cocoa since 2014 made getting back into the swing of Cocoa development a bit of a challenge.

I hope that as I progress in my career it gets easier for me to pivot between low- and high-level development. I wish I could have done this entire thing by myself, but I understand that we have deadlines that need to be met. I am proud of the amount I was able to do here and of the growth I have experienced as a programmer by pushing myself to work on something this difficult.

You Kids, Get off my Virtual Realty!!

Over the weekend I was surprised with a gift I didn’t think I would ever get: New ports of a bunch of my favorite games from when I was in my impressionable tweenaged years. First among these games was “Sam and Max Hit the Road.” Closely following this cultural touchstone were “Indiana Jones and the Fate of Atlantis” and the “Legend of Kyrandia” trilogy.

I became acquainted with the point and click adventure game genre through my brother. When I was in junior high my dad bought my brother a computer for Christmas and bought me a wooden chess set. I am not bitter about this. Much…

Anyway, he was working through Day of the Tentacle. I would walk by and wonder what the hell it was he was playing. It looked weird and creepy. It is weird and creepy, but at the time it didn’t look weird and creepy in an endearing way.
One day I got curious and started asking him about what was going on. He was stuck on a puzzle in the game but he couldn’t explain to me what had happened up until then, so I went on the computer and started my own game.

Holy crap, this game was amazing! There are so many weird and surreal things going on in this game that it may have irreparably warped my sense of humor. Possibly more so than it was warped before. A game with time traveling port-o-potties, a valley dude hanging out with George Washington, and a plot point that requires you to freeze and microwave a hamster is more than a little sick and twisted.

We worked in parallel on our game. One of us would make progress and we would share it with the other person. It took us a really long time to get through that game. It feels like it took months. It might have, I really don’t remember.

When an artifact comes along, you must whip it!

After polishing off Day of the Tentacle, we worked through Indiana Jones and the Fate of Atlantis. We thought there was only one path through the game and we had screwed it up by ditching Sofia halfway through the game. After working through the game a few times we realized that there were actually three successful paths through the game. That got us really excited to go through the game and replay it a few times to figure out how many different ways the game could be won.

I did want to make a brief mention of my bewilderment about the death of the Indiana Jones franchise. Fate of Atlantis proved that Indiana Jones could be a great franchise where you have a nice formula that is infinitely customizable without getting overly stale. I am saddened that the last few films felt like they had to do like character development or something. Indiana Jones totally could have been James Bond with archaeology. Such a missed opportunity.

The Tunnel of Love from Hell!

It took us a lot longer to get through Sam and Max. There was a point in the game where you had to go into the Tunnel of Love and hit a specific place on the wall at exactly the right moment in order to find the Mole Boy who wanted pecan flavored candies. We went crazy trying to get past this point in the game. We knew something was there, but we never hit the wall at the right moment. I think we worked on this game on and off for months. I think we restarted the game just to be able to play the game up until that point because we enjoyed the twisted sense of humor so much. In fact, we replayed up until that point so many times that there is a full two thirds of the game I barely remember because I played it all the way through just once or twice.

I don’t remember which one of us got past that point or if we did it together. I do remember we were both elated that we could finally continue on with the game and we celebrated that moment together.

The summer between seventh and eighth grade I encountered two games: Myst and Legend of Kyrandia. Legend of Kyrandia was another point-and-click adventure game, this one created by a company other than LucasArts. Our school had a summer enrichment program that many of us quickly realized meant that we could hang out at school and play computer games all day.
I don’t remember who found Legend of Kyrandia, but it very quickly became a favorite of everyone in the group of about ten of us. We were obsessed with this game. There is a point in the game where you get lost in these caves, and if you don’t light them properly you get eaten by animals. We all worked together to piece together a map of the entire cave, along with all the hidden objects you needed. When someone made progress in the game, we would quickly spread that new information to everyone else in the group. It took us a few weeks to work through the game, and it worked as something of a bonding experience for all of us that was immediately forgotten when school started up again.

My experience with Legend of Kyrandia was vastly different than my experience with Myst. I had to work through that game alone. I played it a lot because I thought the graphics were pretty. Myst is in fact one of the things that got me interested in 3D graphics and texture mapping. I really wanted to know how the worlds were made. Unfortunately, I didn’t get as far into the game as I would have liked. I didn’t realize you could leave the island until I bought a strategy guide. I thought you were just supposed to wander around and look at all the pretty scenery. I couldn’t understand why everyone thought the game was so amazing. After figuring out you could leave, I was far more excited about the game.

At this point, you may be wondering why I am rambling on about my lost childhood gaming experiences. I have a point. If you read through this spiel, you will notice that not once did anyone ever check the Internet to see what to do when we got stuck. If we got stuck, we just didn’t progress in the game.

Pro tip: Don’t stick your hand in a crack in the wall on an alien planet. Just don’t.

The only games I was able to get all the way through were ones that I worked on with at least one other person. I found an emulated version of Legend of Kyrandia and I tried working through it on my own, but I quickly got stuck in the caves, got bored, and just downloaded a map off the internet.

I find it mind boggling that my brother and I literally spent YEARS when I was a teenager working through these games. We would be stuck on puzzles for months. Yet we would sit there and just keep trying anything we could think of to get through the game.

When was the last time anyone ever spent a month working through a game? The last game my husband bought was Legend of Zelda: Skyward Sword. He spent about a week playing through the game, beat it, then threw it in a box and forgot about it.

Back when I used to work at Target I would bring my Nintendo DS to work. I had Lego Harry Potter to play on my breaks to blow off steam. I would only play that game when I was at work, as something to help me get through my day without going insane. After I had been working on it for a month, one of the back room guys came over and said, “Wait, you’re still working on playing that same game?!” To him it was inconceivable that anyone would spend a month playing a game without either giving up or beating it.

I don’t pretend to be any kind of gamer, but are games easier than they used to be? It seems to me like people used to spend weeks or months working through games. I read a blog post by a guy talking about working through one of the first Zelda games by coming home from school and being glued to the TV for weeks.

Waiting for the smoke monster to show up with the polar bear.

I am kind of sad that I don’t really see games out anymore that take months to get through. I am also really sad that I don’t get to work through a game with other people anymore. That summer working through that game was a really awesome experience. I have felt rather isolated from my classmates in school. I always did group projects on my own. Having an experience where we all worked together on something that we were excited about was a gift.

I don’t have this experience of working through games anymore, but I have found that I can get something like it when I talk to people about code. Right now my boss is working through functional Swift programming using Haskell design patterns and syntax. Sitting with him looking at the stuff he is doing and trying to catch up so that I can help out is surprisingly emotionally fulfilling.

I wonder if people who grew up with the internet will ever get a chance to work through a problem with someone where the answer isn’t instantly available online. One reason I am finding working on the Swift problem so exhilarating is that there isn’t a “right” or “wrong” way to do things yet. Coming from a school background, I’m used to the idea that the person who knows more than I do has a right answer to the problem we are supposed to solve for class. Being in a situation where that answer isn’t known yet is somewhat freeing. It gives these things we are doing meaning. We aren’t just doing mind games or mental exercises. This is it. This is why I learned to code, to solve a problem.

Working through those silly adventure games really gave me tenacity to keep working at something that I knew there had to be an answer to, even if it wasn’t immediately available. It also taught me how important sharing knowledge and collaborating is. None of us would have gotten through the game in the time we did if we hadn’t worked together and pooled our knowledge.

Giving information to someone who doesn’t have it costs us nothing. Working together we can do things we couldn’t do separately.

I haven’t opened any of my games yet. I am afraid I won’t remember how to do anything and I won’t have anyone to play them with. Maybe I’ll find someone to play with. Maybe not. Either way, I’m sure they will be harder than I remember them being.

So are we, Bernard. So are we.

My Life in Stitches

So I have an embarrassing thing about myself I want to confess. I have an incredibly terrible and subversive hobby. I have been living in fear of people finding out about it and judging me. Here goes…

One of my favorite hobbies is cross stitching.

At this point, you might be wondering why I think this is some subversive thing to confess to. I will tell you why.

I have been cross stitching since I was seven. Pretty much my whole life I have been led to believe this is something I should be embarrassed about.

My collection of projects finished but not framed over the last six years.

My father would continually tell me that I should stop my cross stitching hobby and take it back up again after I retire. Looking at how tiny all the holes and the patterns are, I am highly skeptical that this is a good course of action.

I would bring my cross stitching to school to do during study halls and I would be constantly ridiculed by my classmates for doing it. So, like a good teenaged girl, I caved to peer pressure and hid my hobby away.

When I got married I had several very large and complex pieces that I worked years on framed. My husband wouldn’t let me hang them in the house for several years because he hated them. I still have a multitude of projects that I have finished and thrown into a bag that is slowly getting larger and larger over the years.

I have always felt like I was a weird, socially aberrant person because I have had a fascination with filling in little boxes with color and making a pattern out of them. I hide my carefully organized and structured projects in metal lunch boxes and pray that no one asks me what is inside.

So what does this have to do with anything?

Over the weekend I attended CocoaConf Columbus. Our first keynote speaker was Mark Dalrymple. During his excellent keynote, he talked about people embracing their hobbies. One of the hobbies he threw out was cross stitching. This threw me for a loop. Cross stitching has fallen out of favor over the last ten years. Also, this was a tech conference! People don’t talk about sewing at a tech conference!

I have painfully learned over the years that tech people are not supposed to cross stitch. Back when I was less experienced, I would go to interviews and be asked what I did for hobbies. I would say I cross stitch and there would be an immediate reaction on the face of the interviewer. I could tell that they mentally determined that I was not a tech savvy person.

There is this stereotype that women who cross stitch (and it is mostly women) are usually stay-at-home mothers or elementary school teachers. I am a British history buff, and one very painful memory I have is reading about an attempted coup against Mary, Queen of Scots. Mary was an accomplished needleworker. When she was locked up, one of her captors sneered at her that she would have plenty of time for her needlepoint now. Society sees needlework as something inherently tainted. People who enjoy doing needlework can’t possibly be fit to do anything important like run a country. Leave that to the other people who are more able to take on that responsibility.

You don’t see a lot of tech people talking about their cross stitching projects. Hell, knitting is much more socially acceptable than cross stitch! That might be because a lot of men do it, but that is a topic for another time.

Dragon project requiring over 50 threads, including metallics and beads.

Cross stitching is a far more concentration-heavy task than knitting is. Cross stitching, specifically counted cross stitch, requires a tremendous amount of organizational skill. I regularly complete projects that include fifty different shades of thread and over a hundred symbols that map to some combination of those colors. You learn very quickly to get organized or you give up. Over the years I have learned how to organize my thread to keep it from tangling or getting mixed up.

Counted cross stitch also requires you to look at a symbol on a grid, translate that symbol into a color, and render it onto a fixed rectangular surface of squares. Does this sound at all familiar? It is very similar to the process that takes place on the computer to render an image, except instead of bits I am using thread. I have been a human fragment shader for 25 years.

Do it yourself Doctor Who lunch box sewing kit! *Not guaranteed to be bigger on the inside.

Every skill that makes me a good programmer is a skill I learned from counted cross stitch. I learned to be patient while working on a very large project that takes several years. To give an idea of scale, the dragon picture in this post is a project that I draped over my 15-inch MacBook Pro and the edges spill over the sides by several inches. I learned how to mentally break down the project into manageable parts so that I did not get overwhelmed and confused. I learned how to organize my space and my tools to optimize my time. I learned to “debug” my designs because no matter how hard you concentrate, you will make mistakes. If you just keep following the pattern like a robot, your design won’t render properly.

This weekend was the first time I brought a counted cross stitch project to a conference and worked on it while listening to a session. I find that I can focus far better while cross stitching than I can while I have a computer in front of me because I get so focused on the screen that I tune out what is being said. I have been told it is rude to cross stitch in class or at conferences even though it is not considered rude to chat on Twitter.

I want to thank Mark D. for giving me the courage to write this post. I am tired of feeling ashamed of a hobby that has been a large part of my life for 25 years that has given me all the tools I need to continue to do what I want to do. I hope that one day people won’t be judged on their hobbies or how they decide to spend their free time, because often those are the things that shape us into the people we are.

Final Countdown to CocoaConf Columbus 2014

After months of prep work and a roller coaster of changes, I am in the final day before heading off to the first of my three August conferences.

I have encountered more issues with Metal than I was hoping to find. This is the first time I have had a paid developer account during the beta period. Prior to now I was so busy just trying to establish a foundation that I somewhat ignored the new stuff that was coming out. This is also the first time I have been on the ground floor for the early release of not one but two groundbreaking technologies.

I had to move more of my GPU programming talk over to OpenGL ES than I was planning to. I don’t think that is a bad thing per se. The most important thing I wanted to do was to answer a very specific question about one aspect of OpenGL programming. The fact that Apple came along and changed everything about that made my talk both easier and harder. A lot of time was spent explaining why Metal is necessary, and that fit into the parameters I wanted to address.

I will be giving this talk again in December. There will be a golden master of Xcode 6 at that point in time. I hope that it will be stable enough at that point that I can speak more about how to do things in Metal specifically rather than just ambiguously saying “This is how this would work if it were working, but it isn’t.”

I am giving my talks later today at Bendyworks. Bendy has been very kind to let me come and practice my talks there. I have found the feedback I get from them to be invaluable. I have also found that I am far less nervous once I have performed the talk at least once in front of real people and not just my pugs.

Speaking of my pugs, I am not going to see them for a week and I am very sad about that. I am going to miss my little buddies. Such is life.

I still have not packed. I need to pack sometime today. I also have to go to our Swift user group meeting to make arrangements with the people I am carpooling to Ohio with.

So I have a half dozen tasks to do today. Just need to take them one at a time to avoid feeling overwhelmed.

This is really stupid, but I keep forgetting that I do these talks because I love traveling to the conference and meeting new people. It’s hard to remember that this is going to be an amazing and awesome experience because I am putting a lot of pressure on myself to do a good job with my talks. I need to make sure I take some time to chill out and not worry so much about what I am doing.

Don’t panic.

Looking forward to seeing all my peeps at CocoaConf Columbus and That Conference in Wisconsin Dells!!

Lexical or Preprocessor Issue

So, today was the day I decided to bite the bullet and start working on my Metal demo for CocoaConf Columbus and 360|iDev.

Since a large focus of my talk is on GPUImage, I am hoping to put together a light Metal version of GPUImage that processes an image using a series of filters. I want to write between three and five filters that are easily stacked on one another that have a GPUImage counterpart in order to test how fast Metal processes images compared to GPUImage.

I went to look at what sample code is available from Apple for Metal. To my delight, I saw that there was an image processing base project. It includes one hardcoded filter that changes an image to black and white. I should be able to go into this project, add my filters, and add some UI elements allowing me to apply the filter shaders I write.

Today I opened the sample code. Immediately, there was an error.

“Lexical or Preprocessor Issue: QuartzCore/CAMetalLayer.h not found.”

This is why we can’t have nice things!!

Huh. That is inconvenient.

Did some digging. Refrained from asking this question on Stack Overflow because the last time I asked a question about the betas I got a snide person telling me to go somewhere else. Headed to the Dev Forums and found this thread.

Apparently, for the time being, there is no support for Metal in the simulator. There should be support for Metal if you have an A7 device like the iPhone 5S (which I have) that is running the iOS 8 beta.

I have not yet updated my phone to the beta. I know we are getting close to the point where it will be released, so it isn’t a huge thing to update to the beta; I just feel like I have no guarantee that stuff will work on there properly even after I update.

I must say that this latest wrinkle is not doing anything to sell me on Metal.

Metal only works on iOS devices with A7 chips, and on top of that it won’t even work in the simulator. I usually use the simulator in my talks to demonstrate things I am doing, but now I have to run everything on my device. I think I can use AirPlay to show what the screen looks like, but that is one more step that can go wrong in my process.

The other thing I am noticing in the sample applications is that most of the class implementation files end in “.mm”, which means that they are explicitly telling the compiler that there is going to be C++ code in them.

I have not worked with Swift as much as I should have, but I am wondering if this is going to be a problem with trying to write an app in Swift. I know that theoretically Swift is supposed to behave like Objective-C in that you can include C and C++ code, but I have not tried to write straight C code in a Swift class yet. Can you write C code in a Swift class, or is the support just that I can import a C class into a Swift-based project? How is this going to work with Metal?

At least with OpenGL ES you have the GLKit framework, which should work with Swift. I am interested to know more about this, but sadly I don’t believe I will be able to explore these issues before I give my talk in Columbus.

I am also trying to figure out just how much C++ I need to know to fully work with Metal. I thought that I needed to know about the same amount of C++ as you need to know of C to work with GLSL, but after seeing the number of classes that are implementing C++, I am slightly worried that I am going to be out of my depth for a while.

These are things I am going to have to take into consideration and disclose during my talk. I know most of these issues will resolve themselves in the next few years, it is just slightly frustrating to sit on the sidelines trying to figure out how to make it work here and now.

Fortune favors the brave.

Heavy Metal

Hair Force One announcing Metal

I know that the big new hotness from WWDC 2014 for most people is the Swift programming language. Swift has a large impact on me and on the project I am working on that I can’t publicly announce yet, but it was not the most intriguing announcement to me. The thing that really captured my attention was Metal.

I have been interested in learning OpenGL ever since I heard about it. Last year I had to make the terrible choice between learning OpenGL and learning Core Audio, because it would be complete idiocy to try to learn both at the same time. Since Chris Adamson didn’t write a book on OpenGL, I chose to learn Core Audio. It was the first programming book I read cover to cover, and I got to spend a day with him in Boston at CocoaConf doing Core Audio. That was an amazing experience, but it’s time to move on to the next thing.

I started to learn OpenGL ES in earnest back in March. I had a few books and I have primarily been reading the same materials over and over again hoping that my brain translates them.

GPUImage

One accepted way to learn OpenGL ES is to work on the GPUImage framework. There is a great blog post about how to write a custom shader here.

I decided a good way to learn OpenGL ES was to do a talk on GPUImage. Many of the tutorials I have seen on the framework basically just tell you how to plug it into your project and use the built-in filters. I wanted to do a talk about how the framework actually works and how to write your own filters. The creator of the framework, Brad Larson, lives in town. He has been extraordinarily generous with his time and knowledge about OpenGL ES. I pitched this talk and got it accepted at two different conferences: CocoaConf Columbus and 360|iDev in Denver. Both of these conferences are in August. I pitched these talks around May. I figured that would be a decent amount of time to figure all this stuff out.

Then, like everyone else, I got slammed by WWDC.

I know that I don’t have to talk about Metal. It’s only been publicly announced for a few months and it only works on a handful of devices. There was no reason I couldn’t just keep my original talk topic. No reason except I had some existential questions I wanted answered.

Every time I heard about GPUImage I heard it was faster than Core Image because it was programmed on the GPU. What does that mean? All of my research on OpenGL ES says to push as much work as possible onto the GPU, but none of it ever specifies what work the GPU is actually doing. I read a whole book on OpenGL ES without having any real clue what work is being done on the GPU.

The Defending Champion, OpenGL ES!

I really wanted to do a talk on how to optimize OpenGL ES. I also wanted to explore what exactly it was that Metal was doing that was so much better than OpenGL ES. I heard a lot of bemoaning about how slow and inefficient OpenGL ES was, but after talking to Brad about it for a little while, I wondered if the mob was wrong.

I am doing my first talk on Metal three weeks from today. I have exactly one slide from my talk done as of 1:00 this afternoon, but I am in the process of gathering the answers to my questions.

One resource I can’t recommend highly enough is the video tutorial series done by Ray Wenderlich. I had a list of questions in my head that I now have answers to because of his series on OpenGL ES. I am a quarter of the way through it, and subscribing to his video tutorials is the best money I have spent on tech resources this year. It is my hope that one day he will produce a 3D graphics programming series, hopefully after I know it well enough to be able to contribute to it!

So, I am going to take some time, but not too much, cataloging my work on this talk. I also have a debugging talk to complete in three weeks along with some obligations for my unnamed project. I think this is doable if I don’t have a panic attack or get distracted by squirrels.

The Famous Utah Teapot

I am planning to include links in my blog to any resource I have found to be particularly useful.

My goal before going to CocoaConf is to have a working Metal application with a few of the GPUImage filters translated from the OpenGL Shading Language to the Metal Shading Language. I would like to show the performance differences between GPUImage and Metal using the same project. I would also like to be able to intelligently explain GPU programming to people who are coming into this without knowing anything about OpenGL.

Three weeks. Two talks. Git ‘er done!