Coincidence, luck and getting my first job in gamedev

I recently hit a milestone in my professional career – it’s been exactly 9 years since I made the move into the games industry. This may not sound like anything profound or significant, but the circumstances of my transition from regular software development were rather interesting. I think it’s fair to say I was quite lucky and found myself in the right place at the right time. Today I want to share something I always bring up when talking to students or people who want to get a job in games REALLY badly but don’t know who to ask or how to start. This is the story of how I unintentionally and accidentally started making games.

In 2006 I moved to Linköping in Sweden, with the goal of finishing my studies. It was my last year and apart from taking several courses, all I had to do was come up with an idea for a master’s thesis, get it done and be on my way to a spectacular career in IT. At the time, I didn’t quite know what I wanted to do with my life. I felt like programming was “the” thing for me since I enjoyed it and found it pretty lucrative, but I had no clear idea of what exactly I wanted to focus on. I came to Sweden with roughly 40,000 SEK in my bank account, earned doing part-time work as a PHP developer, which would last me for a few months. I was also backed by a small scholarship from my home university but other than that I was on my own. If there’s one thing a person from a Central European country can say about Sweden, it’s that it’s very expensive. My savings were soon starting to dry up and the remote contracting I did at the time was simply not enough for me to make it through a month, so I decided to start looking for a job in Sweden that would hopefully pay more than what I was making. Soon enough, I found a small consulting company several blocks away from where I lived and they decided to hire me as a Python programmer. The pay was a mind-blowing 15,000 SEK a month, a completely different ballpark from the 2,000 SEK I was making as a contractor for a Polish company. I took the job and was very happy with it.

Sadly, this didn’t last very long. Shortly before Christmas 2007 I was let go. Business had started going badly for the company and they had to cut costs, starting with low-tier employees. Anyone who has been through being fired knows how unpleasant it is. It’s even worse when you completely don’t see it coming, which was how I felt. I did manage to save up a bit over the few months I got to work there, but it was not enough to last until my thesis was done. I was mentally pretty shattered and depression kicked in quickly. To take my mind off it, I turned to the best remedy I could get at the time: alcohol, drugs and online games!

Miniclip.com is a website with small Flash games that don’t require much focus or time to play. Back then it was extremely popular and for many developers it was the best chance they could get to actually make some money on a game and get it noticed by a wider audience. For me, however, it was an anti-stress device and something that kept me occupied and distracted from my everyday financial problems.
Among the dozens of games that I played, there was a very specific bunch I used to replay over and over. Coincidentally, they were made by the same two companies: Hammock ADB and Numbat Studios. Now that I think about it, I can’t really say what exactly was the “thing” that made me stick with their games. They were simple in concept, pretty and elegant, with a slight nostalgia factor since some had a retro feel to them. The rules were intuitive enough for a player to learn without a tutorial and the controls were flawless. None of the games had a particularly involving story but the overall theme of each product was enough for me to spend hours on end playing them. Whatever the reason, I decided to look these guys up and learn a bit more about them. This is where things started getting interesting!


Games by Hammock ADB and Numbat Studios were what kept my spirits high during unemployment depression.

The first surprising fact about Hammock and Numbat was that they turned out to be companies based in Sweden. But it got better than that: it turned out their office was two streets away from my soon-to-be-former office, which completely blew me away! I enjoyed playing their games but it was then that, for the first time, I thought: “hm… I have nothing to lose, maybe they actually need a programmer?”. At that point finding contact e-mails was a no-brainer, so I decided to brush up my CV, write the best “hello!” email I could come up with and just give it a shot. I didn’t have high hopes since I thought I’d be dealing with AAA professionals who might just shrug me off. Remember, it was 2007 and the indie developer scene was pretty much non-existent, with a few minor exceptions. Unity was around but not yet as relevant as it is today, and using it for your small project cost an insane amount of money. Unreal Engine was out of mere mortals’ reach, so if you wanted to make a game you’d either have to team up with someone and build your own tech or use mediocre tools. Also, I wasn’t 100% sure making games was what I really wanted to do. Despite my doubts, this is what I sent them:

A few days passed and I got a response from Tomas, though it was not what I had silently hoped for:

So, no job, but there was still hope and I should contact “someone who may know something” at a company I didn’t even know existed in the area (something that made me realize I should improve my Google-fu!). I wrote another email and soon enough got a response:

And this, my friends, is when I felt like everything was predestined for me. Power Challenge was looking for a person with my exact profile and experience, so naturally I followed up on it and in the end got hired. This was also my first exposure to a brilliant interview process with no written test and no “whiteboarding” involved. I got a task to do at home, two days to complete it and report back. For someone fresh out of university with very little professional experience it was completely unbelievable. February 2008 was my first day at work and it was also my first “real” full-time job. This also led me to extend my stay in Sweden from the planned 1.5 to almost 3 years, but that’s a tale for a different day… 🙂

The moral of the story, and one big thing I learned, is never to underestimate yourself and to keep trying no matter what. Not gonna lie, in some circumstances it might also require a stroke of luck or knowing the right people, which in my case was a subtle mix of the two, since Tomas from Hammock pointed me in the right direction and he happened to live and work in the same city as I did. Today it might not make that much of a difference since our online presences have no physical locations, but being able to meet someone in person will definitely help. Another thing: getting a job in the industry you want to work in might not start from the exact position you want, but getting your foot in the door is always the first step. Even though PHP development wasn’t my dream gig (I wanted to work with C++, not traumatic web development!) it was still an invaluable experience I wouldn’t swap for anything else. You may not work with the things you want right away but given enough time and persistence, you’ll get there eventually. If it worked for me, it will definitely work for you too. My first gamedev job also opened doors to meeting people in the industry. I got a chance to work with folks from DICE, Ageia, NVidia and a bunch of other companies I wouldn’t even have dreamed of coming across in my professional career. A lot of these people have switched jobs since then – some moved to Apple, AMD, Microsoft or big gaming companies across the world. With some our paths have diverged, with others I’m still in touch. Never burn bridges and always try to live as peacefully with your co-workers as possible. You never know how your or their fate might turn out in the future!


Teaching is hard

Even though it’s been many years, I can still remember my first days at school as a little kid. I truly admired and looked up to teachers who, to me, were the living embodiment of knowledge. I think I was really lucky because I was taught by people who truly had a calling, and for a brief moment I even wanted to become a professor myself. However, as the years passed, I developed a feeling that I would not be a good person to share what I know with others. My patience was low and I found it tremendously difficult to discuss things that were obvious to me but a novelty to others. Becoming a teacher turned into a nightmare job for me and I quickly realized it was something I wanted to stay away from as far as possible.

And yet, 25 years later, I started giving lectures about game programming at a university and it’s WAY worse than I imagined it all those years ago.

You don’t realize how difficult some things are unless you try them. All of a sudden, you move from the position of a student taking notes to being the single person standing on stage, talking to others and hoping nobody out there will publicly shame you for the 10 mistakes you made when talking about X 15 minutes earlier. For a newbie teacher it’s a load of stress, especially if you want to impress the audience with your knowledge and, more importantly, keep their attention at maximum. Being a person who would rather listen than talk, I found speaking challenging, or rather: being able to speak in an interesting way. I quickly realized that it’s relatively easy to dictate a math book over the span of 1.5 hours, but how do I do it so that the students actually learn something and don’t fall asleep? Conveying a story about “this one time when I solved this neat problem Y using quaternions” and smuggling knowledge along with it is something I consider a craft in itself. As such, it takes me several days to prepare a lecture and roughly 20 PowerPoint slides to help me out. It’s incredibly intense and mentally draining. The day before my lectures I sometimes question whether I’m really fit for the job, and having all those mixed feelings about what I’m about to do doesn’t help me at all. The feeling of doubt persists right until the very next day. Even 5 minutes before the lecture starts, I still have doubts.

And then something magical happens. I walk through the door, get smiles from the entire room of students and some of them even ask questions about previous lectures. I feel that someone has caught on to what I wanted to share with them. Somebody cares and listens, somebody wants *ME* to tell them what *I* know. The motivation boost begins and along with it, my lecture. I talk. I show slides. I share stories related to what I want to teach them, hoping this will help people remember and correlate things with each other. Suddenly, they start asking questions – now I’m confident that someone is really listening to me. And let me tell ya – the questions they sometimes ask can get mind-boggling! One aspect of talking about things that are common knowledge to us is that we soon forget what it’s like *NOT* knowing about them. When learning a new skill or an algorithm, we still have a “fresh look” and it’s more intuitive to question things or dig deeper to get at the underlying meaning. Once we start applying the knowledge, it’s easy to forget the questioning part and a lot of things start being taken for granted – until you have to re-explain them to someone who wants to understand everything. This is what you’ll be getting with students – lots of uncomfortable questions, sometimes even about aspects of a particular problem you may never have thought about! Bottom line: if you want to know how much you don’t know about X – give a lecture about it to people who have no idea what X is!

As I’m still trying to figure out the best balance for a fun lecture, there’s also the matter of maintaining a proper student/teacher relationship. Having lived and studied in two different countries, I find this to be heavily dependent on the particular culture. Where’s the fine line between being a strict teacher and a friend? Should I be more formal or can I let the students call me by my first name? How do I make sure I’m not “too friendly”, so that the students don’t get too cocky with me and start caring less about their education? I live in a country where relationships like these are usually very formal, so my students were a bit surprised and intimidated by casual talks during breaks. On one hand I found this to be a good sign – it tells me that they do have respect for me as a teacher. On the other hand, having no “bond” between a student and a teacher makes the former more shy and reluctant to ask questions. A sad state of affairs in our formal education is that 99% of the time, students don’t truly realize the teachers are there *PRECISELY* so that they can talk to them and deepen their knowledge. Being less formal and a bit more casual seems to do the trick in my case and I can see positive attitude changes with every passing week. Students want to talk to me, ask me about things and learn from me. It’s heartwarming but at the same time it makes me a bit sad that I seem to be an exception in the generally unfriendly world of Polish education.

Yes, teaching is hard and can get stressful. Having only a few months of experience doing it, I still don’t think I’d ever want to be a full-time lecturer. However, despite the downsides and occasionally losing my voice after talking non-stop, I would still decide to go for it if I had the chance to turn back time. Even though it drains me physically and sometimes gets more difficult than my everyday programming job, it gives me a unique chance to help others with their future careers and possibly change their lives for the better. This is my biggest reward.


Introvert’s survival guide to attending industry events

Disclaimer: this post is written based on my recent experiences and things that worked for me personally – your mileage may vary!

I’ve been an introvert ever since I can remember. If you’re anything like me, big social events probably make you feel anxious, sometimes to the point of wanting to cancel your plans and just hide somewhere out of public view. This was one of the reasons I was on the fence about attending large community events, even the ones focused on my work and interests. I recently broke that barrier and crossed over into the “social land”, made it out alive and even enjoyed myself! If you work in a technical industry, you have likely already developed some form of social media presence, most likely on Twitter. You may have interacted with other tech people online, possibly made “virtual friendships” and if that’s the case – well done! However, sooner or later you will find yourself missing out on a lot of opportunities and new acquaintances unless you start attending industry conferences and social events. That includes talking to strangers, making the dreaded small talk and talking about yourself. This alone is an obstacle for a lot of reserved people and if you’re one of them – this text is for you!

1. If possible, choose an event that’ll cost you money to attend.

Earlier this year, Valve announced the next edition of their Steam Dev Days conference, which I was very interested in due to its main focus on VR. Since I’m quite enthusiastic about VR, I decided this would be *the* conference I would attend this year – and it gave me the chance to meet Twitter friends from Seattle at the same time, so I couldn’t have wished for a better location. I bought the ticket, planned the entire trip to the USA and was ready to go. All that time, the voice in the back of my head kept telling me: “What if it’s just a waste of time, you’re not good at socializing with masses of strangers, you won’t be talking to anyone since you’re so awkward!”. It wasn’t until the day just before the conference that I realized what I was about to do: I was literally going into a beehive filled with people I knew nothing about and had no idea how I’d handle it! Communication itself was not an issue, since I feel confident when speaking English (it might be even harder emotionally if you don’t!). What held me back was my “social awkwardness” and the usual problems introverts run into when finding themselves in a public situation like this.

Once I got in, I took a few minutes to relax and calmly think over my “strategy”. The very first thing that popped into my head was: “Ok, I’m *here*, this is *really* happening and I *don’t* want all of my trip expenses to go to waste. I *have* to make the best of my time while I’m here!”. This is very subjective and depends on the person but being aware that you spent money to get to the event can become the initial motivator to stand up to your shyness. If it works for me, it might very well work for you too, so be sure to attend an event that cost you money – *your* money. Nobody likes to waste cash on getting to a place where you just stand against a wall!

2. Watch people (discreetly!).

Walking around the entire conference area is a great chance to look at people, listen to their conversations and see how they behave in general. Finding myself in new surroundings, I usually take my time to get to know the place and familiarize myself with other people’s faces. If I get a chance to “eavesdrop” on conversations, it helps me learn a bit more about the topics they might be interested in once I’m “ready” to approach them. Give yourself an hour or two for this; you should start feeling a bit more comfortable after that.

3. Call Twitter friends.

If, like me, you’re unfortunate enough to travel alone, asking people on Twitter whether they’re attending the event is your first step! If you follow someone who shares the same interests, chances are they might be attending the same conference as you – this step is best performed a few days before the conference starts so you can plan your meetup. Meeting people from Twitter (especially if you’ve “known” them for a long time) is a surreal and great experience and it’s always easier to explore together.

4. Sit at an empty table.

If the event you’re attending serves meals, it’s likely there will be places to sit where you can have your food and drink. During breakfast time at Steam Dev Days, I quickly discovered that sitting alone at a table attracted people to me – an excellent option if you’re too shy to approach people first! I usually grabbed a coffee and a sandwich, sat down and “scouted” the area casually. Very soon afterwards, people were coming up to me asking if it was OK to sit down with me, and once they did, the conversations just took off naturally. A great side effect of this is that once I started talking to someone, people they knew began coming over, which created opportunities to meet new people the easy way. In the end, it took me about 10 minutes of sitting alone at the table to meet 10 new people without effort!

5. A conference is where people come specifically to meet other people.

This may be stating the obvious but we introverts seldom realize that conferences are primarily about meeting people, not going to lectures – especially if the latter are being recorded and later made accessible on YouTube! This is easily overshadowed by the false notion that others are there to judge you by your looks, the way you talk or move, or possibly any other reason you might come up with. The “table session” quickly made me realize that what people were primarily interested in was my work, the things I was working on in VR and the games I played. Once you dismiss your worries, it becomes a lot easier to approach people – just follow the same drill they do! If there’s someone demoing a game, walk up to them and start asking about technical details, like the engine they’re using, how big the team is, etc. Past that point, the conversation will go smoothly and you may meet a few extra people along the way! Soon you’ll begin noticing familiar faces nodding and smiling at you as you pass through the corridors, which is really great and makes the rest of the event a lot easier to handle.

6. Give yourself a break if you need it.

Let’s face it, even if we “break through” the antisocial wall, it’s still draining and exhausting in the long run. If the organizer offers a quiet room where you can cool down, don’t hesitate to use it. If there’s no such place, go outside for a bit and give yourself some alone time. Taking a short walk works for me every time.

7. Casually approach groups of (also popular) people.

I mentioned walking up to folks demoing their games, but approaching groups of people in other situations is, in my eyes, a different beast to handle. At this point I had had enough encounters to get over my reluctance to talk with strangers, but it still felt a bit awkward at first. What worked for me was casually walking over with a drink and just listening to the conversation, possibly adding something of my own if the opportunity arose. Most folks will be happy to share a talk, especially if it’s about a common area of interest/expertise. Just be sure not to force yourself into conversations, and as soon as you realize the topic doesn’t interest you at all, try to bail out politely (or change the subject if you feel it’s possible).
It gets a bit more complicated with “popular” people, since more often than not you may feel a bit stressed – especially if it’s someone you look up to. In times like this, it’s crucial to remember that even though they might be your personal heroes, they’re still as human as you and may even share the same feeling of awkwardness when approaching strangers. Keeping that in mind made it easier for me to talk to Valve employees and other “celebrities” I “knew”. Do not get discouraged if you get shrugged off though – some people get spoilt by their popularity or may simply not have time to talk to you. Just keep your head up and keep moving!

8. Beware of alcohol!

I don’t drink alcohol too often; in fact, in most social encounters I typically stay away from it. That is not to say that alcohol is inherently evil, but one thing you should remember is to know when to stop. For many of us, alcohol helps us socialize and loosens our tongues, and it is frequently abused by shy people. Whatever you do, don’t come to the event intoxicated. Don’t drink too much during afterparties – at some point, even when you’re not completely “fixed”, you will start talking incomprehensibly, so it’s important to know your limit. Above anything else – DON’T GET DRUNK. You’re entering a place filled with professionals and the last thing you want to do is ruin your own reputation and get yourself into serious trouble. Preferably, don’t drink alcohol at all and just go with the flow the same way you did from the start!

9. Report abuse.

This doesn’t really have anything to do with being an introvert but I felt it deserves a separate mention. I was lucky enough to encounter great people throughout the entire event, but there’s always a slim chance someone might want to abuse you or cause you serious discomfort. If this ever happens, either to you or someone else, never hesitate to report it to the organizers – this won’t make you look weak; rather, you might be preventing another person from facing the same abusive behavior. Stay safe and help others share the same positive experience.


std::vector or C-style array?

I recently overheard a rather interesting statement concerning programming and thought I’d share it with the world via a Tweet and a small counterexample. This started an interesting discussion:

What I believed would go by unnoticed spawned a series of interesting views on the topic – some of them missing my original point. Therefore, I figured it’d be a good idea to expand my thought into something longer than 140 characters. Note, however, that this is not a full dissection of the pros and cons of one solution over the other but a general opinion!

Original statement and code example

So the original statement claims that you should always use std::vector instead of a C-style array for data containment. The keyword “always” – the claim that it’s the desired and better solution in every instance – is what triggered my Tweet. To see how this works out, here’s an example – a short program that creates a static container of 10 integers, assigns them values and returns. First, let’s use the “dogmatic” approach:

// approach 1: using std::vector
#include <vector>

int main() { 
  std::vector<int> foo;
  foo.resize(10);
  
  for(int i = 0; i < 10; i++)
	foo[i] = i;
  
  return 0;
}

Alternatively, let’s do the same thing with a classic C-style array.

// approach 2: using C array
int main() {
  int foo[10];
  
  for(int i = 0; i < 10; i++)
	foo[i] = i;
  
  return 0;
}

Running both pieces of code through a compiler (unoptimized – I’ll explain why shortly) produces interesting results when looking at the assembly. Approach 1 produces output too long to be put in this post; with approach 2, however, we get this neat code:

main:
        push    rbp
        mov     rbp, rsp
        mov     DWORD PTR [rbp-4], 0
.L3:
        cmp     DWORD PTR [rbp-4], 9
        jg      .L2
        mov     eax, DWORD PTR [rbp-4]
        cdqe
        mov     edx, DWORD PTR [rbp-4]
        mov     DWORD PTR [rbp-48+rax*4], edx
        add     DWORD PTR [rbp-4], 1
        jmp     .L3
.L2:
        mov     eax, 0
        pop     rbp
        ret

“You forgot the -O flag!”

It’s true that the optimization flag can be passed to the compiler and, in all likelihood, this is what you’ll be working with most of the time. There are, however, a few specific reasons I deliberately chose not to demonstrate code with optimizations enabled:

1. Most developers work primarily with debug code during the development process. While your mileage may vary, chances are that your debugging sessions will be done on unoptimized builds with debug symbols most of the time.
2. Optimizations (even for debug builds) cost you build time.
3. DON’T RELY ON A COMPILER IF YOU CAN WRITE BETTER CODE YOURSELF.

Why should you care?

The reason I put emphasis on the last point is that a lot of modern-day programmers are taught to delegate all the heavy work to the compiler with the assumption that it will always turn out fine. What this belief leads to is, in the best case, crawling debug code. Take, for example, the first approach using -O3 and the counterexample, also with -O3. My personal belief is that you should rely on the -O flag only as a last resort (and even then take it with a grain of salt). To that extent, complex structures such as std::vector, std::array and others should only be used when it’s justified and doesn’t incur unnecessary performance penalties on the executed code. It is not, in fact, the high count of assembly lines that’s the main problem, but rather the heap allocations that take place when using STL containers. In many scenarios this is more penalizing than anything else, so unless a dynamically sized array is really needed, it’s not worth following established dogmas (whether STL is good for game programming is a completely different story).
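For completeness – and as one candidate for the “solution C” mentioned in the closing advice below – std::array keeps the fixed size and stack allocation of the C array while still offering a container interface (iterators, .size(), bounds-checked .at()). A minimal sketch of my own, not part of the original comparison:

// approach 3: using std::array - fixed size, lives on the stack like the
// C array (no heap allocation), but still works with <algorithm> and range-for
#include <array>

int main() {
  std::array<int, 10> foo;

  for(int i = 0; i < 10; i++)
    foo[i] = i;

  return 0;
}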

What should I do?

Don’t get too obsessive about the topic! I think it’s good practice to get your code to be as fast as possible without relying on excessive compiler optimizations, but on the other hand – the optimizer is, after all, a tool that is handed to the programmer. And always be a bit cautious and sceptical when somebody tells you that solution A is “always” better than B – chances are, there’s a solution C that bests them all! 😉


Writing a raytracer in DOS

TL;DR; It’s not as hard as people think! Full source code on GitHub.

Disclaimer: this is not a step-by-step introduction to raytracing, rather the fundamental components I needed to get it working in DOS. Sorry! 🙂 Check out the GitHub link if you’d rather jump straight into implementation details. And now, with that out of the way…

Some time ago, I decided to finally write my first raytracer, seeing as it’s such a hot topic in realistic computer graphics. If you look around, you’ll find tons of examples of how to accomplish this. The task is even simpler if you only want to focus on primitive shapes such as spheres and planes, so for a moderately skilled programmer a basic raytracer shouldn’t take too much time to implement. Since that doesn’t sound too exciting, I figured I’d raise the bar a bit and write the entire thing for DOS and VGA graphics – the platform I never got to truly code on when I was younger!

1. Figuring out VGA screen access

Raytracing is all about calculating the final color of each pixel on the screen. This intuitively makes us want to be able to manipulate each pixel in some nice, linear fashion. With modern APIs you can easily achieve this by accessing texture data; in DOS things get a bit more complicated. This is where mode 13h comes in!

Depending on the graphics mode, a sequence of consecutive pixels on the screen can be accessed in different ways. In mode 13h you get the start address of screen memory, and from there you can access the entire screen’s data as if it were an array of pixels:


// pointer to VGA memory in mode 13h
unsigned char *VGA = (unsigned char *)0xA0000000L;

static const int SCREEN_WIDTH  = 320;
static const int SCREEN_HEIGHT = 200;

int main()
{
    int x, y;
    unsigned char pixelColor = 0;   // placeholder - the raytracer will compute this per pixel

    // set graphics mode 13h
    _asm {
            mov ah, 0x00
            mov al, 0x13
            int 10h
    }
    
    for (x = 0; x < SCREEN_WIDTH; x++)
    {
        for (y = 0; y < SCREEN_HEIGHT; y++)
        {
            /* 
                Fetch pixel color here 
            */
            
            // draw the pixel!
            VGA[(y << 8) + (y << 6) + x] = pixelColor;
        }
    }

    return 0;
}

Setting pixelColor to an integral value in the range [0, 255] will fill the entire screen with the respective color from the VGA palette (more on that later). The (y << 8) + (y << 6) indexing is simply y * 320 – the screen width – written with shifts, a common trick at the time. A good start! Now to get some actual raytracing done. 🙂

2. The Raytracing

One excellent property of mathematical principles is that they can be applied to any programming language and platform, no matter how old or obscure it is. Here, it’s no different – in order to start off with raytracing, we need some basic representation of the shapes we want to put in the scene – planes and spheres in this particular case. We will also need to represent the ray itself to perform the tracing (and to make surface bouncing a bit easier):

typedef struct
{
    Vector3f m_origin;
    Vector3f m_dir;
} Ray;

typedef struct
{
    Vector3f m_origin;
    double  m_radius;
    int     m_reflective; // sphere is reflective - 1/0 
    int     m_refractive; // sphere is refractive - 1/0 
    double  m_color[3];   // RGB of the sphere
} Sphere;

typedef struct
{
    Vector3f m_normal;
    double   m_distance;
    int      m_reflective; // plane is reflective - 1/0 
    double   m_color[3];   // RGB of the plane
} Plane;


// scene we're going to raytrace
typedef struct
{
    Sphere spheres[NUM_SPHERES];
    Plane  planes[NUM_PLANES];
    Vector3f lightPos;  // light source position
} Scene;

/* see dt_trace.c on Github repo for implementation details of the following functions */
Vector3f reflect(const Ray *r, const Vector3f *normal);    
Vector3f refract(const Ray *r, const Vector3f *normal);
double intersectSphere(const Ray *r, const Sphere *s, Vector3f *oIntersectPoint);
double intersectPlane(const Ray *r, const Plane *p, Vector3f *oIntersectPoint);

// raytracing function
int rayTrace(const Ray *r, const Scene *s, const void *currObject, int x, int y);

The structs should be self-explanatory – every object is defined by the minimum amount of information needed to represent it mathematically. We also define a set of functions to perform reflection, refraction and intersection checks, as well as the rayTrace function, which will recursively call itself to determine where the ray eventually ends up. Playing around with reflection and refraction is not an issue either, since like everything else it can be easily determined with math. The final code is written in C, so we’re using integers to store boolean flags (though some will likely argue it’s a waste of space and a plain char or a short would suffice!). With all of the above implemented, I was able to trace my first sphere:


First render of a solid, raytraced sphere.
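The intersection routines live in dt_trace.c in the repository rather than in this post. Purely as an illustration of the idea – a hedged sketch, not the actual repository code, assuming hypothetical Vector3f helpers (vec3_sub, vec3_dot, vec3_add, vec3_scale) and sqrt() from <math.h> – a ray-sphere test boils down to solving a quadratic:

// hedged sketch of a ray-sphere intersection (see dt_trace.c for the real thing)
// vec3_* are hypothetical helpers operating on the Vector3f type used above
double intersectSphere(const Ray *r, const Sphere *s, Vector3f *oIntersectPoint)
{
    Vector3f oc;
    double b, c, disc, t;

    // vector from the sphere center to the ray origin
    oc = vec3_sub(r->m_origin, s->m_origin);

    // coefficients of |origin + t*dir - center|^2 = radius^2
    // (assumes m_dir is normalized, so the t^2 coefficient is 1)
    b    = vec3_dot(oc, r->m_dir);
    c    = vec3_dot(oc, oc) - s->m_radius * s->m_radius;
    disc = b * b - c;

    if (disc < 0.0)
        return -1.0;                // ray misses the sphere

    // nearest hit in front of the ray origin
    t = -b - sqrt(disc);
    if (t < 0.0)
        t = -b + sqrt(disc);        // ray origin is inside the sphere
    if (t < 0.0)
        return -1.0;                // sphere is entirely behind the ray

    *oIntersectPoint = vec3_add(r->m_origin, vec3_scale(r->m_dir, t));
    return t;                       // distance along the ray to the hit point
}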

3. Shading in VGA

Having mastered rendering of geometry, it was time to add some light and shading to the scene. In modern graphics this is (mostly) trivial – all color calculations can be easily done using the RGB channels, so it’s pretty straightforward to get the final pixel with all light sources accounted for. With VGA things are a bit more involved, since instead of RGB we’re operating with palettes.

Without going into too much detail, a VGA palette is a set of 256 entries (indexed from 0), each one representing a single color out of the available pool of 256 values. One may wonder at first how 256 colors could be enough “back in the day” – a lot of games certainly looked like they could handle a lot more than that! When DOS programming was still a big thing, there were a number of tricks circulating in the game industry. Palette swapping, color cycling and the fact that you could create your own palettes made it possible to fool people into believing they saw a lot more colors than what standard VGA could provide. Different graphics modes also had different capabilities, and some games were notorious for switching between them to get higher screen resolutions and more color values (Bullfrog’s “Syndicate” was one example). However, I digress…


Standard VGA palette. Courtesy of Wikipedia.

For the purpose of this demonstration I decided to use the standard VGA palette. For the available test scene, Lambert shading was sufficient and pretty easy to implement. The only problem to solve was mapping between the RGB values of each color and its respective palette index. One way to do this is to create a simple mapping array:

// standard Lambert shading (see dt_trace.c for implementation details)
// iRGB - input color
// oRGB - output color calculated with the consideration of light pos and normal vector
void lambertShade(const Vector3f *light, const Vector3f *normal, 
                  const double *iRGB, double *oRGB);

// RGB values of default VGA palette (mode 13h)
 
int VGAPalette[][3] = {
// R     G     B      // pal index - color
{ 0x00, 0x00, 0x00 }, // 0 - black
{ 0x00, 0x00, 0xAA }, // 1 - dark blue
{ 0x00, 0xAA, 0x00 }, // 2 - dark green
{ 0x00, 0xAA, 0xAA }, // 3 - dark cyan
// all remaining colors go here
(...)
{ 0x00, 0x00, 0x00 }  // 255 - black
};

So far so good! But considering that Lambert shading properly determines the final color, how do we map it back to a palette index so it displays properly on the screen? One (naive) way is to search for the color closest to the RGB values of the final, calculated pixel and return its index in the VGA palette, which gives us the “highest fidelity” achievable with the standard colors. This boils down to finding the Euclidean distance between two points, only in this case we’re not matching (x, y, z) coordinates but rather the (R, G, B) values of two different colors. The one with the smallest “distance” to the desired source color will have its index in the VGA palette returned:

// color quantisation using Euclidean distance
// srcColor is a set of 3 doubles: R,G and B values respectively
int findColor(const double *srcColor)
{
    // define max Euclidean distance as 3 * 256^2 + 1
    long int nd = 196609L;
    int i, palIdx = 0;

    // cycle through the entire palette and find color closest to srcColor's RGB
    for (i = 0; i < 256; i++)
    {
        long int r = (long int)(srcColor[0] - (double)VGAPalette[i][0]);
        long int g = (long int)(srcColor[1] - (double)VGAPalette[i][1]);
        long int b = (long int)(srcColor[2] - (double)VGAPalette[i][2]);

        // sqrt() not needed: it won't change the final evaluation
        long int d = r * r + g * g + b * b;

        if (d < nd)
        {
            nd = d;
            palIdx = i;
        }
    }

    return palIdx;
}

There are several optimizations that could improve the search speed for the nearest color. First, there are duplicated colors in the palette, so it’s not really necessary to search through the entire array of 256 values. Second, remember we’re using the standard VGA palette, so the colors are pretty much scattered across the entire range of 256 values. To make the lookup faster, one way would be to create your own palette with all similar colors placed right next to each other. Using custom palettes is also encouraged, since it gives you the possibility to tweak what the user sees on the screen and as such can improve the quality of the final image.
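To illustrate the first idea, here is a hypothetical helper (my sketch, not part of the repository) that collapses duplicate entries once at startup, so findColor() only has to scan the unique colors:

// hypothetical helper: build a list of unique palette entries once at startup
static int uniqueIdx[256];   // palette indices of unique colors
static int numUnique = 0;

void buildUniquePalette(void)
{
    int i, j, duplicate;

    for (i = 0; i < 256; i++)
    {
        duplicate = 0;

        // check whether an identical RGB triple was already stored
        for (j = 0; j < numUnique; j++)
        {
            int k = uniqueIdx[j];
            if (VGAPalette[i][0] == VGAPalette[k][0] &&
                VGAPalette[i][1] == VGAPalette[k][1] &&
                VGAPalette[i][2] == VGAPalette[k][2])
            {
                duplicate = 1;
                break;
            }
        }

        if (!duplicate)
            uniqueIdx[numUnique++] = i;
    }
}

findColor() can then loop over numUnique entries via uniqueIdx instead of all 256, returning uniqueIdx[j] for the best match.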

Putting all of the components together and adding a light source was now enough to produce the following result:


Notice how the white sphere seems to look better than the others – this is due to there being 16 shades of gray in the standard VGA palette.

4. Final result

The beauty of the work done up to this point is that it made everything “just work” as you’d expect (provided, of course, that the reflection and refraction functions were implemented correctly). With just a little extra effort – creating a new scene, adding planes to it and setting some reflection and refraction attributes – I was able to come up with the following image:


Final rendered image with reflective and refractive surfaces.

It doesn’t stop there, though! The full source code on GitHub also comes with a simple implementation of dithering to further improve image quality. To read more about VGA programming, check out David Brackeen’s website, which was a major help in writing my code!
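As a taste of what dithering involves – a hedged sketch of my own using a 4x4 Bayer matrix, not the implementation from the repository – a small, position-dependent offset is added to the color before quantization, so neighboring pixels snap to different palette entries and the banding between shades breaks up:

// ordered (Bayer) dithering sketch - not the repository's implementation
static const int bayer4x4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 }
};

int ditherAndQuantize(const double *rgb, int x, int y)
{
    // scale the 0..15 Bayer value to a small offset centered around zero
    double offset = (bayer4x4[y & 3][x & 3] / 16.0 - 0.5) * 16.0;
    double dithered[3];
    int i;

    for (i = 0; i < 3; i++)
    {
        dithered[i] = rgb[i] + offset;
        if (dithered[i] < 0.0)   dithered[i] = 0.0;
        if (dithered[i] > 255.0) dithered[i] = 255.0;
    }

    // nearest-palette lookup from the earlier listing
    return findColor(dithered);
}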


Dithered grayscale image. The wider “spectrum” of shadow values is a result of the 16 shades of gray stored in the standard VGA palette combined with dithering.


So you want to be a programmer?

I recently spoke to a couple of high-school students who were eager to learn how to become programmers. They wanted to jump into it without any idea of where or how to start, which made me realize how difficult it can be for people without any prior experience. This inspired me to write this post and share my thoughts on what any programming initiate should know and realize. This is not a programming tutorial by any means, just a set of guidelines I would follow myself, knowing what I know today.

1. Programming knows no age, gender nor sexual orientation

Contrary to what others may tell you, you’re never too old or too young to learn new things. It’s all about you and your dedication. You can become a proficient programmer at the age of 50 if you really want to, and there’s no lower age limit – the sooner you start, the better! People may tell you that you can’t be a programmer because you’re a woman. Yes, the industry is dominated by men, but that should in no way hinder your goals. Remember – programming is mental work, so being able to do it is, quite literally, all in your head.

2. Programming requires more practice than talent

This may be an unpopular idea but I strongly believe that programming requires little to no talent. Like every skill we are not born with, it requires a lot of practice to master. No talent. Just practice. Some people may have better predispositions for thinking with numbers, math or numerical analysis in general, but that does not make them good programmers out of the box. Coding is a craft and as such needs to be polished constantly. Work on your skills. Read and write code. With enough practice, you will flourish as a programmer.

3. Learn coding by doing something

When I first got into programming, the most difficult part was coming up with ideas on how to improve my skills. I was a teenager in the mid-90s with no Internet access and, as such, no ideas to be inspired by. Fast-forward 20 years and I’m now surrounded by ideas everywhere. Once you get past the basic “Hello world” examples, the best thing you can do is start working on an idea for a program of your own. Knowledge is assimilated most easily when it’s put to a real test and has to solve a real problem. Want to make an application that draws an image on the screen? Look up loading binary data and basic graphics. Feel like implementing a sorting algorithm? Get a good book on the topic and translate the idea into code. It’ll be a rough ride at first as you discover new features of the programming language of your choice, but with experience things will start to get smoother. The best part about writing your own code is that it’s a challenge every time you try something new. Enjoy that feeling!

4. Talk to other programmers. Work with others. Get on Twitter!

What better way to further your knowledge than to talk to other people in the field? This is by far the fastest way to learn new tricks, and experienced developers can also show you which traps to avoid. If you don’t know any programmers in your area, join Twitter and follow people. It’s an excellent tool for this and has helped me many times – many programmers will also be happy to meet you if you’re in their area, so that’s an extra plus. Join programming forums, read newsletters, read everything related to programming that you find interesting! If you want to find collaborators, be sure to check out GitHub and Bitbucket – a lot of people will be happy to work on something together! A word of warning though – don’t get depressed if someone’s knowledge overwhelms you. Remember that there is always someone better at programming than you and always will be; once you accept that fact, it will become easier for you to learn and communicate with others. This may also motivate you to become even better at what you do!

5. Prepare to learn new things and forget everything you already know

Computer science is an extremely dynamic field. Be prepared to shuffle what you already know and learn new things. Over the course of your career you will see technologies coming and going – same with programming languages. Think of yourself as a craftsman who has to change his/her tools once in a while. Your basic skills remain but it’ll take some time to learn how to use your new hammer! Of course, this applies only to those programmers who don’t want to stick to a particular technology for the entirety of their careers, something that rarely happens nowadays.

6. Learn what’s under the hood

If you truly want to be a master programmer, learn how the computer works. What happens when an instruction is executed? How is your code translated for the CPU? How does memory behave when you run the program? What about the Operating System? All of this and much more constitute the driving mechanism for your program – once you understand how it works, you’ll be able to write more efficient applications.

I like to think that programmers are constantly “on the road to knowledge” – there’s never a point in your career when you can say that you know enough (let alone everything!). This is probably what keeps me doing what I do. If you’re still willing to follow that path, be ready for some rough moments as you learn. In the end, it will be worth your while. 🙂


Rendering in VR using OpenGL instancing

TL;DR; download code sample from GitHub!

In all of my VR applications thus far, I’ve been using separate eye buffers for rendering, seeing it as a convenience. Recently, however, I started wondering how I could improve drawing times and reduce unnecessary overhead, so my attention turned toward a single render target solution and how it could take advantage of instanced rendering. Here’s a short summary of my results.

To briefly recap, there are two distinct ways to render to the HMD (in this particular case I’ll be focusing on the Oculus Rift):

1. Create two render targets (one per eye) and draw the scene to each one of them accordingly.
2. Create a single, large render target and use proper viewports to draw each eye to it.

The details of how both of these can be achieved are not specified, so it’s up to the programmer to figure out how to produce both images. Usually, the first idea that comes to mind is to simply recalculate the MVP matrix for each eye every frame and render the scene twice, which may look like this in C++ pseudocode:

for (int eyeIndex = 0; eyeIndex < ovrEye_Count; eyeIndex++)
{
    // recalculate ModelViewProjection matrix for current eye
    OVR::Matrix4f MVPMatrix = g_oculusVR.OnEyeRender(eyeIndex); 

    // setup scene's shaders and positions using MVPMatrix
    // setup of HMD viewports and buffers goes here
    (...)

    // final image ends up in correct viewport/render buffer of the HMD
    glDrawArrays(GL_TRIANGLE_STRIP, 0, num_verts);
}

This works fine, but what we’re essentially doing is doubling the number of draw calls by rendering everything twice. With modern GPUs this may not necessarily be that big of a deal, however the CPU <-> GPU communication quickly becomes the bottleneck as scene complexity goes up. During my tests, trying to render a scene with 2500 quads and no culling resulted in a drastic framerate drop and an increase in GPU rendering time. With Oculus SDK 1.3 this can, in fact, go unnoticed thanks to asynchronous timewarp, but we don’t want to deal with performance losses! This is where instancing can play a big role in gaining a significant boost.

In a nutshell, with instancing we can render multiple instances (hence the name) of the same geometry with only a single draw call. What this means is we can draw the entire scene multiple times as if we were doing it only once (not entirely true, but for our purposes we can assume it works that way). So the number of draw calls is cut in half in our case and we end up with code that may look like this:

// MVP matrices for left and right eye
GLfloat mvps[32];

// fetch location of MVP UBO in shader
GLuint mvpBinding = 0;
GLint blockIdx = glGetUniformBlockIndex(shader_id, "EyeMVPs");
glUniformBlockBinding(shader_id, blockIdx, mvpBinding);

// fetch MVP matrices for both eyes
for (int i = 0; i < 2; i++)
{
    OVR::Matrix4f MVPMatrix = g_oculusVR.OnEyeRender(i);
    memcpy(&mvps[i * 16], &MVPMatrix.Transposed().M[0][0], sizeof(GLfloat) * 16);
}

// update MVP UBO with new eye matrices
glBindBuffer(GL_UNIFORM_BUFFER, mvpUBO);
glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(GLfloat) * 16, mvps, GL_STREAM_DRAW);
glBindBufferRange(GL_UNIFORM_BUFFER, mvpBinding, mvpUBO, 0, 2 * sizeof(GLfloat) * 16);

// at this point we have both viewports calculated by the SDK, fetch them
ovrRecti viewPortL = g_oculusVR.GetEyeViewport(0);
ovrRecti viewPortR = g_oculusVR.GetEyeViewport(1);

// create viewport array for geometry shader
GLfloat viewports[] = { (GLfloat)viewPortL.Pos.x, (GLfloat)viewPortL.Pos.y, 
                        (GLfloat)viewPortL.Size.w, (GLfloat)viewPortL.Size.h,
                        (GLfloat)viewPortR.Pos.x, (GLfloat)viewPortR.Pos.y, 
                        (GLfloat)viewPortR.Size.w, (GLfloat)viewPortR.Size.h };
glViewportArrayv(0, 2, viewports);

// setup the scene and perform instanced render - half the drawcalls!
(...)
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, num_verts, 2);

There’s a bit more going on now, so let’s go through the pseudocode step by step:

// MVP matrices for left and right eye
GLfloat mvps[32];

// fetch location of MVP UBO in shader
GLuint mvpBinding = 0;
GLint blockIdx = glGetUniformBlockIndex(shader_id, "EyeMVPs");
glUniformBlockBinding(shader_id, blockIdx, mvpBinding);

// fetch MVP matrices for both eyes
for (int i = 0; i < 2; i++)
{
    OVR::Matrix4f MVPMatrix = g_oculusVR.OnEyeRender(i);
    memcpy(&mvps[i * 16], &MVPMatrix.Transposed().M[0][0], sizeof(GLfloat) * 16);
}

At the start of each frame, we recalculate the MVP matrix for each eye just as before. This time, however, it is the only thing we do in the loop. The results are stored in a GLfloat array, since this will be the shader input when drawing both eyes (a 4×4 matrix is 16 floats, so we need a 32-element array to store both eyes). The matrices will be stored in a uniform buffer object, so we need to fetch the location of the uniform block before we can perform the update.

// update MVP UBO with new eye matrices
glBindBuffer(GL_UNIFORM_BUFFER, mvpUBO);
glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(GLfloat) * 16, mvps, GL_STREAM_DRAW);
glBindBufferRange(GL_UNIFORM_BUFFER, mvpBinding, mvpUBO, 0, 2 * sizeof(GLfloat) * 16);

// at this point we have both viewports calculated by the SDK, fetch them
ovrRecti viewPortL = g_oculusVR.GetEyeViewport(0);
ovrRecti viewPortR = g_oculusVR.GetEyeViewport(1);

// create viewport array for geometry shader
GLfloat viewports[] = { (GLfloat)viewPortL.Pos.x, (GLfloat)viewPortL.Pos.y, 
                        (GLfloat)viewPortL.Size.w, (GLfloat)viewPortL.Size.h,
                        (GLfloat)viewPortR.Pos.x, (GLfloat)viewPortR.Pos.y, 
                        (GLfloat)viewPortR.Size.w, (GLfloat)viewPortR.Size.h };
glViewportArrayv(0, 2, viewports);

// setup the scene and perform instanced render - half the drawcalls!
(...)
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, num_verts, 2);

First, we update the UBO storing both MVPs with the newly calculated values, after which we get to the rendering part. Contrary to DirectX, there’s no trivial way to draw to multiple viewports with a single draw call in OpenGL, so we’re taking advantage of a (relatively) new feature: viewport arrays. This, combined with the gl_ViewportIndex output in a geometry shader, allows us to tell glDrawArraysInstanced() which rendered instance goes into which eye. The final result and performance graphs can be seen in the following screenshot:


Test application rendering 2500 unculled, textured quads. Left: rendering scene twice, once per viewport. Right: using instancing.
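The shaders themselves aren’t listed above, so purely as an illustration – a hedged sketch of my own, not the repository’s code, assuming the EyeMVPs uniform block from the listing, a position-only vertex layout and GL 4.1+ (or ARB_viewport_array) for gl_ViewportIndex – the vertex/geometry pair could look roughly like this:

// hypothetical shader pair for instanced stereo rendering (sketch only):
// the vertex shader picks the MVP based on gl_InstanceID, the geometry shader
// routes the primitive to the matching viewport via gl_ViewportIndex
const char *vsSource = R"(
    #version 410 core
    layout(location = 0) in vec3 position;

    layout(std140) uniform EyeMVPs { mat4 mvp[2]; };

    flat out int eyeIndex;

    void main()
    {
        eyeIndex    = gl_InstanceID;                    // 0 = left eye, 1 = right eye
        gl_Position = mvp[gl_InstanceID] * vec4(position, 1.0);
    }
)";

const char *gsSource = R"(
    #version 410 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    flat in int eyeIndex[];

    void main()
    {
        gl_ViewportIndex = eyeIndex[0];                 // select left/right viewport
        for (int i = 0; i < 3; i++)
        {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
)";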

Full source code of the test application can be downloaded from GitHub.


Why I think Oculus wins over Vive… for now.

Disclaimer: The following is based on experiences with early releases of hardware and software for both Oculus and Vive, so your mileage may vary!

I’ve been a VR enthusiast for quite a while now, having started my adventure with the DK2 and followed the technology’s development since then. In 2016 VR has finally arrived and I believe it’s not going anywhere. Having received my HTC Vive just recently, I finally got the chance to compare it to the Oculus in terms of quality and overall feel and… it’s been a mild disappointment from a consumer standpoint.
As a developer, I’m used to dealing with buggy software, unpolished hardware and bulky equipment that wouldn’t appeal to the general public. Putting myself in “your everyday buyer”‘s shoes, however, is a different story. Here are some of my thoughts on the consumer Vive and why, in my opinion, it’s going to be slightly overshadowed by the Oculus CV1.

1. Setup

When I buy new equipment I expect it to work out of the box with minimal user intervention. Not counting the download times, the Oculus setup process is pleasant and painless; once it’s done you’re ready to use the software and roam free in VR. Enter: the HTC Vive setup process.
I consider myself fairly advanced with computers, having worked with them most of my life. And yet – it took me almost 2 hours to get my $799 headset to work. Once all the necessary software and drivers were installed, the mandatory SteamVR application wouldn’t start, each time crashing with cryptic error messages. Once I finally got it working, to my horror I realized that only one Vive controller was recognized. After browsing through quite a few similar forum posts I finally managed to discover a key combination that would pair the controllers with the headset, something that wasn’t even mentioned as a required step during setup. Two hours later (and after a mandatory firmware update that failed the first few times) I was the proud owner of a working, room-scale VR headset.

2. Oculus Home vs Vive Home and SteamVR

Oculus Home delivers a good first-time experience and the navigation is intuitive and simple. Vive Home feels like an attempt to copy the Oculus solution and admittedly it does so quite well… if not for the fact that it still requires SteamVR. And boy, is that thing a wild ride.
The first odd thing is that SteamVR sometimes has the tendency to just shut down altogether shortly after starting (taking Steam down with it). Luckily, this doesn’t seem to happen too frequently, but if mandatory software goes down with neither a warning nor an error message, that’s either a critical bug or poor design. For some reason, the necessary services (such as the VR dashboard) sometimes don’t start along with SteamVR either, which in turn leads to a crippled experience when using the hardware (no camera preview or a non-working system key). Really, HTC, where’s the QA team when you need it?

3. Controllers and immersion

Until the Oculus Touch arrives, Vive takes the cake. While potentially not as ergonomic as the Touch, the ability to interact with the environment using your hands is invaluable and highly immersive. In terms of visual immersion, the Vive feels slightly better to me, but I’m biased by the fact that I find walking around with the headset on comfortable (and I’ve developed a skill for avoiding stepping on or tripping over the bulky cable!). Screen quality differences are negligible and hardly noticeable for an average user, though coming from the DK2 I can’t get used to the Fresnel lenses’ glare in high-contrast scenes.

4. Software stability

Stable software is key to happy consumers. With that said, I have yet to find a game that would crash or negatively impact the Rift. Sadly, it’s a lot easier to do on the Vive, and some of the applications available for it behave ridiculously. Valve’s “The Lab” is the prime example: I can’t run the main hub without either getting a SteamVR shutdown or an “out of memory” error on an 8GB Win7 PC with a GTX 970 graphics card. Error logs turn up empty and there’s really no pattern to the crash. This is hardly acceptable. To this moment I’m not sure if it’s related to the software itself or whether it’s a driver/SteamVR bug that pops up every now and then. At the time of writing, I’m not the only one suffering from these problems, so this is likely a widespread issue.

5. Conclusion

I still think that both the Oculus and the Vive have their place in the VR market. While many people consider them to be competing hardware, I personally think they complement each other. The Oculus shines for stationary/sitting experiences, while the Vive is clearly aimed at room-scale VR. However, at the time of writing I find the Oculus to be delivering a more polished and stable environment for roughly the same price (counting the upcoming Oculus Touch). If you’re a tech junkie you will enjoy both. If you want to dive into “hardcore” walking in VR, then the HTC Vive is your choice, provided that you have the patience and skills to get it working in the first place. However, if you’re not very computer-literate, are looking for a good place to start your VR adventure and want to spend your money carefully, you should probably go with the less frustrating Oculus Rift.


Dealing with LinkedIn tech recruiters – 3 simple steps

It’s that time of year again – recruiters on LinkedIn are starting to send out messages and job ads faster than anyone can read them. This is something I think every tech person experiences after spending a substantial amount of time registered there. What surprises me is that a vast majority of people I know despise getting this kind of mail, which, at first glance, seems to contradict the purpose of being on a professional social network. While different people may have different reasons for being registered on LinkedIn, I seem to have the rather unpopular approach of treating it as an opportunity to possibly land my next job – something that has happened to me before, twice. With that being said, I accept all contact invitations unless the account is clearly recognizable as spam or completely unrelated to my line of work (and that doesn’t happen very often).

If you’re anything like me, you most likely have trouble replying to all non-urgent email right away, and LinkedIn recruiter messages fall squarely into that category. This is especially true when I’m comfortable with my work situation and my interest in new job opportunities is low. Despite that, however, I try to follow these 3 simple steps:

1. Always write back, even if you’re not interested in the offer.

Unless you’re a rockstar who may never need to look for work again, it’s always polite to respond and say that you’re not interested. Furthermore, invite the recruiter to keep you updated on the job market he/she is working with (unless, of course, it’s something you’re completely not into). Even if your job situation is stable at the moment, you never know what will happen in a few years’ time, and help may come from the least expected places.

2. Schedule one day a week/month to go over your professional social network messages.

Spend some time and go over all unread messages on specifically scheduled days. This will help you keep your inbox clean and ease the frustration of accumulating unread email (yes, it’s a real thing!).

3. Be professional. Be polite.

If someone keeps spamming you with unsolicited mail and things that just won’t contribute to your career advancement – remove the connection. Never send outraged messages, don’t tweet about it, just do it quietly. Better yet – politely let the other side know that you don’t wish to receive specific types of messages; this tactic works more often than you might think. Badmouthing other people, even if you consider them to be “annoying recruiters”, may leave a mark on your professional image. Remember: the Internet is smaller than you think.

Most importantly, remember that on the other end there’s a living human being who is only trying to do their job. You may be one of many people he/she wrote to, but even so, being civil about it is something everyone should keep in mind.


My experiences going Rust from C++

I’ve been experimenting with Rust for over 6 months now. Most of that time I’ve spent playing around with a C64 emulator I wrote as a first project, and initially I thought about creating a series on that topic. However, since there’s so much reading material on the Internet about it already, I figured maybe it would be a good idea to write an intro to Rust for C/C++ programmers. But then I found this article, so I decided to take a completely different route.

In this post I wanted to outline the problems/quirks I ran into when transitioning from C++ to Rust. This by no means indicates that the language is poorly constructed – it’s just a matter of putting yourself in a completely different mindset, since Rust is really more than it seems at first glance and it has traps of its own if you try to code in it “C-style”. At the time of writing this, the latest stable version of the compiler is 1.7.0, so some things might get outdated with time. If you’ve been programming for a while and are considering trying out Rust, here are some things worth being wary of as you start:

1. Data ownership model

The first thing I had to learn is that variables in Rust are not really variables at all but rather bindings to specific data. As such, the language introduces the concept of ownership, in the sense that data can be bound to one and only one “variable” at a time. There are good examples in the link above of how that works, so I won’t go into details here. The reason this caused me so many problems is that referring to other struct members and recursive function calls have to be thought through very carefully when writing a program in Rust. Gone is the idea of throwing pointers everywhere and reusing them whenever you see fit – once data in Rust is borrowed, you have to finish whatever you’re doing with it before you can use it somewhere else in the code. It’s an interesting concept, one that surely provides some extra safety measures which other languages lack; nevertheless, it takes a while to get accustomed to.
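
To make the move semantics concrete, here’s a minimal, standalone sketch (not taken from the emulator itself):

// ownership in a nutshell: data has exactly one owner at a time
fn main() {
    let first = String::from("C64");   // `first` owns the String data
    let second = first;                // ownership moves to `second`...
    // println!("{}", first);         // ...so using `first` here would not compile

    let borrowed = &second;            // borrowing grants temporary access without moving
    println!("{}", borrowed);          // prints "C64"; `second` still owns the data
}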

2. No inheritance

The lack of a basic inheritance model in Rust forced me to duplicate some parts of the code. To give an example, the C64 has two timer chips which are essentially the same thing – for emulation purposes they differ in only one function. The natural instinct here is to create a single base type and just override that particular function, but Rust has no mechanism for it. The closest thing that met my needs was a trait, but what I really needed was “a trait with properties”. If your design relies heavily on OOP you should either rethink it or choose a different language.
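
For illustration, here’s a hypothetical sketch of the trait route (the names and behavior are made up, not the emulator’s actual code):

// two timer chips sharing a trait; only one function differs between them
trait TimerChip {
    // the single function that differs per chip
    fn on_underflow(&mut self);

    // common behavior can live in default methods...
    fn describe(&self) -> &'static str {
        "CIA timer chip"
    }
    // ...but a trait cannot declare shared fields, so each struct
    // still has to carry its own copy of the data ("a trait with properties")
}

struct Cia1 { counter: u16 }
struct Cia2 { counter: u16 }

impl TimerChip for Cia1 {
    fn on_underflow(&mut self) { self.counter = 0xFFFF; }  // chip 1 simply reloads
}

impl TimerChip for Cia2 {
    fn on_underflow(&mut self) { self.counter = 0xFFFF; /* and, say, raises an interrupt */ }
}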

3. No cyclic references

Having started to code my emulator “C-style”, I decided to go for a clear structure that would define the entire computer:

struct C64
{
   sid_chip: SID,  // audio   
   vic_chip: VIC,  // graphics
   cpu_chip: CPU,  // processor
   cia1_chip: CIA, // timer 1
   (...)
}

Each member variable is a simple struct type. The design was clean and satisfying, so I happily started hacking at the code implementing each chip in turn.

Halfway in my work I realized I made a terrible mistake.

It turned out that all the components of the C64 struct would have to communicate with each other directly in certain situations, so I needed some sort of a “bus” component. I really wanted to avoid creating an artificial structure for that purpose, which would eventually introduce annoyances of its own. Global variables spread over all the modules were not an option I wanted to use either.

It was a problem I couldn’t initially solve, for a couple of reasons: Rust doesn’t provide any mechanism for struct field objects to communicate with their parent, and you can’t just pass a reference to the parent, since that breaks Rust’s ownership rules. Eventually I found a solution by wrapping each chip structure in a RefCell nested inside an Rc. With this I was able to use cloned instances as separate references, which I would then pass in during each chip’s construction. In simplest terms, this behaves like a reference-counted smart pointer: even though a clone of the first instance is being referenced, it still deals with the same data as the original. Once all instances are destroyed (or dropped, in Rust terminology) the memory is freed completely, so it’s safe from memory leaks.
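
In isolation, the idea looks something like this (a minimal sketch with a made-up struct, not the emulator code itself):

use std::cell::RefCell;
use std::rc::Rc;

struct Cpu { cycles: u64 }

fn main() {
    let cpu = Rc::new(RefCell::new(Cpu { cycles: 0 }));
    let cpu_for_vic = cpu.clone();          // another handle to the *same* Cpu

    cpu_for_vic.borrow_mut().cycles += 1;   // mutate through the clone...
    assert_eq!(cpu.borrow().cycles, 1);     // ...and the original handle sees the change

    // the Cpu is freed only once both `cpu` and `cpu_for_vic` are dropped
}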

4. No explicit NULL value

Being a language focused on safety, Rust disallows creating an object without initializing each of its member variables. This means no NULL pointers which you can set at a later time, so I was stuck with a new problem after introducing the RefCells:

struct VIC
{
   cpu_ref: Rc<RefCell<CPU>>,  // reference to the CPU object   
   (...)
}

impl VIC
{
    // construction of VIC
    pub fn new_shared() -> Rc<RefCell<VIC>> {
            Rc::new(RefCell::new(VIC {
                cpu_ref: CPU::new_shared(), // creating a shared instance of CPU - because we have to
                (...)
                }))
    }
}


struct CPU
{
   vic_ref: Rc<RefCell<VIC>>,  // reference to the VIC chip object   
   (...)
}

impl CPU
{
    // construction of CPU
    pub fn new_shared() -> Rc<RefCell<CPU>> {
            Rc::new(RefCell::new(CPU {
                vic_ref: VIC::new_shared(),  // this causes a problem!
                (...)
                }))
    }
}

What’s happening above is that once the CPU is constructed, it will force the creation of a VIC, which will in turn create another CPU and so on, resulting in infinite recursion. This is where Option (from std::option) comes into play, being the closest thing to a NULL value in Rust:

struct VIC
{
   cpu_ref: Option<Rc<RefCell<CPU>>>,  // now it's optional
   (...)
}

impl VIC
{
    // construction of VIC
    pub fn new_shared() -> Rc<RefCell<VIC>> {
            Rc::new(RefCell::new(VIC {
                cpu_ref: None,  // no infinite recurrence - will set the reference later on
                (...)
                }))
    }
}

My only gripe with this approach was that I had to specifically create a set_references() function for each type, and since each struct held different references it couldn’t be neatly solved with a more generic trait.
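
For completeness, this is roughly what such a setter looks like (a sketch reusing the VIC/CPU types from above; the real signatures differ from chip to chip):

impl VIC
{
    // wire up the CPU reference once both objects exist
    pub fn set_references(&mut self, cpu: Rc<RefCell<CPU>>) {
        self.cpu_ref = Some(cpu);
    }
}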

5. Rust macros are not what you think at first!

The natural way of thinking about macros when coming from a C background is “replace this expression with some elaborate piece of code”. Not surprisingly, Rust takes a completely different approach, deeming (quite rightfully) plain-text substitution unsafe and error-prone. After switching to shared RefCell instances I faced the problem of obfuscated syntax when trying to access the actual underlying data:

// attempting to get inside the Rc<RefCell<CPU>> from within VIC struct.
// imagine typing that every single time when you need it!
self.cpu_ref.as_ref().unwrap().borrow_mut().set_vic_irq(true);

Unlike in C, a macro in Rust is treated as a syntactic structure and as such has limitations of its own. The macro body can’t reach into an object’s fields on its own, nor can it use the self keyword to simplify your code further; the whole expression has to be passed in:

macro_rules! as_ref {
    ($x:expr) => ($x.as_ref().unwrap().borrow_mut())
}

(...)

// same code using a Rust macro - as short as it could get
as_ref!(self.cpu_ref).set_vic_irq(true);

(...)

While I understand the reasoning behind making macros the way they are, I still find it a bit disappointing not to be able to use the less safe C-style variant.

6. Integer wrapping doesn’t come for free

This is a language trait I’m a bit on the fence about. In C, once you go beyond a data type’s range the value simply wraps around (at least for unsigned types) – behavior that 8-bit computers rely on extensively as well. In Rust, at the time of writing, integer overflow is treated as a program error and will cause a panic! in debug builds. Wrapping is still possible, but requires additional boilerplate:

(...)

self.some_8bit_variable = self.some_8bit_variable.wrapping_add(1); // safely wraps 255 -> 0 on overflow (wrapping_add returns the new value)

(...)

While it’s fine that Rust explicitly tells us where data wrapping is meant to happen, I’d still like to be able to turn that behavior off manually for the sake of more compact code.
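
There is also the standard library’s std::num::Wrapping newtype, which makes the ordinary operators wrap – a small sketch, separate from the emulator code:

use std::num::Wrapping;

fn main() {
    // a value that is *always* meant to wrap, with no per-call boilerplate
    let mut raster_line = Wrapping(255u8);
    raster_line = raster_line + Wrapping(1);   // wraps to 0 without panicking, even in debug builds
    assert_eq!(raster_line.0, 0);
}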

7. Type explicitness everywhere

Depending on your point of view, one of the biggest flaws/merits of C and C++ is implicit type conversion when assigning variables to each other, so you can “safely” assign a char to an int and pretty much expect the code to work, as long as you know what you’re doing. Also, let’s disregard for a second that we’re pragmatic programmers who adhere to compiler warnings – my experience shows that when it comes to data precision they’re mostly ignored (or completely turned off!).

So the thing is, Rust disallows assigning different types of variables to each other unless you explicitly cast one type to the other. The syntax of such a cast, however, I found slightly cumbersome to use, especially when I had to perform several casts during one operation (adding bytes, casting them to words to perform a shift, then going back to bytes again, etc.). This is something one has to get used to, but in my early code it was the major cause of bugs:

// EXAMPLE 1
// relative memory addressing mode in C64 code excerpt: fetching the operand
// bugged code: wrong relative offset calculated
fn get_operand_rel() -> u8
{
    let offset = mem.next_byte() as i8;  // memory offset should be treated as a signed byte
    let addr = cpu.prog_counter + offset as u16; // BUG! the signed offset is cast straight to u16
    mem.read_byte(addr)
}

// correct code: (took quite a while to track the bug down!)
fn get_operand_rel() -> u8
{
    let offset = mem.next_byte() as i8;
    let addr: i16 = cpu.prog_counter as i16 + offset as i16; // Correct! Casting both to i16
    mem.read_byte(addr as u16) // address in mem is stored as u16, so it has to be cast *again*
}

// EXAMPLE 2
fn foo()
{
    let var = mem.next_byte() as u8;
    let var_word: u16 = (var as u16) << 8; // would probably look neater as (u16)var << 8
    (...)
    
    // this is legal Rust code!
    let var2 = 10 as usize as u16 as u8 as u32 as f64;
}

I admit – a lot of this is me being subjective about my own preferences, but the point is that if you’re used to extensive casting you may run into trouble understanding your own code. On the other hand, this may encourage programmers to break more complicated operations into steps for the sake of clarity. Seeing how mixed current Rust codebases are, I’m not so sure that will happen soon, though.
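
To illustrate what “breaking it into steps” might look like (a sketch assuming the same mem.next_byte() helper as in the examples above):

// the same 8-bit/16-bit juggling, split into named steps instead of one long cast chain
let lo = mem.next_byte() as u16;   // low byte of the address
let hi = mem.next_byte() as u16;   // high byte of the address
let addr = (hi << 8) | lo;         // assemble the 16-bit address explicitly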

8. Forget OOP – go functional

Dismissing OOP as the silver bullet of programming is not uncommon today, as people realize how many problems that model creates once you dig deeper. Cache misses, convoluted relationships between different classes and sometimes over-the-top design patterns would be at the top of the list. As I got more experienced with Rust it became clear that a more functional style is where the language shines. If you decide to write an application, you may have to forget quite a few things you know from C++. Use functions. Use modules. Use structs, but don’t rely heavily on OOP patterns to handle communication between objects. In the end it will only make you happy, as the code becomes a lot more readable and easier to navigate – and this comes from a person who wrote his first Rust application entirely in Emacs!

Try Rust. You will enjoy it! 🙂
