Rendering in VR using OpenGL instancing

TL;DR: download the code sample from GitHub!

In all of my VR applications thus far, I’ve been using separate eye buffers for rendering, seeing it as a convenience. Recently, however, I started wondering how I could improve drawing times and reduce unnecessary overhead, so my attention turned toward the single render target solution and how it could take advantage of instanced rendering. Here’s a short summary of my results.

To briefly recap, there are two distinct ways to render to the HMD (in this particular case I’ll be focusing on the Oculus Rift):

1. Create two render targets (one per eye) and draw the scene to each one of them accordingly.
2. Create a single, large render target and use proper viewports to draw each eye to it.

The details of how both of these can be achieved are not specified, so it’s up to the programmer to figure out how to produce both images. Usually, the first idea that comes to mind is to simply recalculate the MVP matrix for each eye every frame and render the scene twice, which may look like this in C++ pseudocode:

for (int eyeIndex = 0; eyeIndex < ovrEye_Count; eyeIndex++)
{
    // recalculate ModelViewProjection matrix for current eye
    OVR::Matrix4f MVPMatrix = g_oculusVR.OnEyeRender(eyeIndex); 

    // setup scene's shaders and positions using MVPMatrix
    // setup of HMD viewports and buffers goes here
    (...)

    // final image ends up in correct viewport/render buffer of the HMD
    glDrawArrays(GL_TRIANGLE_STRIP, 0, num_verts);
}

This works fine, but what we’re essentially doing is doubling the number of draw calls by rendering everything twice. With modern GPUs this may not necessarily be a big deal; however, the CPU <-> GPU communication quickly becomes the bottleneck as scene complexity goes up. During my tests, trying to render a scene with 2500 quads and no culling resulted in a drastic framerate drop and an increase in GPU rendering time. With Oculus SDK 1.3 this can, in fact, go unnoticed thanks to asynchronous timewarp, but we don’t want to put up with performance losses! This is where instancing can play a big role in gaining a significant boost.

In a nutshell, with instancing we can render multiple instances (hence the name) of the same geometry with only a single draw call. What this means is that we can draw the entire scene multiple times as if we were doing it only once (not entirely true, but for our purposes we can assume it works that way). The number of draw calls is thus cut in half in our case, and we end up with code that may look like this:

// MVP matrices for left and right eye
GLfloat mvps[32];

// fetch location of MVP UBO in shader
GLuint mvpBinding = 0;
GLint blockIdx = glGetUniformBlockIndex(shader_id, "EyeMVPs");
glUniformBlockBinding(shader_id, blockIdx, mvpBinding);

// fetch MVP matrices for both eyes
for (int i = 0; i < 2; i++)
{
    OVR::Matrix4f MVPMatrix = g_oculusVR.OnEyeRender(i);
    memcpy(&mvps[i * 16], &MVPMatrix.Transposed().M[0][0], sizeof(GLfloat) * 16);
}

// update MVP UBO with new eye matrices
glBindBuffer(GL_UNIFORM_BUFFER, mvpUBO);
glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(GLfloat) * 16, mvps, GL_STREAM_DRAW);
glBindBufferRange(GL_UNIFORM_BUFFER, mvpBinding, mvpUBO, 0, 2 * sizeof(GLfloat) * 16);

// at this point we have both viewports calculated by the SDK, fetch them
ovrRecti viewPortL = g_oculusVR.GetEyeViewport(0);
ovrRecti viewPortR = g_oculusVR.GetEyeViewport(1);

// create viewport array for geometry shader
GLfloat viewports[] = { (GLfloat)viewPortL.Pos.x, (GLfloat)viewPortL.Pos.y, 
                        (GLfloat)viewPortL.Size.w, (GLfloat)viewPortL.Size.h,
                        (GLfloat)viewPortR.Pos.x, (GLfloat)viewPortR.Pos.y, 
                        (GLfloat)viewPortR.Size.w, (GLfloat)viewPortR.Size.h };
glViewportArrayv(0, 2, viewports);

// setup the scene and perform instanced render - half the drawcalls!
(...)
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, num_verts, 2);

There’s a bit more going on now, so let’s go through the pseudocode step by step:

// MVP matrices for left and right eye
GLfloat mvps[32];

// fetch location of MVP UBO in shader
GLuint mvpBinding = 0;
GLint blockIdx = glGetUniformBlockIndex(shader_id, "EyeMVPs");
glUniformBlockBinding(shader_id, blockIdx, mvpBinding);

// fetch MVP matrices for both eyes
for (int i = 0; i < 2; i++)
{
    OVR::Matrix4f MVPMatrix = g_oculusVR.OnEyeRender(i);
    memcpy(&mvps[i * 16], &MVPMatrix.Transposed().M[0][0], sizeof(GLfloat) * 16);
}

At the start of each frame, we recalculate the MVP matrix for each eye just as before. This time, however, it is the only thing we do in the loop. The results are stored in a GLfloat array, since this will be the shader input when drawing both eyes (a 4×4 matrix is 16 floats, so we need a 32-element array to store both eyes). The matrices will be stored in a uniform buffer object, so we need to fetch the location of the uniform block before we can perform the update.
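
For reference, here’s a minimal sketch of what the shader side of this might look like. The EyeMVPs block name matches the C++ snippet; the attribute and varying names and the GLSL version are my assumptions. The key part is using gl_InstanceID both to select the per-eye matrix and to pass the eye index on to the geometry shader:

// vertex shader (sketch)
#version 410 core

layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec2 inTexCoord;

// matches the "EyeMVPs" uniform block fetched in the C++ code;
// the Transposed() call on the C++ side means the data arrives in
// the column-major layout GLSL expects
layout (std140) uniform EyeMVPs
{
    mat4 mvp[2];
};

out vec2 vTexCoord;
flat out int vEyeIndex;   // 0 = left eye, 1 = right eye

void main()
{
    vTexCoord   = inTexCoord;
    vEyeIndex   = gl_InstanceID;
    gl_Position = mvp[gl_InstanceID] * vec4(inPosition, 1.0);
}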

// update MVP UBO with new eye matrices
glBindBuffer(GL_UNIFORM_BUFFER, mvpUBO);
glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(GLfloat) * 16, mvps, GL_STREAM_DRAW);
glBindBufferRange(GL_UNIFORM_BUFFER, mvpBinding, mvpUBO, 0, 2 * sizeof(GLfloat) * 16);

// at this point we have both viewports calculated by the SDK, fetch them
ovrRecti viewPortL = g_oculusVR.GetEyeViewport(0);
ovrRecti viewPortR = g_oculusVR.GetEyeViewport(1);

// create viewport array for geometry shader
GLfloat viewports[] = { (GLfloat)viewPortL.Pos.x, (GLfloat)viewPortL.Pos.y, 
                        (GLfloat)viewPortL.Size.w, (GLfloat)viewPortL.Size.h,
                        (GLfloat)viewPortR.Pos.x, (GLfloat)viewPortR.Pos.y, 
                        (GLfloat)viewPortR.Size.w, (GLfloat)viewPortR.Size.h };
glViewportArrayv(0, 2, viewports);

// setup the scene and perform instanced render - half the drawcalls!
(...)
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, num_verts, 2);

First, we update the UBO storing both MVPs with the newly calculated values, after which we get to the rendering part. Unlike DirectX, OpenGL has no trivial way to draw to multiple viewports with a single draw call, so we take advantage of a (relatively) new feature: viewport arrays. This, combined with the gl_ViewportIndex output in a geometry shader, allows us to tell glDrawArraysInstanced() which rendered instance goes into which eye.
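
The geometry shader itself isn’t shown above; a minimal sketch (using the assumed vertex shader outputs from the earlier snippet) could look roughly like this:

// geometry shader (sketch)
#version 410 core

layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

in vec2 vTexCoord[];
flat in int vEyeIndex[];

out vec2 fTexCoord;

void main()
{
    for (int i = 0; i < gl_in.length(); i++)
    {
        // every vertex of the primitive goes to the viewport of the eye
        // this instance was drawn for (instance 0 -> left, 1 -> right)
        gl_ViewportIndex = vEyeIndex[0];
        gl_Position      = gl_in[i].gl_Position;
        fTexCoord        = vTexCoord[i];
        EmitVertex();
    }
    EndPrimitive();
}

The final result and performance graphs can be seen in the following screenshot: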


Test application rendering 2500 unculled, textured quads. Left: rendering the scene twice, once per viewport. Right: using instancing.

Full source code of the test application can be downloaded from GitHub.


Why I think Oculus wins over Vive… for now.

Disclaimer: The following is based on experiences with early releases of hardware and software for both Oculus and Vive, so your mileage may vary!

I’ve been a VR enthusiast for quite a while now, having started my adventure with the DK2 and followed the technology’s development ever since. In 2016, VR has finally arrived, and I believe it’s not going anywhere. Having received my HTC Vive just recently, I finally got the chance to compare it to the Oculus in terms of quality and overall feel, and… it’s been a mild disappointment from a consumer standpoint.
As a developer, I’m used to dealing with buggy software, unpolished hardware and bulky equipment that wouldn’t appeal to the general public. Putting myself in an everyday buyer’s shoes, however, is a different story. Here are some of my thoughts on the consumer Vive and why, in my opinion, it’s going to be somewhat overshadowed by the Oculus CV1.

1. Setup

When I buy new equipment, I expect it to work out of the box with minimal user intervention. Not counting the download times, the Oculus setup process is pleasant and painless; once it’s done, you’re ready to use the software and roam free in VR. Enter: the HTC Vive setup process.
I consider myself fairly advanced with computers, having worked with them most of my life. And yet, it took me almost 2 hours to get my $799 headset to work. Once all the necessary software and drivers were installed, the mandatory SteamVR application wouldn’t start, each time crashing with cryptic error messages. Once I finally got it running, to my horror I realized that only one Vive controller was recognized. After browsing through quite a few similar forum posts, I finally managed to discover a key combination that would pair the controllers with the headset, something that had not even been mentioned as a required step during setup. Two hours later (and after a mandatory firmware update that failed the first few times), I was the proud owner of a working, room-scale VR headset.

2. Oculus Home vs Vive Home and SteamVR

Oculus Home delivers a good first-time experience, and the navigation is intuitive and simple. Vive Home feels like an attempt to copy the Oculus solution, and admittedly it does so quite well… if not for the fact that it still requires SteamVR. And boy, is that thing a wild ride.
The first odd thing is that SteamVR sometimes has the tendency to just shut down altogether (taking Steam down with it). Luckily, this doesn’t seem to happen too frequently, but when mandatory software goes down without either a warning or an error message, it points to either a critical bug or poor design. For some reason, the necessary services (such as the VR dashboard) sometimes don’t start along with SteamVR either, which in turn leads to a crippled experience when using the hardware (no camera preview or a non-working system key). Really, HTC, where’s the QA team when you need it?

3. Controllers and immersion

Until the Oculus Touch arrives, the Vive takes the cake. While potentially not as ergonomic as the Touch, the ability to interact with the environment using your hands is invaluable and highly immersive. In terms of visual immersion, the Vive feels slightly better to me, but I’m biased by the fact that I find walking around with the headset on comfortable (and I’ve developed a skill for avoiding stepping on or tripping over the bulky cable!). Screen quality differences are negligible and hardly noticeable to an average user, though coming from the DK2 I can’t get used to the Fresnel lens glare in high-contrast scenes.

4. Software stability

Stable software is key to happy consumers. That said, I have yet to find a game that crashes or otherwise negatively impacts the Rift. Sadly, it’s a lot easier to do so with the Vive, and some of the applications available for it behave ridiculously. Valve’s “The Lab” is the prime example: I can’t run the main hub without getting either a SteamVR shutdown or an “out of memory” error on an 8GB Win7 PC with a GTX 970 graphics card. Error logs turn up empty and there’s really no pattern to the crashes. This is hardly acceptable. To this day I’m not sure whether it’s the software itself or a driver/SteamVR bug that pops up every now and then. At the time of writing, I’m not the only one suffering from these problems, so this is likely a widespread issue.

5. Conclusion

I still think that both the Oculus and the Vive have their place in the VR market. While many people consider them competing hardware, I personally think they complement each other. The Oculus shines for stationary/seated experiences, while the Vive is clearly aimed at room-scale VR. However, at the time of writing I find the Oculus to be delivering a more polished and stable environment for roughly the same price (counting the upcoming Oculus Touch). If you’re a tech junkie, you will enjoy both. If you want to dive into “hardcore” walking-around VR, then the HTC Vive is your choice, provided that you have the patience and skills to get it working in the first place. However, if you’re not very computer literate, are looking for a good place to start your VR adventure, and want to spend your money carefully, you should probably go with the less frustrating Oculus Rift.


Dealing with LinkedIn tech recruiters – 3 simple steps

It’s that time of year again – recruiters on LinkedIn are sending out messages and job ads faster than anyone can read them. This is something I think every tech person experiences after spending a substantial amount of time registered there. What surprises me is that the vast majority of people I know despise getting this kind of mail, which, at first glance, seems to contradict the purpose of being on a professional social network. While different people may have different reasons for being registered on LinkedIn, I seem to have the rather unpopular approach of treating it as an opportunity to possibly land my next job – something that has happened to me twice before. That being said, I accept all contact invitations unless the account is clearly spam or completely unrelated to my line of work (and that doesn’t happen very often).

If you’re anything like me, you most likely have trouble replying to non-urgent email right away, and LinkedIn recruiter messages fall squarely into that category. This is especially true when I’m comfortable with my work situation and my interest in new job opportunities is low. Despite that, I try to follow these 3 simple steps:

1. Always write back, even if you’re not interested in the offer.

Unless you’re a rockstar who may never need to look for work again, it’s always polite to respond and say that you’re not interested. Furthermore, invite the recruiter to keep you updated on the job market he/she works in (unless, of course, it’s something you’re completely not into). Even if your job situation is stable at the moment, you never know what will happen in a few years’ time, and help may come from the least expected places.

2. Schedule one day a week/month to go over your professional social network messages.

Spend some time going over all unread messages on specifically scheduled days. This will help you keep your inbox clean and ease the frustration of accumulating unread email (yes, it’s a real thing!).

3. Be professional. Be polite.

If someone keeps spamming you with unsolicited mail that just won’t contribute to your career advancement – remove the connection. Never send outraged messages, don’t tweet about it, just do it quietly. Better yet – politely let the other side know that you don’t wish to receive specific types of messages; this tactic works more often than you may think. Badmouthing other people, even if you consider them “annoying recruiters”, may leave a mark on your professional image. Remember: the Internet is smaller than you think.

Most importantly, remember that on the other end there’s a living human being who is only trying to do their job. You may be one of many people he/she wrote to, but even so, being civil about it is something everyone should remember.
