The Programming Process in VR

While developing Snowday, I’ve had the chance to reflect on the programming process in relation to VR. Most people think of VR as a way to experience an environment, but I keep wrestling with the problem of using it as a tool. How do you turn VR into a platform that can sustain an ecosystem as diverse as the one built on the monitor/keyboard/mouse combo?

To that end, I’ve come to the conclusion that the only way to do this is to build an operating system from the ground up that can be iterated on entirely inside VR, without ever touching another peripheral. Here are a few thoughts on UI, in no particular order.

Gestural/Pictographic Programming

I took a few years of Chinese in college, and the idea of combining pictographic symbols to create abstract meaning stood out to me as a key concept. Around this time, I also started using my iPhone to write Chinese characters: you draw the character on screen with a finger, and the iPhone offers a list of similar-looking candidates to choose from.

A VR programming environment could combine these two concepts to great effect: users would draw within a nearby volume with their dominant hand, and their other hand would select the intended gesture from the recognized candidates.

A potential UI.
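
As a rough sketch of how that two-handed flow might work under the hood, the snippet below resamples a drawn stroke, normalizes it for position and scale, and ranks stored templates by similarity to produce the candidate list the off hand would pick from. Everything here is hypothetical (no real VR SDK types, the drawing volume’s depth axis is ignored, and the index-based resampling is a simplification of proper arc-length resampling); it’s just one way the matching could be approximated.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Hypothetical sketch of stroke-to-candidate matching for the two-handed drawing flow. */
public class GestureRecognizer {

    record Point(double x, double y) {}
    record Template(String name, List<Point> points) {}

    private static final int SAMPLE_COUNT = 32;
    private final List<Template> templates = new ArrayList<>();

    /** Store a named template, pre-normalized so comparisons ignore position and scale. */
    public void addTemplate(String name, List<Point> stroke) {
        templates.add(new Template(name, normalize(stroke)));
    }

    /** Rank every template by similarity to the drawn stroke: the candidate list for the off hand. */
    public List<String> candidates(List<Point> stroke) {
        List<Point> drawn = normalize(stroke);
        return templates.stream()
                .sorted(Comparator.comparingDouble(t -> distance(drawn, t.points())))
                .map(Template::name)
                .toList();
    }

    /** Resample to a fixed point count (by index, a simplification of arc-length resampling),
     *  then translate to the centroid and scale into a unit box. */
    private static List<Point> normalize(List<Point> stroke) {
        List<Point> pts = new ArrayList<>();
        for (int i = 0; i < SAMPLE_COUNT; i++) {
            pts.add(stroke.get(i * (stroke.size() - 1) / (SAMPLE_COUNT - 1)));
        }
        double cx = pts.stream().mapToDouble(Point::x).average().orElse(0);
        double cy = pts.stream().mapToDouble(Point::y).average().orElse(0);
        double maxX = pts.stream().mapToDouble(p -> Math.abs(p.x() - cx)).max().orElse(1);
        double maxY = pts.stream().mapToDouble(p -> Math.abs(p.y() - cy)).max().orElse(1);
        double scale = Math.max(Math.max(maxX, maxY), 1e-9);
        return pts.stream().map(p -> new Point((p.x() - cx) / scale, (p.y() - cy) / scale)).toList();
    }

    /** Average point-to-point distance between two equal-length normalized strokes. */
    private static double distance(List<Point> a, List<Point> b) {
        double sum = 0;
        for (int i = 0; i < a.size(); i++) {
            sum += Math.hypot(a.get(i).x() - b.get(i).x(), a.get(i).y() - b.get(i).y());
        }
        return sum / a.size();
    }

    public static void main(String[] args) {
        GestureRecognizer rec = new GestureRecognizer();
        // A crude "L"-shaped template and a crude horizontal-stroke template.
        rec.addTemplate("for-loop", List.of(new Point(0, 0), new Point(0, 1), new Point(1, 1)));
        rec.addTemplate("assign",   List.of(new Point(0, 0), new Point(1, 0)));
        // A slightly wobbly "L" drawn by the user: "for-loop" should appear first in the list.
        System.out.println(rec.candidates(
                List.of(new Point(0, 0.05), new Point(0.05, 1), new Point(1, 1.1))));
    }
}
```

In practice the input would be the 3D stroke traced by the dominant hand’s controller, and the ranked candidates would be laid out on a palette for the other hand, much like the iPhone’s handwriting candidate bar.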

Identifying Different Logical Structures

As in most programming languages, a core set of gestures would be reserved and could not be overridden. These would include gestures identifying if statements, for loops, and so on.

Some of these gestures could be attached to new gestures, much like keywords are prepended to method headers or variable declarations in Java. Linguistically, this is comparable to how the character “ma” (吗) is appended to the end of a sentence in Chinese to convert a statement into a question. Modifier gestures like these could describe functions, data types, classes, and any other programming structure that requires some sort of qualification.

One possible class header.
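
To make the keyword analogy concrete, here is a hedged sketch of how a small reserved-gesture set and a handful of modifier gestures could compose into a declaration, roughly the way “public final class” is prepended in Java or 吗 is appended in Chinese. All of the names below (the enum values, the class name in the demo) are invented for illustration, not part of any existing system.

```java
import java.util.EnumSet;
import java.util.Set;

/** Hypothetical grammar: reserved gestures plus modifier gestures composed into declarations. */
public class GestureGrammar {

    /** Core gestures that, like language keywords, cannot be redefined by the user. */
    enum Reserved { IF, FOR, WHILE, CLASS, FUNCTION, RETURN }

    /** Modifier gestures attached to another gesture to qualify it, the way "public static"
     *  qualifies a Java method or "ma" turns a Chinese statement into a question. */
    enum Modifier { PUBLIC, PRIVATE, STATIC, FINAL, ABSTRACT }

    /** A declaration: a reserved gesture, its attached modifiers, and the user-drawn name gesture. */
    record Declaration(Set<Modifier> modifiers, Reserved kind, String name) {
        @Override public String toString() {
            return modifiers + " " + kind + " " + name;
        }
    }

    /** Reject user-defined gestures whose recognized label collides with a reserved gesture. */
    static void requireNotReserved(String label) {
        for (Reserved r : Reserved.values()) {
            if (r.name().equalsIgnoreCase(label)) {
                throw new IllegalArgumentException("'" + label + "' is a reserved gesture");
            }
        }
    }

    public static void main(String[] args) {
        // One possible class header: PUBLIC and FINAL modifiers attached to the CLASS gesture.
        Declaration header = new Declaration(EnumSet.of(Modifier.PUBLIC, Modifier.FINAL),
                                             Reserved.CLASS, "SnowballPhysics");
        System.out.println(header);             // [PUBLIC, FINAL] CLASS SnowballPhysics

        requireNotReserved("SnowballPhysics");  // fine: not a reserved gesture
        try {
            requireNotReserved("for");          // collides with the FOR gesture
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // 'for' is a reserved gesture
        }
    }
}
```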

Connecting the Dots

Finally, none of this actually means anything if you can’t connect parameters to functions, instantiate classes and use them as inputs, and add methods to classes. To solve this, some of the aforementioned gestures could be used to signify inputs and outputs. Then, when a user calls a function gesture, they can drag lines from other gestures into it to serve as inputs.
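
One way to model that dragging step, sketched below under the assumption of a simple dataflow representation (none of these types come from an existing library), is to treat each function gesture as a node and each dragged line as an edge from one gesture’s output into another’s inputs; evaluating the result is then just a walk over the connections.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

/** Sketch of the "drag a line between gestures" idea as a tiny dataflow graph. All types are hypothetical. */
public class GestureDataflow {

    /** A node is a function gesture; its inputs are the lines dragged into it. */
    static class Node {
        final String label;
        final List<Node> inputs = new ArrayList<>();
        final Function<List<Double>, Double> op;

        Node(String label, Function<List<Double>, Double> op) {
            this.label = label;
            this.op = op;
        }

        /** Dragging a line from `source` into this node registers it as the next input. */
        void connect(Node source) {
            inputs.add(source);
        }

        /** Evaluate recursively: resolve every incoming line, then apply this node's operation. */
        double evaluate() {
            List<Double> args = inputs.stream().map(Node::evaluate).toList();
            return op.apply(args);
        }
    }

    public static void main(String[] args) {
        // Two literal gestures and an "add" function gesture.
        Node two   = new Node("2",   ignored -> 2.0);
        Node three = new Node("3",   ignored -> 3.0);
        Node add   = new Node("add", xs -> xs.get(0) + xs.get(1));

        // The user drags one line from each literal into the add gesture.
        add.connect(two);
        add.connect(three);

        System.out.println(add.label + " = " + add.evaluate()); // add = 5.0
    }
}
```

The same wiring would cover instantiation (a constructor gesture whose output line feeds a method gesture) without changing the underlying model.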

Lots of Foundation to Build

Of course, this musing glosses over a lot of the foundation that would need to be built, and leaves much to be desired in terms of programming rigor. Some things I look forward to exploring are methods for searching and cataloging gestures and symbols, 3D gestures, stroke order, the difficulty of onboarding onto an existing code base, using voice recognition to dictate comments into code without having to type them out, and autocomplete in VR. Let me know what you guys think!