Spatial Computing 101
TL;DR — Vision Pro gives us the possibility of inserting a middle layer between ourselves and the always-changing, dynamic reality around us. This layer can help us understand, interpret, and react to the world — but we have yet to define new design patterns for it.

There are a lot of opinions about Apple’s new Vision Pro headset. Obviously it is very exciting whenever something new comes out of that company. Here are my thoughts on it, for what it’s worth.
First, the downsides. The price point is clearly ludicrous if you’re talking about consumers. There is a lack of clear use cases. Social isolation is still a thing, even if Apple have found an innovative way of reducing it.
There are also some upsides, like the technology itself. Clearly Apple have broken new ground in a number of areas.
But where I think it is most interesting is in the concept of “spatial computing”, a term that wasn’t actually coined by Apple, but by Simon Greenwold in 2003.
Adding a computational layer to the dynamic world
For me, the key to understanding AR, spatial computing, the Vision Pro… however you want to phrase it, is that this technology allows us to add an extra layer onto our dynamic world.
That means we can use computing power to help us interpret, understand, and react to what’s going on around us. It’s not so much an augmentation of reality as an augmentation of US: of our senses, of our perception, of our thinking.
Beware the direct transfer of existing tech
So in my view, it’s important to get this right. AR / spatial computing is not about having an infinite number of virtual screens floating around you in 3D space. On the contrary: that’s just a direct transfer of existing screen technology into a new domain, where screens aren’t required.
Let me underline that: when you’re using the Vision Pro, everything around you is “the screen”. So is it a given that we need screens at all? Why would you want to create virtual screens within this experience? There are reasons, for sure, like running existing software. But the question we need to ask ourselves is: is this what the Vision Pro, and AR in general, lends itself to? Is this what the technology affords?
It’s not that I would NEVER put on a Vision Pro and watch a movie in a virtual environment, for example while travelling. But typically this isn’t attractive to me, or to people in general. Watching a movie is a social thing first and foremost. I acknowledge that Apple have taken steps to minimize that distance via the see-through feature on the Vision Pro, but I don’t think it is enough.
So let’s think about the core use cases for this new technology. Again, what does the technology lend itself to? What are its strong suits? What is the one thing it can do that no other technology can?
I think the answer is that the Vision Pro, and AR in general, can help us understand, interpret, and react to the dynamic, always-changing world that surrounds us. The key value proposition is that you can add a middle layer between yourself and reality.
Some might also say that this is the most scary thing about it.
What we are going to need next is a set of new design patterns for applying the possibilities of the Vision Pro. I’m sure these will emerge over time, after a lot of experimentation, failures, and a few precious successes.
Thanks for reading
If you want to poke me about the opinions here, feel free to leave a comment :) Maybe you took something away from this that’s actually useful to you.
I’ve written a few other pieces that go into more detail on specific use cases for AR in general: