Rectangles Are Dead: My Epiphany with Spatial Computing

By Bharat Sharma · 8:34 pm, 12 December 2025

For the last forty years, our relationship with the digital world has been defined by rectangles. We stare at them on our desks, we hold them in our hands, and we mount them on our walls. We have always been observers looking through a window, separated from the digital realm by a pane of glass.

But recently, that glass shattered.

About three months ago, I finally bought a Meta Quest 3. I’d been eyeing it for a while—enticed by the promise of mixed reality and driven by my own obsession with graphics rendering—but I couldn’t justify the cost. That changed when I won the grand prize at a weekend hackathon. I didn’t save the cash; I didn’t invest it. I did what any self-respecting tech nerd would do: I immediately bought the flagship consumer headset to see if the hype was real.

It wasn’t just real. It was a paradigm shift.

After living inside this headset for a quarter of a year, I am convinced we are witnessing the end of the “flat screen” era. Here is my technical analysis of why spatial computing has finally arrived, and why it feels like magic.

The Invisible Threshold: The Carmack Legacy

It is impossible to discuss modern VR/MR without acknowledging John Carmack. To me, he is the Isaac Newton of this “Golden Age.” While he has since moved on to AGI, his tenure as CTO of Oculus established the rigorous engineering standards that make the Quest 3 possible today.

Carmack famously obsessed over minimizing “motion-to-photon” latency: the time between a head movement and the corresponding photons leaving the display. In the early days, the disconnect between your inner ear (vestibular system) and your eyes caused instant nausea. The engineering challenge was to get total system latency under 20 milliseconds—the threshold below which the brain stops perceiving lag and accepts the simulation as reality.
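To make that threshold concrete, here is a back-of-the-envelope way to think about the motion-to-photon budget. The stage names and timings below are hypothetical round numbers I made up for illustration, not measurements of Meta's actual pipeline:

```python
# Illustrative motion-to-photon budget. Stage names and timings are
# hypothetical round numbers, NOT measurements of the Quest 3 pipeline.
PIPELINE_MS = {
    "IMU sample + pose prediction":  1.5,
    "app CPU frame submit":          2.0,
    "GPU render":                    9.0,  # most of a 90 Hz frame (~11.1 ms)
    "compositor / reprojection":     2.0,
    "display scanout + persistence": 4.0,
}

total_ms = sum(PIPELINE_MS.values())
print(f"total motion-to-photon: {total_ms:.1f} ms")  # 18.5 ms, under 20
```

The point of the exercise: at 90 Hz the GPU alone eats most of an 11 ms frame, so every other stage has to be shaved to the bone for the total to stay under the comfort threshold.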

Using the Quest 3, you can feel the thousands of hours of optimization that went into its reprojection (“Asynchronous TimeWarp”) and prediction algorithms. When you turn your head, the display doesn’t just render a new frame; it re-projects the last known frame based on your predicted head pose to fill the gap before the GPU finishes the next render. It is a cheat, but it is a perfect one. It creates a “frictionless” experience where the hardware disappears, and you are simply present.

Reality, Meshed: The “Magic” of Depth API

As a programmer, I have a deep appreciation for the computational heaviness of what this OS is doing in the background.

The specific “magic” of the Quest 3 isn’t the screen resolution—it’s the Depth API and Scene Mesh. When you set up the device, it doesn’t just “see” your room; it understands the geometry. It uses computer vision to generate a low-poly mesh of your physical furniture, walls, and floor in real time.

This allows for Dynamic Occlusion. If a virtual ball rolls behind my physical couch, it actually disappears. The device knows the couch is in front of the digital object. This sounds simple to a layperson, but computationally, calculating that z-depth relationship on a mobile chip while maintaining 90fps is a stunning feat of engineering. It blurs the line between a “video game” and a physical simulation.
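Stripped of all the hard computer-vision work, the core of the trick is a per-pixel depth comparison. A minimal sketch, with a function name and RGBA convention of my own invention rather than the actual Depth API:

```python
from typing import Optional, Tuple

RGBA = Tuple[int, int, int, int]

def composite_pixel(virtual_rgba: RGBA, virtual_depth_m: float,
                    env_depth_m: float) -> Optional[RGBA]:
    """Dynamic-occlusion sketch: show the virtual pixel only if it is
    closer to the camera than the real-world surface at that pixel.

    virtual_depth_m -- depth of the rendered virtual fragment (meters)
    env_depth_m     -- depth of the physical scene at the same pixel,
                       e.g. sampled from an environment depth map
    """
    if virtual_depth_m < env_depth_m:
        return virtual_rgba   # virtual ball is in front: visible
    return None               # couch is in front: pixel occluded
```

The difficulty isn’t this comparison; it’s that the compositor must run it for millions of pixels per frame against a depth estimate that is itself being rebuilt continuously from camera data, all within the 90fps budget.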

Gaming Transformed: The Shift to Simulation

This isn’t the death of gaming; it is a fundamental transformation. We are moving from abstract inputs—pressing ‘A’ to jump—to 1:1 physical interaction. The line between “playing a game” and “performing an activity” is dissolving.

1. Haptic Fidelity (Eleven Table Tennis)

This app is the benchmark for how haptics can trick the brain. It is genuinely addictive, not because of gamification, but because of the physics. The haptic feedback when the ball connects with the paddle is so precise—sub-millisecond timing—that you can “feel” the spin. It ceases to be a video game and becomes a legitimate physical sport played in a digital volume.

2. Immersive Art (Theatre Elsewhere)

I was thoroughly flabbergasted by Theatre Elsewhere. It’s not a game; it’s a gallery of living, breathing art. The app uses volumetric capture and spatial animation to let you walk inside a painting. It proves that art in the future won’t be something you look at; it will be a place you visit.

3. The Flow State (Beat Saber)

While a staple of the platform, Beat Saber represents a unique connection between music, motion, and visual stimulus. It taps into human kinetics in a way that creates an immediate “flow state,” something impossible to replicate on a 2D monitor.

The Form Factor Gap: Headsets vs. Glasses

However, despite the technical marvel of the Quest 3, there is an elephant in the room: accessibility.

VR is deep, but it is not yet scalable to the general public in the way smartphones are. The friction of putting on a “face brick,” messing up your hair, and isolating yourself from the world is a barrier that even the best passthrough technology hasn’t fully solved.

This is where Meta is winning on a second front. The Meta Ray-Ban smart glasses are arguably the more important product for mass adoption. They have zero friction. You put them on because they are glasses first and computers second. While the Quest offers maximum immersion, the smart glasses offer maximum accessibility.

The future likely isn’t one or the other, but a convergence. But right now, if we are talking about what technology will reach billions of users first, the smart glasses have the clear edge in scalability, while VR remains the champion of depth.

The Glass Age: Design in 2025

This shift toward spatial computing isn’t happening in a vacuum; it is bleeding into the entire design philosophy of the tech industry.

For the last decade (2014–2024), we lived in the era of “Flat Design”—minimalist, 2D, and sterile. But look at the market now. We are seeing a massive resurgence of Glassmorphism.

Apple’s iOS 26 design language is heavily leaning into depth, volumetric shadows, and frosted glass textures. Even Microsoft has pivoted, redesigning its application icons with 3D elements and ditching the flat squares of the Windows 10 era.

The industry is collectively realizing that humans are 3D creatures who evolved to understand depth, lighting, and texture. Our software interfaces are finally catching up to our biology. We are moving away from sterile flat surfaces and returning to rich, tactile, depth-based interfaces.

Final Thoughts

From the moment I set up the headset out of the box, it was clear this is an otherworldly experience. The applications are boundless. With the advent of lighter form factors like the Meta Ray-Bans, we are inching closer to a world where the computer isn’t a device we carry, but a layer of reality we inhabit.

I bought this headset with hackathon money expecting a toy. I ended up with a glimpse of the next decade of human-computer interaction. The rectangles are dead; long live the mesh.