Learning to Program for the Apple Vision Pro

Learning to program for the Apple Vision Pro can be broken into 4 parts:

  • Learning Swift
  • Learning SwiftUI
  • Learning RealityKit
  • Learning Reality Composer Pro (RCP)

There are lots of good book and video resources for learning Swift and SwiftUI. Unfortunately, there is not so much for RealityKit or Reality Composer Pro.

If you want to go all out, you should get a subscription to https://www.oreilly.com/ for $499 per year. This gives you access to the back catalogs of many of the leading technical book publishers.

Swift

To get started on Swift, iOS 17 Programming for Beginners by Ahmad Sahar is pretty good, though the official Apple documentation will get you to the same place: https://developer.apple.com/documentation/swift

If you want to dive deeper into Swift after learning the basics, Swift Programming: The Big Nerd Ranch Guide (3rd edition) is a good read.

But if you want to learn Swift trivia to stump your friends, then I highly recommend Hands-On Design Patterns with Swift by Florent Vilmart and Giordano Scalzo.

SwiftUI

SwiftUI is the library you would use to create menus and 2D interfaces. There are two books that are great for this, both by Wallace Wang. Start with Beginning iPhone Development with SwiftUI and, once you get that down, continue, naturally, with Wallace's Pro iPhone Development with SwiftUI.
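To give a flavor of what SwiftUI code looks like, here is a minimal sketch of a 2D view: a label and a button wired together with state. The view and property names are my own invention for illustration.

```swift
import SwiftUI

// A minimal 2D interface: a button that updates a label.
// "GreetingView" is a hypothetical name, not from any of the books above.
struct GreetingView: View {
    @State private var tapCount = 0

    var body: some View {
        VStack(spacing: 12) {
            Text("Taps: \(tapCount)")
                .font(.title)
            Button("Tap me") {
                tapCount += 1
            }
        }
        .padding()
    }
}
```

The declarative style here (describe the view hierarchy, let the framework re-render when `@State` changes) is the core idea both of Wang's books build on.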

RealityKit

Even though Apple first introduced RealityKit in 2019, very few iPhone devs actually use it. So even though there are quite a few books on SceneKit and ARKit, RealityKit learning resources are few and far between. Udemy has a course by Mohammed Azam, Building Augmented Reality Apps in RealityKit & ARKit, that is fairly well rated. It's on my shelf but I have yet to start it.
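For a small taste of what RealityKit looks like in code, here is a sketch that builds a simple entity and anchors it in the world. This is a fragment, not a full app: actually displaying it requires an ARView or RealityView inside a running app, and the sizes and colors are arbitrary choices of mine.

```swift
import RealityKit

// Build a 10 cm blue cube entirely in code (no 3D assets needed).
let box = ModelEntity(
    mesh: .generateBox(size: 0.1),
    materials: [SimpleMaterial(color: .blue, isMetallic: false)]
)

// Anchor it half a meter in front of the world origin,
// then parent the cube to the anchor.
let anchor = AnchorEntity(world: [0, 0, -0.5])
anchor.addChild(box)
```

Everything in a RealityKit scene is an entity in a hierarchy like this, which is also the mental model you need for Reality Composer Pro below.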

Reality Composer Pro

Reality Composer Pro is sort of an improved version of Reality Composer and sort of something totally different. It is a tool that sits somewhere between the coder and the artist – you can create models (or “entities”) with it. But if you’re an artist, you are probably more likely to import your models from Maya or another 3D modeling tool. As a software developer, you can then attach components that you have coded to these entities.
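As a sketch of what "attaching components you have coded" can look like: RealityKit lets you define your own component types and set them on entities, including entities authored in Reality Composer Pro. The component name and the idea of a point value are hypothetical examples of mine, not anything from Apple's docs.

```swift
import RealityKit

// A custom component a developer might define in code.
// "PointValueComponent" is a made-up name for illustration.
struct PointValueComponent: Component {
    var points: Int = 10
}

// After loading an entity authored in Reality Composer Pro,
// attach the custom component to it.
func tagAsCollectible(_ entity: Entity) {
    PointValueComponent.registerComponent()
    entity.components.set(PointValueComponent(points: 25))
}
```

Game logic (a RealityKit System, or ordinary app code) can then query entities for this component, which is how the coder's half of the workflow meets the artist's half.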

There are no books about it and the resources available for the previous Reality Composer aren’t really much help. You’ll need to work through Apple’s WWDC videos and documentation to learn to use RCP:

WWDC 2023 videos:

Meet Reality Composer Pro

Explore materials in Reality Composer Pro

Work with Reality Composer Pro content in Xcode

Apple documentation:

Designing RealityKit content with Reality Composer Pro

About PolySpatial

If you are coming at this from the HoloLens or Magic Leap, then you are probably more comfortable working in Unity. Unity projects can, under the right circumstances, deploy to the visionOS Simulator; you should just need the visionOS support packages to get this working. PolySpatial is a tool that allows you to convert Unity shaders and materials into visionOS's native shader framework, and I think this is only needed if you are building mixed reality apps, not VR (fully immersive) apps.

In general, though, I think you are always better off going native for performance and features. While it may seem like you can use Unity PolySpatial to do in visionOS everything you are used to doing on other AR platforms, these tools ultimately sit on top of RealityKit once they are deployed. So if what you are trying to do isn't supported in RealityKit, I'm not sure how it would actually work.