AWS Spatial Computing Blog
Getting Started with Vision Pro and AWS
Learn how to start developing apps now for the upcoming Apple Vision Pro headset using Xcode and Amazon Web Services (AWS). This post walks through creating a basic visionOS app, connecting it to Amazon Simple Storage Service (Amazon S3), and dynamically loading 3D model assets into your scenes. In this post you will learn:
- Setting up the development environment with the Xcode beta on macOS
- Adding the AWS SDK to connect your Vision Pro app to AWS cloud services
- Storing credentials outside your code using Xcode schemes
- Displaying a static 3D model in the simulator
- Downloading a model dynamically from S3 at runtime
- Using Swift and RealityKit to render the S3 model in the scene
By the end, you’ll have a basic Vision Pro app that can retrieve and display 3D models from the cloud. This sets the foundation for building more advanced spatial computing apps leveraging the power of AWS cloud services.
Opportunity
The headset will not be available to consumers until “early next year (2024),” but the SDK and developer experience are available now. This gives developers a several-month head start to create the programs and applications that will debut with the Vision Pro.
One of the top challenges with a portable form factor is balancing on-device performance and deciding which workloads you can move off the device to enhance the user experience. Some notable ways you could enhance the experience by leveraging AWS in your application include:
- Dynamic Content – By storing some assets off device, you enable smaller app downloads and faster install and launch times, reaching out for additional content only when needed. Content stored off device can also be updated regularly without requiring the user to download a new version of your app or reinstall it.
- User Authentication – You can leverage AWS identity and federation offerings to manage user credentials and accounts. If you are using AWS for other services, you can use the same user information to secure access to functions and assets in AWS.
- Data Storage – You can store profile information, in-app assets, and entitlements in AWS services, giving your users a seamless experience no matter which device they use, and extend that to web and mobile companion applications.
- Machine Learning – The Vision Pro has on-device chips for local neural processing, but if you want to run ML models against a larger data set, address data from all your users, or leverage a model that is not well suited to a mobile form factor, AWS has a portfolio of services that let you offload those workloads from the device.
- AWS Game Services and More – In addition to those use cases, AWS has several tools tailored for game and application developers that enable in-game leaderboards, in-app messaging, streaming of video and 3D assets, application logging, user interaction tracking, and many other high-value features with less code than building them from scratch.
For this blog post, you will learn how to implement the first use case, dynamically loading content; the other use cases are left as an exercise for the reader.
Pre-requisites
The Setup
1. Update macOS and install a version of Xcode that supports Vision Pro development
For the Apple development environment, I need a Mac laptop or desktop, Xcode, and a couple of critical libraries, including RealityKit, which is based on concepts similar to the existing iOS ARKit. I will need macOS 13.4 or higher, a current subscription to the Apple Developer Program ($99/year), and the latest version of the Xcode beta with the Vision Pro additional download selected.
Tip: I will need to update macOS and install a beta version of Xcode to create a Vision Pro app
2. Download the AWS SDK and CLI and install them on my Mac
Once I have installed the correct version of the OS and Xcode, I will be able to develop Vision Pro applications. But to connect to AWS and leverage the power of the cloud in my app, I will need to install a couple of additional items so that I can call cloud functions from the headset.
First, I need to add the AWS SDK for Swift so my code can communicate with AWS. Then I will install the AWS CLI to perform some development functions more efficiently.
AWS SDK for Swift
The main tool I need to communicate with AWS services from Swift code is the AWS SDK for Swift. The details of that install are well documented outside this blog post; the getting started section of the official page is a great resource.
Tip: For a more detailed walkthrough of using the AWS SDK for Swift in a generic Swift app, see the official AWS documentation on the AWS SDK for Swift
AWS CLI
For the SDK to have the most functionality, I am going to install the AWS CLI. To do this, follow the AWS CLI install and update instructions for macOS.
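Once the install finishes, a quick sanity check from the terminal confirms the CLI is available and working. These are the standard AWS CLI commands for this; the exact version string printed will vary by release:

```shell
# Confirm the CLI is installed and on the PATH
aws --version

# Configure a default profile; this prompts for an access key,
# secret key, default region, and output format
aws configure

# Verify the credentials work by listing your S3 buckets
aws s3 ls
```

Note that `aws configure` and `aws s3 ls` require valid AWS credentials, which are created later in this post.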
Solution
Creating Initial Application Scaffolding
Step 1. Create a Vision Pro Application
To get started, create a project in Xcode:
- Launch Xcode
- Choose Create New Project from the main splash screen menu.
- Choose the visionOS tab in the new project dialog window.
- Choose App from the Application section
Tip: If you don’t see visionOS as an option, validate that you launched the latest/beta version of Xcode that supports visionOS.
- Next, fill out the project metadata. Some of this will come from your Apple Developer account; the rest you can populate as you see fit.
- From Initial Scene, choose Window
Quick Walkthrough of the Vision Pro App and Using the Simulator
Now that I have some default shell code in my IDE, I can validate the build, the preview, and the connection to the Vision Pro simulator.
In the Xcode IDE window I should see a “Preview” panel. When the code is loaded I will get a preview of the scene.
This code does not have an app icon set up and doesn’t have any visual content other than a placeholder frame, so the preview should be fairly nondescript.
The preview has more limited interaction than the simulator shown later, and displays only basic geometry. To get a more accurate idea of what this app will look like, how it will behave, and what interactions will feel like in the headset, I will want to launch the simulator.
Step 2. Launch the Simulator
- On the menu bar, under Xcode choose Open Developer Tools, Simulator
Once the Simulator is launched, I will have a view of what the app will look like in the headset. In this simulator, I can move in all directions, and also rotate and pan the viewer’s angle to simulate how the application will react to the user moving around a room. Here the rooms are synthetic, but can represent a typical apartment, room in a house, or office space.
To see my code in action, I can click the Run option from the IDE screen, and I will see my app install and launch in the Simulator.
If I have not modified the code, it will build cleanly.
1st Checkpoint: Create App Shell
At this point I have reached my first checkpoint.
- I have created a default Vision Pro App.
- I have confirmed the simulator works.
Adding in AWS Libraries to Scaffolding
Step 3. Add the AWS SDK to the Project
- From the menu bar, choose File, Add Package Dependencies…
- Add the AWS SDK for Swift, and wait for it to download.
Once that completes, I will need to identify the modules I want to use and which project to link the package to. For this blog, my project is named Vision Pro to AWS and I want to access S3 later in the code.
- In the Add To Target column, in the cell next to AWSS3, choose your program name (Vision Pro to AWS).
- Then choose Add Package.
Configuring the AWS Secrets in the Project in Xcode
In addition to the SDK, I will need some specific data to connect to AWS and get the assets from S3.
I need somewhere to store those credentials, or project secrets, so they can be used in the code to call AWS but are not stored in the code itself. I may also want them to differ per deployment build, for testing or for beta/production use with real users.
Tip: Never store secrets in source code
To do that for this demo, I will make use of the scheme tools built into Xcode.
Step 4. Configure Project Secrets
- Open the Scheme dialog from the menu bar under Product, choose Scheme, Edit Scheme…
Once I am in that menu, I will want to access the variables as the code is running in the simulator and later on the headset.
- Choose Run from the left panel
- Choose the Arguments tab
Next, I will add two environment variables, named as follows:
AWS_SECRET_ACCESS_KEY
AWS_ACCESS_KEY_ID
- Find the Environment Variables section
- Click the ‘+’ icon
- Enter the variable name in the Name column
- Repeat for each variable name
Once those are created, I can set the values to the secret information. For now I will put in placeholder data, and I will update them later after I have created the S3 buckets and gotten the proper tokens.
- Enter placeholder data in the Value column
To load the variables at runtime, you can use the following lines of code in your app:
Add some imports at the top of the file to reference the AWS libraries.
import AWSS3
import Foundation
import AWSClientRuntime
And after that, pull in the secret key information from the environment variables.
let accessKeyId = ProcessInfo.processInfo.environment["AWS_ACCESS_KEY_ID"]
let secretAccessKey = ProcessInfo.processInfo.environment["AWS_SECRET_ACCESS_KEY"]
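Since `ProcessInfo` returns optionals, one safer pattern is to read both variables up front and fail fast with an actionable message when they are missing, rather than force-unwrapping later. This is a sketch; `loadAWSCredentials` is a hypothetical helper name, not part of the SDK:

```swift
import Foundation

// Hypothetical helper: read both scheme variables up front and stop with a
// clear message if either is missing, instead of crashing on a later unwrap.
func loadAWSCredentials() -> (accessKeyId: String, secretAccessKey: String) {
    let env = ProcessInfo.processInfo.environment
    guard let accessKeyId = env["AWS_ACCESS_KEY_ID"],
          let secretAccessKey = env["AWS_SECRET_ACCESS_KEY"] else {
        fatalError("AWS credentials not set - check Product > Scheme > Edit Scheme…, Run, Arguments")
    }
    return (accessKeyId, secretAccessKey)
}
```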
2nd Checkpoint: Adding AWS Libraries and Config information
At this point I have everything I need in place to start developing my code and connect to AWS.
- I have installed the components and libraries I need.
- I have set up all the dependencies to packages that I need.
- I have stored the secrets I need.
- And I have imported the references I need to connect from the Vision Pro to S3.
Now I can move on to writing the code.
The Code
Writing the Code to Show a Static Model
Now, of course, I don’t need to load a model from AWS to show content in the headset. So the first thing I am going to do is get a generic 3D model. There are plenty of locations on the internet to find a suitable 3D model. For this example, our team has an appropriately licensed model of the Vision Pro in the USDZ format. If the team did not already have a USDZ version, I could use the Reality Composer application included in the Xcode install to convert a supported format to the USDZ format needed by RealityKit.
Once I have the model, I add it to the bundle by dropping it into the root folder of the application. I can create additional bundles and add assets there, but that is out of the scope of this post. Assets in the main app bundle can then be referenced by name without extension.
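As a quick sanity check that the asset actually made it into the app target, you can look it up in the main bundle at runtime. This uses the standard `Bundle.main.url(forResource:withExtension:)` API; the file name matches the example model used below, so substitute your own:

```swift
import Foundation

// Look up the bundled USDZ by name; a nil result usually means the file was
// dropped into the folder but not added to the app target's membership.
if let modelURL = Bundle.main.url(forResource: "Apple_vision_pro", withExtension: "usdz") {
    print("Found bundled model at \(modelURL.path)")
} else {
    print("Model not in bundle - check Target Membership in the File inspector")
}
```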
For USDZ models, I can also click on the asset in the folder structure and get a preview of the model.
This model’s scale is in millimeters, but the code expects models measured in meters. That means the unmodified model will look off compared to the real-world scale of the headset. I will also tweak the scale slightly so the model is easy to see and interact with in my window. I will make a note of that and correct it in the next step.
Now that I have a model in the app bundle, it is a few lines of code to add the model to the view and render it in my scene.
struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Load the model by the file name added to the application bundle, without extension
            let robot = try? await ModelEntity.load(named: "Apple_vision_pro")
            // Convert from mm to m scaling, with a slight tweak to make it fit better in my window
            robot!.transform.scale = [0.0005, 0.0005, 0.0005]
            // Finally, add the scaled model into my scene content
            content.add(robot!)
        }
    }
}

#Preview {
    ContentView()
}
And there you have it: I am loading and rendering content into my application’s scene.
3rd Checkpoint: Basic VisionOS Code added
- Loaded Model Asset from Disk
- Added Asset to my Scene and Rendered it.
Enhancing the code to Dynamically retrieve content from an S3 Bucket
Now, a static piece of content meets a lot of development needs, but there are scenarios where I don’t want the content for my application to be added at build time and would rather have it loaded dynamically at runtime. Some reasons may be that I need to update or add content on an ongoing basis, or that the developers will not have access to the final content until well into the development process. Either way, I will need an easy location on the internet to store my content and a seamless way to pull it into my application. This is where I can leverage AWS to keep my app updated with fresh content. Let’s walk through a simple example of retrieving content from AWS inside my Swift code with the tools I have already downloaded and installed.
Setup an S3 Bucket
The first step is to create an S3 bucket from the AWS Management Console. Detailed steps can be found in the S3 documentation. For this example, the bucket I create needs to meet these requirements:
- It will be a bucket that is only used for this demo, and will not store any other content.
- It will not have public access through HTTP/S.
- It will only have one user/role with read-only access for reading the model I will upload.
For a production use case you may need a more complex setup, but this locked-down bucket will be sufficient to access S3 from inside the Vision Pro.
Based on the bucket I create, I will need to update my code with the following values:
region = ""
bucketName = ""
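If you prefer the CLI over the console, a sketch of creating a bucket that meets the requirements above might look like the following. The bucket name and region are placeholders; bucket names must be globally unique and lowercase:

```shell
# Placeholder values - substitute your own unique bucket name and region
BUCKET=visionpro-assets-examplebucket
REGION=us-west-2

# Create the bucket (LocationConstraint is required outside us-east-1)
aws s3api create-bucket \
    --bucket "$BUCKET" \
    --region "$REGION" \
    --create-bucket-configuration LocationConstraint="$REGION"

# Block all public access, per the requirements above
aws s3api put-public-access-block \
    --bucket "$BUCKET" \
    --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```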
Load sample model into S3
Here I can choose Upload and place a model into the bucket; it will be used later to change what is displayed to the user. An important note on naming files for S3: the allowed naming conventions are different than on a Mac and are more in line with allowed URL characters, so you may need to rename your file to have it upload cleanly. Details on naming and on creating S3 objects can be found in the S3 documentation. Once I have uploaded the file, I will make note of its name and add it to my code as well.
objectKey = ""
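The upload can also be done from the CLI. This sketch assumes the bucket from the previous step and a local USDZ file; the file name is a placeholder matching the example used later:

```shell
# Upload the model; S3 keys should stick to URL-safe characters
aws s3 cp ./Small_Sedan_Example_File.usdz s3://"$BUCKET"/

# Confirm the object landed, and note its exact key for the code
aws s3 ls s3://"$BUCKET"/
```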
Setup Access keys/Secrets on AWS
Detailed instructions for IAM can be found in the official AWS IAM documentation. For this post I am going to cover the high-level things I am doing to enable the Vision Pro to connect to the S3 content.
For this demo, I am going to set up a read-only user and get an access key to use for accessing that S3 bucket.
From IAM users in the console, I am going to create a new user, and when I get to permissions I will attach only one policy, S3 read-only access. If I had more than one bucket, I would limit the policy’s access to just the models bucket. Please reference the Using IAM user and role policies documentation for more information.
Once that user is created, I am going to create an access key for accessing the S3 bucket as this read-only user. Please reference the Managing access keys for IAM users documentation for detailed instructions.
When I create the access key, I will want to protect that information. Those two values will be used in the two scheme variables I added in Step 4 above.
accessKeyId = ""
secretAccessKey = ""
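For reference, the same user and key setup can be sketched with the CLI. The user name is a placeholder, and note that this attaches the broad AmazonS3ReadOnlyAccess managed policy, so scope it down for anything beyond a demo:

```shell
# Placeholder user name for the demo
USER_NAME=visionpro-demo-readonly

# Create the user and grant read-only S3 access
aws iam create-user --user-name "$USER_NAME"
aws iam attach-user-policy \
    --user-name "$USER_NAME" \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Create the access key; the SecretAccessKey in the output is shown only once,
# so copy both values into the Xcode scheme variables immediately
aws iam create-access-key --user-name "$USER_NAME"
```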
4th Checkpoint: Set-up AWS Side Complete
- Created S3 bucket
- Added New Model to S3 bucket
- Created User with Read-Only access to the Bucket
- Created an Access Key to use in my application to access the bucket as that user
Connect from the Vision Pro code to AWS assets and APIs using secure credentials
Now comes the fun part, putting all the parts together.
To pull the model from the cloud inside the code, I need to make the following changes to the code I used for the static model. I already have the imports from earlier, so the first part of my code should look like this:
import SwiftUI
import RealityKit
import RealityKitContent
import AWSS3
import Foundation
import AWSClientRuntime
Update code
And my variable definitions at the top of my main loop should look like this:
let accessKeyId = ProcessInfo.processInfo.environment["AWS_ACCESS_KEY_ID"]
let secretAccessKey = ProcessInfo.processInfo.environment["AWS_SECRET_ACCESS_KEY"]
let region = "us-example-aa"
let bucketName = "visionpro-assets-exampleBucket"
let objectKey = "Small_Sedan_Example_File.usdz"
Where objectKey is the name of the new model I uploaded to replace the static headset model I used earlier. Then I need to create a function to handle connecting to S3, downloading the file, and finally getting a reference to the downloaded file:
func downloadFile(bucket: String, key: String, localFolderName: String) async throws {
    let accessKeyId = ProcessInfo.processInfo.environment["AWS_ACCESS_KEY_ID"]
    let secretAccessKey = ProcessInfo.processInfo.environment["AWS_SECRET_ACCESS_KEY"]
    do {
        // Set up AWS credentials, connection config, and client
        let credentialsProvider = try StaticCredentialsProvider(Credentials(accessKey: accessKeyId!, secret: secretAccessKey!))
        let configuration = try S3Client.S3ClientConfiguration(region: region, credentialsProvider: credentialsProvider)
        let client = S3Client(config: configuration)
        // Test the bucket connection
        let inputObj = ListObjectsV2Input(
            bucket: bucket
        )
        let outputObj = try await client.listObjectsV2(input: inputObj)
        // Guard clause to validate that the bucket exists and has contents
        guard outputObj.contents != nil else {
            return
        }
        // Create a path to store models
        let baseDocumentPath = NSURL(fileURLWithPath: NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0])
        let modelPath = baseDocumentPath.appendingPathComponent(localFolderName)
        try FileManager.default.createDirectory(atPath: modelPath!.path, withIntermediateDirectories: true, attributes: nil)
        // Create the full file path to store the model
        let fileUrl = (modelPath?.appendingPathComponent(key))!
        // Create a reference to the file I am attempting to download
        let input = GetObjectInput(
            bucket: bucket,
            key: key
        )
        // Start the download into "output"
        let output = try await client.getObject(input: input)
        // Read data from the response body
        guard let body = output.body,
              let data = try await body.readData() else {
            return
        }
        // Write data to the local file location
        try data.write(to: fileUrl)
        // Update the downloadedFileURL
        downloadedFileURL = fileUrl
    } catch {
        print("Unexpected error: \(error).")
    }
}
Now I need to call that function from the main program:
@State private var downloadedFileURL: URL?

var body: some View {
    RealityView { content in
        // Download the new model from the AWS S3 bucket to local storage
        try? await downloadFile(bucket: bucketName, key: objectKey, localFolderName: "Downloaded Models")
        // Parse the model as a ModelEntity from local storage
        let robot = try? await ModelEntity.load(contentsOf: downloadedFileURL!)
        // Add the model to the view and render it
        content.add(robot!)
    }
}
If I have done everything correctly, when I run my code in the simulator now, I will see a new model that I am dynamically loading from AWS. Perfect!
5th Checkpoint: Extended app with AWS content
- Connected to AWS
- Retrieved Asset from S3 bucket
- Added S3 Sourced Asset to my Scene and Rendered the model.
Clean up
It is a good practice to delete resources that you are no longer using. By deleting a resource you’ll also stop incurring charges for that resource. To clean up the resources you created in this demo:
1. Delete the model file(s) and S3 Bucket(s). Please reference the Deleting a bucket documentation for detailed instructions.
2. Delete the IAM Access Keys. Please reference the Managing access keys for IAM users documentation for detailed instructions.
3. Delete the IAM User. Please reference the Managing IAM users documentation for detailed instructions.
Conclusion
In this blog post I have demonstrated the tools and packages, from both Apple and AWS, that allow me to start developing for the Vision Pro headset today and to start leveraging AWS services to make my applications more flexible and dynamic. There are many more advanced AWS services to enrich the user experience that you can explore using this same toolkit and approach. I look forward to seeing what you build.
Full code
import SwiftUI
import RealityKit
import RealityKitContent
import AWSS3
import Foundation
import AWSClientRuntime

struct ContentView: View {
    let accessKeyId = ProcessInfo.processInfo.environment["AWS_ACCESS_KEY_ID"]
    let secretAccessKey = ProcessInfo.processInfo.environment["AWS_SECRET_ACCESS_KEY"]
    let region = "us-example-aa"
    let bucketName = "visionpro-assets-exampleBucket"
    let objectKey = "Small_Sedan_Example_File.usdz"

    @State private var downloadedFileURL: URL?

    var body: some View {
        RealityView { content in
            // Download the new model from the AWS S3 bucket to local storage
            try? await downloadFile(bucket: bucketName, key: objectKey, localFolderName: "Downloaded Models")
            // Parse the model as a ModelEntity from local storage
            let robot = try? await ModelEntity.load(contentsOf: downloadedFileURL!)
            // Add the model to the view and render it
            content.add(robot!)
        }
    }

    func downloadFile(bucket: String, key: String, localFolderName: String) async throws {
        let accessKeyId = ProcessInfo.processInfo.environment["AWS_ACCESS_KEY_ID"]
        let secretAccessKey = ProcessInfo.processInfo.environment["AWS_SECRET_ACCESS_KEY"]
        do {
            // Set up AWS credentials, connection config, and client
            let credentialsProvider = try StaticCredentialsProvider(Credentials(accessKey: accessKeyId!, secret: secretAccessKey!))
            let configuration = try S3Client.S3ClientConfiguration(region: region, credentialsProvider: credentialsProvider)
            let client = S3Client(config: configuration)
            // Test the bucket connection
            let inputObj = ListObjectsV2Input(
                bucket: bucket
            )
            let outputObj = try await client.listObjectsV2(input: inputObj)
            // Guard clause to validate that the bucket exists and has contents
            guard outputObj.contents != nil else {
                return
            }
            // Create a path to store models
            let baseDocumentPath = NSURL(fileURLWithPath: NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0])
            let modelPath = baseDocumentPath.appendingPathComponent(localFolderName)
            try FileManager.default.createDirectory(atPath: modelPath!.path, withIntermediateDirectories: true, attributes: nil)
            // Create the full file path to store the model
            let fileUrl = (modelPath?.appendingPathComponent(key))!
            // Create a reference to the file I am attempting to download
            let input = GetObjectInput(
                bucket: bucket,
                key: key
            )
            // Start the download into "output"
            let output = try await client.getObject(input: input)
            // Read data from the response body
            guard let body = output.body,
                  let data = try await body.readData() else {
                return
            }
            // Write data to the local file location
            try data.write(to: fileUrl)
            // Update the downloadedFileURL
            downloadedFileURL = fileUrl
        } catch {
            print("Unexpected error: \(error).")
        }
    }
}