Get started and learn how to make your first ARKit application

Instead of creating a scene with an asset, you can create an SCNNode and then place that node onto the sceneView at a specific point.

We are using nodes here instead of SCNScene because an SCNScene object occupies the entire sceneView, but we want our model at a specific point of the scene.

To create the SCNNode we first load a temporary SCNScene with an asset and then save the scene's child node as the node we are going to use.

We do this because you can't initialise a node with an asset, but you can load a node from a loaded scene if you search for the node by name.

Be careful when loading the scene: it takes a few seconds and some processing power, so I recommend doing it on load and storing the nodes you want to show on the scene.

Note that AssetName here is not the file name of the asset but rather the node name of the model itself.
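A minimal sketch of that loading step (the file name art.scnassets/model.scn and node name AssetName below are placeholders for your own asset):

```swift
import SceneKit

// Load a temporary scene from the asset file, then pull out the model's node.
// "art.scnassets/model.scn" and "AssetName" are placeholder names; use your
// own asset file and the node name shown in Xcode's Scene Graph view.
func loadModelNode() -> SCNNode? {
    guard let tempScene = SCNScene(named: "art.scnassets/model.scn") else { return nil }
    // recursively: true searches the whole node hierarchy, not just direct children.
    return tempScene.rootNode.childNode(withName: "AssetName", recursively: true)
}
```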

You can find what node name your model has just by opening the .dae or .scn file in Xcode and toggling the Scene Graph view, which will reveal the layer list of the file.

How to set up the name of the node on the scene

After getting the node, the next step is adding it to the scene.

We found two different ways to do it, and choosing one or the other depends on how you want your app to work.

First, we need to know where to render our model within the 3D world.

For our demo we get the location from the user's tap CGPoint in the touchesBegan method, getting the closest AR point to the 2D touch location and translating it into a float4x4 matrix with the worldTransform method.
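A minimal sketch of that touch handling, assuming the view controller owns an ARSCNView outlet called sceneView:

```swift
import ARKit

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    // 2D point of the tap inside the sceneView.
    let location = touch.location(in: sceneView)
    // Hit-test against feature points to find the closest 3D match (see below).
    guard let hit = sceneView.hitTest(location, types: .featurePoint).first else { return }
    // float4x4 matrix describing the real-world position of the tapped point.
    let transformHit = hit.worldTransform
    // ... use transformHit to place the model or create an anchor
}
```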

The location variable we are getting from the above example is a 2D point which we need to position in the 3D AR scene.

This is where the Feature Points mentioned above come into play.

They are used to extrapolate the z-coordinate of the anchor by finding the closest Feature Point to the tap location.

```swift
sceneView.hitTest(location, types: .featurePoint)
```

You can also use the cases .existingPlaneUsingExtent and .estimatedHorizontalPlane to get the positions of the planes when using planeDetection.

This method gives us an array of the closest ARHitTestResult objects, sorted by increasing distance from the tap location.

The first result of that array is therefore the closest point.

We can then use let transformHit = hit.worldTransform, which returns a float4x4 matrix of the real-world location of the 2D touch point.

Plane Detection

Now that we have the location of the touch in the 3D world, we can use it to place our object.

We can add the model to the scene in two different ways; choosing one over the other depends on how we have set up our ARSession and whether we have planeDetection enabled.

That is because if you run your configuration with planeDetection enabled, set to either horizontal or vertical detection, the ARSCNView will continuously detect the environment and render any changes into the sceneView.

When you run a world-tracking AR session whose planeDetection option is enabled, the session automatically adds to its list of anchors an ARPlaneAnchor object for each flat surface ARKit detects with the rear-facing camera.

Each plane anchor provides information about the estimated position and shape of the surface.

We can enable planeDetection in viewWillAppear when adding an ARWorldTrackingConfiguration to the ARSession:

```swift
configuration.planeDetection = .horizontal
```

So while planeDetection is on, we can add a new node to the scene by creating a new SCNNode from our scene object and changing the node's position, an SCNVector3, to where we want the model to be in the view.
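For context, that configuration typically lives in viewWillAppear; a minimal sketch, assuming the usual sceneView outlet:

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Create a world-tracking configuration with horizontal plane detection.
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    // Run the session; ARKit starts detecting planes from here on.
    sceneView.session.run(configuration)
}
```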

We will then add this node as a childNode of the sceneView, and since planeDetection is enabled the AR framework will automatically pick up the new anchor and render it on the scene.
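A sketch of that placement, assuming modelNode is the node loaded earlier and transformHit is the matrix from the hit test above:

```swift
// The last column of the 4x4 world transform holds the translation,
// i.e. the real-world position of the tapped point.
let position = SCNVector3(transformHit.columns.3.x,
                          transformHit.columns.3.y,
                          transformHit.columns.3.z)
modelNode.position = position
// With planeDetection enabled, the framework picks this up and renders it.
sceneView.scene.rootNode.addChildNode(modelNode)
```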

Using the same method of getting the 3D location, we add the node we created before to the sceneView.

You can use the .existingPlaneUsingExtent or .estimatedHorizontalPlane cases instead of .featurePoint when trying to find where to place the model.

The results given will be different in each case, and which to use depends on where and how you want to place your object.

Existing planes will give you a point fixed on a plane, like a floor or a table, while feature points will give a more specific location around objects being tracked in the real environment.

To get the correct node position we will need to use the finalTransform float4x4 matrix we created before and translate it to a float3.

To do that translation we used an extension that translates our float4x4 matrix into a float3.
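The exact extension from the original embed isn't reproduced here, but a minimal version might look like this (float4x4 and float3 are the simd type aliases available at the time of writing):

```swift
import simd

extension float4x4 {
    // Extract the translation component (the fourth column) as a float3.
    var translation: float3 {
        return float3(columns.3.x, columns.3.y, columns.3.z)
    }
}
```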

Translation extension that converts a 4×4 matrix to a float3

Tada! We just successfully added a 3D model into an AR Scene!

Anchoring

Having the app continuously detect planes is quite resource heavy.

Apple recommends disabling planeDetection after you are done detecting the scene.

But as we mentioned before, if planeDetection is not enabled the AR scene won't pick up your newly added childNode and render it onto the sceneView.

So if you want to be able to add new nodes and models to a scene after you are done detecting planes you will need to add a new ARAnchor manually.

To create an ARAnchor from the tap location we will use the same transformHit float4x4 matrix we created before, without needing to translate it this time, since ARAnchors and ARHitTestResults use the same coordinate space.
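A sketch of creating and registering that anchor:

```swift
// Create an anchor directly from the hit-test transform and hand it
// to the session; ARKit will ask the delegate for a node to render.
let anchor = ARAnchor(transform: transformHit)
sceneView.session.add(anchor: anchor)
```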

Adding an Anchor to a scene to render an object

By adding the new anchor ourselves instead of relying on the session configuration, we trigger the renderer() function from the delegate, which will return the node to be rendered for a particular anchor.
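A sketch of that delegate callback, assuming modelNode is the node we stored earlier:

```swift
// ARSCNViewDelegate callback: return the node to render for a given anchor.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // Ignore plane anchors coming from plane detection; we only want
    // to render our model for the anchor we added manually.
    guard !(anchor is ARPlaneAnchor) else { return nil }
    return modelNode
}
```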

Adding a node to the anchor through the render method

We need to double-check that the anchor triggering the render function is the anchor we just added and not an ARPlaneAnchor.

With this in place our model will be rendered at the tap location of the sceneView just as seamlessly as when we had planeDetection enabled.

Tada! We just successfully added a 3D model into an AR Scene!

Conclusions

To summarise, in this post we went through the basics of Augmented Reality and Apple's ARKit.

We applied the lessons learned and crafted an application that adds our 3D models to the world using two different methods.

The code for this demo can be found on Novoda’s GitHub and you can also check our ARDemoApp repo, where you can import your own models into an AR Scene without having to write a line of code.

If you enjoyed this post make sure to check the rest of the series! Have any comments or questions? Hit us up on Twitter @bertadevant @KaraviasD.

Originally published at blog.novoda.com on January 21, 2019.
