In the consumer electronics space, engineers leverage lidar for several functions, such as facial recognition and 3D mapping. While vastly different embodiments of lidar systems exist, a “flash lidar” solution generates an array of detectable points across a target scene using solid-state optical elements. The benefit of obtaining three-dimensional spatial data from a small-form-factor package has made this type of solid-state lidar system increasingly common in consumer electronics products such as smartphones and tablets. In this series of articles, we explore how OpticStudio can be used to model these kinds of systems, from sequential starting points through to incorporating the mechanical housing.
Authored By Angel Morales
Downloads
Introduction
Lidar (Light Detection and Ranging) systems are used across many industries. While there are different types of lidar systems, such as systems with scanning elements that determine the field of view, this example explores the use of diffractive optics to replicate the projection of an array of sources across a target scene. A receiver lens system then images this projected array of sources to obtain time-of-flight information from the incoming rays, thus generating depth information from the projected dots.
In Part 2, we cover the conversion of our sequential starting points from Part 1 and add further detail to the non-sequential model. We also apply the ZOS-API to generate time-of-flight results with our flash lidar system.
Initial Conversion to Non-Sequential Mode
To observe how the two modules work as an entire system, we can use the Convert to NSC Group tool (found in the File tab…Convert to NSC Group) in each system to generate non-sequential models of our illumination and imaging subsystems. In both the illumination module (with the Multi-Configuration Editor cleared to leave just a single config) and the imaging module, the following settings were used in the Convert to NSC Group tool:
The following is the output for each subsystem in Non-Sequential Mode:
Combining Modules
At this stage, we can make some edits so that the modules are easier to combine. In the final assembly, we assume that the source for the illumination module and the sensor for the imaging module lie on the same plane, as we can imagine that they share the same electrical board in the full system. The overall approach that we take in Non-Sequential Mode is outlined below; a scripted sketch of these edits follows the list.
For the illumination module:
- Redefine the module object placement such that the source sits at a global Z position of zero
- Remove two of the three detectors at the module’s “image plane”, increase the size of the remaining detector, and assign it a MIRROR material (as this will eventually act as a scattering wall)
- Remove two of the three sources, as we will soon edit the remaining source to represent our diode array
For the imaging module:
- Remove the sources from the module
- Remove two of the three detectors and increase the size of the remaining detector based on the dimensions of the Sequential Mode file
- Redefine the object reference placement to the image plane
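For those who prefer to script this cleanup, the sketch below shows how the same kinds of edits might look through the ZOS-API in Python. It is only a sketch under stated assumptions: `TheSystem` and the `ZOSAPI` namespace are assumed to come from the standard standalone boilerplate shipped with the ZOS-API sample code, and all object numbers are placeholders that will differ in your converted files.

```python
# Hedged sketch of the post-conversion cleanup; object numbers are placeholders.
TheNCE = TheSystem.NCE   # Non-Sequential Component Editor of the converted file

# Remove the redundant detectors/sources, highest index first so the remaining
# object numbers do not shift mid-loop (RemoveObjectAt is assumed to be
# available on the NCE interface).
for obj_number in (9, 8):
    TheNCE.RemoveObjectAt(obj_number)

# Re-purpose the remaining detector as the distant scattering wall.
wall = TheNCE.GetObjectAt(7)     # placeholder index of the surviving detector
wall.Material = "MIRROR"         # the wall will act as a reflective, scattering surface
wall.GetObjectCell(ZOSAPI.Editors.NCE.ObjectColumn.Par1).DoubleValue = 300.0  # X half width (assumed)
wall.GetObjectCell(ZOSAPI.Editors.NCE.ObjectColumn.Par2).DoubleValue = 300.0  # Y half width (assumed)
```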
The modified non-sequential files after the above changes are shared with this article as “FlashLidar_Emitter_DiffGrat_PostEdit.ZAR” and “FlashLidar_Receiver_PostEdit.ZAR”.
After the adjustments, we can insert the imaging module objects into the illumination module’s Non-Sequential Component Editor by copying and pasting. After pasting, we need to renumber the “Ref Object” parameters of the inserted objects so that they point to the new object numbers as applicable – for instance, our imaging module optical elements now need to point to Object 10 (the “Imaging Module Ref.” Null Object) in the combined model. The reference Null Objects are then used to finalize the placement of the modules by editing their X position:
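The renumbering step can also be scripted. The loop below is illustrative only: it assumes the pasted imaging-module objects occupy a contiguous range of object numbers (the range shown is a placeholder), that Object 10 is the “Imaging Module Ref.” Null Object as described above, and that `TheSystem` comes from the standard ZOS-API boilerplate.

```python
TheNCE = TheSystem.NCE

IMAGING_REF = 10                    # "Imaging Module Ref." Null Object in the combined model
first_pasted, last_pasted = 11, 17  # placeholder range of the pasted imaging-module objects

# Point each pasted object at the new reference Null Object.
for obj_number in range(first_pasted, last_pasted + 1):
    TheNCE.GetObjectAt(obj_number).RefObject = IMAGING_REF

# Finalize the module placement by offsetting the reference Null Object in X.
TheNCE.GetObjectAt(IMAGING_REF).XPosition = 4.0   # assumed lateral offset in lens units
```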
Final Details of the Full Assembly
To finalize the model, we’ll first need to update the source definition to incorporate additional details on the array and its emission characteristics. We replace the Source Ellipse from the conversion with a Source Diode object using the following parameters (a quick geometric check of these values follows the list):
- Ref Object: 1
- X-/Y-Divergence: 11.5°
- X-/Y-SuperGauss: 1.0
- Number X’/Y’: 5
- Delta X/Y: 0.32mm
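As a quick sanity check on these values, the short calculation below estimates the physical extent of the 5 x 5 emitter grid and the footprint a single emitter would paint on a distant wall. The wall distance is an assumed number for illustration only, and the 11.5° value is treated simply as a nominal half-angle.

```python
import math

n_emitters = 5        # Number X'/Y'
pitch_mm = 0.32       # Delta X/Y, emitter spacing
half_div_deg = 11.5   # X-/Y-Divergence, treated as a nominal half-angle

grid_width_mm = (n_emitters - 1) * pitch_mm   # full width of the emitter grid
wall_distance_mm = 500.0                      # assumed wall distance (illustrative)
footprint_radius_mm = wall_distance_mm * math.tan(math.radians(half_div_deg))

print(f"Emitter grid width: {grid_width_mm:.2f} mm")   # 1.28 mm
print(f"Single-emitter footprint radius at {wall_distance_mm:.0f} mm: {footprint_radius_mm:.1f} mm")  # ~101.7 mm
```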
Generating the full array of spots on our scene requires modifying the Object Properties of the Diffraction Grating objects. For each Diffraction Grating, we define the orders through the “Split” setting in the Diffraction tab, using “Split by table below” so that each order is transmitted equally and without loss. For simplicity, ideal coating definitions of I.99999999 were placed on the front and rear faces of all elements in both modules. With these modifications, we can view the full projected dot array once we allow rays to be split in the 3D Viewer:
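To see how the gratings multiply the emitter grid into the full dot array, the sketch below applies the scalar grating equation, sin(theta_out) = sin(theta_in) + m*lambda/d, for an assumed wavelength, grating period, and order set (none of these values are given in the article), and assumes the two gratings are crossed in X and Y so that each axis is replicated independently.

```python
import math

wavelength_um = 0.94       # assumed NIR source wavelength
grating_period_um = 10.0   # assumed grating period
orders = (-1, 0, 1)        # assumed set of orders enabled in the "Split by table below" list

# Diffraction angles for a normally incident beam (sin(theta_in) = 0):
for m in orders:
    theta_m = math.degrees(math.asin(m * wavelength_um / grating_period_um))
    print(f"Order {m:+d}: {theta_m:+.2f} deg")

# Two crossed gratings replicate the 5 x 5 emitter grid in both X and Y:
n_emitters = 5
total_dots = (n_emitters * len(orders)) ** 2
print(f"Projected dots for these assumptions: {total_dots}")   # 15 x 15 = 225
```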
To make the wall element act as a scattering surface, a Lambertian scattering profile is applied to the Scattering Wall detector. We also make the wall an ideal reflecting and fully scattering surface by setting an I.0 coating (ensuring 100% reflection) and a Scatter Fraction value of 1. However, with this definition alone, the scattered rays will rarely trace into the imaging module due to the wide-angle scatter. Importance Sampling can therefore be used to force rays to scatter toward the vertex of any specified object (see the article “How to use importance sampling to model scattering efficiently” for additional details on how Importance Sampling works). The target we will use is Object 11, the physical aperture for the imaging module, with a Size value of 0.7 mm.
Because Importance Sampling reduces the power of a scattered Lambertian ray aimed at the target object (to account for the real drop in power as rays scatter away from the surface normal), the Minimum Relative Ray Intensity needs to be decreased so that OpticStudio continues to trace these lower-energy rays. A setting of 1e-8 allows the rays to be traced in this instance, and rays can now leave the illumination module and be captured by the imaging module. Note that an absorbing Rectangle object was introduced between the two modules to prevent stray light from the illumination system from reaching the imaging lens detector.
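The rough estimate below shows why such a low threshold is required: for a Lambertian scatterer, only a tiny fraction of the scattered power naturally heads toward a sub-millimeter aperture sitting hundreds of millimeters away, and Importance Sampling weights each targeted ray by roughly that fraction. The wall-to-aperture distance, the scatter angle, and the reading of the 0.7 mm Size value as a target radius are all assumptions for illustration.

```python
import math

target_radius_mm = 0.7     # Importance Sampling "Size" value, treated here as a radius
wall_distance_mm = 500.0   # assumed distance from the scattering wall to the aperture
theta_deg = 0.0            # assumed angle between the wall normal and the target direction

# Solid angle subtended by the target, and the Lambertian fraction of power into it:
solid_angle_sr = math.pi * target_radius_mm**2 / wall_distance_mm**2
lambertian_fraction = solid_angle_sr * math.cos(math.radians(theta_deg)) / math.pi

print(f"Fraction of scattered power toward the target: {lambertian_fraction:.1e}")  # ~2e-6
# A scattered ray targeted this way carries a relative intensity on this order,
# so the Minimum Relative Ray Intensity must sit well below it (1e-8 in this model).
```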
Now, we can observe the dot pattern as projected onto the wall and as observed by the imaging lens. The file is saved at this stage as “FlashLidar_FullSystem.ZAR”:
Time of Flight Considerations
Lidar systems obtain the depth information of the scene by measuring the time of flight of light as it arrives at the detector. In practice, sensors are typically time-gated to capture this information from the incoming light that scatters back from the observed scene.
In OpticStudio, we can obtain the time-of-flight data for each ray that lands on our final Detector Rectangle by using the ZOS-API to build a User Analysis that parses the ZRD file and analyzes the path length of the rays landing on the imaging module sensor, thus recovering the depth of the observed scene. The Knowledgebase article “How to create a Time-Of-Flight User Analysis using ZOS-API” contains further information on constructing this kind of User Analysis; here, we will simply use the analysis directly.
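The core arithmetic in such an analysis is straightforward: a ray’s accumulated path length gives its flight time, and, with the emitter and sensor on the same board, roughly half of the round-trip distance corresponds to scene depth. The minimal sketch below illustrates only that conversion; the actual User Analysis from the referenced article reads the per-segment path lengths directly from the ZRD file, which this sketch does not attempt.

```python
C_MM_PER_S = 2.99792458e11   # speed of light in mm/s (lens units assumed to be mm)

def depth_from_path_length(total_path_mm: float) -> tuple[float, float]:
    """Convert a ray's total path length into a flight time and an approximate
    scene depth, assuming a co-located emitter and sensor so that the round
    trip covers the scene distance twice."""
    time_of_flight_s = total_path_mm / C_MM_PER_S
    approx_depth_mm = total_path_mm / 2.0
    return time_of_flight_s, approx_depth_mm

# Example: a ray with a 1000 mm total path corresponds to ~3.3 ns and ~500 mm of depth.
tof_s, depth_mm = depth_from_path_length(1000.0)
print(f"Time of flight: {tof_s * 1e9:.2f} ns, approximate depth: {depth_mm:.0f} mm")
```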
In the flash lidar system, some relevant use-case geometry has been added, such as a small mock-up of a desk and a sphere acting as a (very simplified) fist for gesture recognition. The file is included in the attachments as “FlashLidar_FullSystem_wSceneObj.ZAR”:
Prior to running the User Analysis, a ray trace needs to be run and the ray trace data saved in the Ray Trace Control window (a scripted version of this step is sketched after this paragraph). The User Analysis will then be able to read the saved .ZRD file, and with the following settings in the analysis, we obtain the depth output below:
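A hedged ZOS-API sketch of running the NSC ray trace and saving the ray database is shown below; it assumes `TheSystem` comes from the standard standalone boilerplate and that the saved file name matches what the User Analysis expects.

```python
# Hedged sketch: run the NSC ray trace and save the ray database for the User Analysis.
ray_trace = TheSystem.Tools.OpenNSCRayTrace()
ray_trace.SplitNSCRays = True        # required to generate the diffracted orders
ray_trace.ScatterNSCRays = True      # required for the Lambertian wall scatter
ray_trace.UsePolarization = True     # ray splitting requires polarization data
ray_trace.IgnoreErrors = True
ray_trace.SaveRays = True
ray_trace.SaveRaysFile = "FlashLidar_FullSystem.ZRD"   # assumed file name
ray_trace.ClearDetectors(0)          # clear all detectors before tracing
ray_trace.RunAndWaitForCompletion()
ray_trace.Close()
```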
With these results, we can distinguish different features in our scene and how they are positioned at various depths. For example, our rough “hand” representation sits in the top left of the User Analysis output, and the cup on top of our mock desk appears a bit farther away in the top right of the scene. For demonstration, we flood the scene with illumination by making the full area of the source emissive with a Source Rectangle, which makes it easier to see depth information across the entire scene:
With our designs for the illumination and imaging modules of this flash lidar system, we can resolve the projected dot array on our final detector plane and leverage the ZOS-API to create a User Analysis that obtains depth information for the geometry that the dot array impinges on. Because we can resolve the features in the observed scene and retrieve distance information, this data can be passed to computational software to generate images for display to the user, or to use the user’s motion to drive changes in a computer-generated scene.
Conclusion
In this article, we have covered the conversion of our sequential illumination and imaging flash lidar modules into Non-Sequential Mode. We also demonstrated how to refine the models and combine them into a single OpticStudio file. We then added further detail to the source definition and defined scattering properties on a distant wall to verify a ray trace through the entire system. Lastly, we touched on the use of a custom User Analysis, built with the ZOS-API, that returns time-of-flight data for the full flash lidar system.
References
- How to use importance sampling to model scattering efficiently
- How to create a Time-Of-Flight User Analysis using ZOS-API
This is the second article of the Modeling a Flash Lidar System series.
Next article: Modeling a Flash Lidar System - Part 3 – Knowledgebase (zemax.com)