This article demonstrates how to use OpticStudio tools, namely the Full-Field Aberration (FFA) analysis and the NSC Sag Map, when designing and analysing the performance of a Head-up Display (HUD).
Authored By Sandrine Auriol
Downloads
Starting Point
Description of the HUD
Here is a sketch of the HUD. The LCD display emits light, which is reflected by the two mirrors forming the HUD, then by the windshield, and finally enters the driver's eyes. The driver sees a virtual image superimposed on the road, showing information such as the vehicle speed.
The driver will move their head while driving. The eyebox is a virtual box that represents the range of the driver's eye positions.
Specification
- Virtual image distance: 2 m
- Display of the current speed
- Mechanical constraints: the HUD will mainly be constrained by the space available under the dashboard. The windshield will act as a beamsplitter
- Eyebox: the position of the driver’s eyes is within a box of ± 50mm in width and ± 20mm in height
- Eye pupil: the diameter is 2 to 4mm in bright light and 4 to 8mm in the dark. For this study, it is set to 4mm.
- The LCD display size is ± 12.5mm in width and ± 5mm in height
- Magnification = 6
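From these specifications, the apparent angular size of the virtual image follows directly. Here is a quick sketch of the arithmetic (plain geometry, not OpticStudio code):

```python
import math

# Apparent angular size of the virtual image: LCD half-size times the
# magnification, viewed at the 2 m virtual image distance
mag, distance_mm = 6.0, 2000.0
half_w_mm = mag * 12.5   # 75 mm half-width of the virtual image
half_h_mm = mag * 5.0    # 30 mm half-height of the virtual image
fov_w = 2 * math.degrees(math.atan(half_w_mm / distance_mm))
fov_h = 2 * math.degrees(math.atan(half_h_mm / distance_mm))
print(f"Virtual image subtends {fov_w:.1f} x {fov_h:.1f} degrees")  # ~4.3 x 1.7
```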
Design Selection
The starting point of the HUD is a folded system; it keeps the size small enough under the dashboard. The HUD is made of two mirrors: one flat and one freeform. Mirrors have the advantage of not adding any chromatic aberrations in an imaging system. The freeform mirror needs to be optimized.
Steps to design a HUD
- From Virtual Image to Display: the design starts backward in Sequential Mode, because starting the simulation from the virtual image seen by the driver is convenient. The STOP surface can then be placed at the front of the system, where the eyebox is located. A rectangular aperture is placed on the STOP surface to describe the constraints on eye position.
- From Display to Virtual Image: the system is then reversed in Sequential Mode. This allows the "true" performance to be evaluated from the display to the virtual image, that is, in the forward direction.
- Finally, the system is converted to Non-Sequential (NSC) Mode. This provides a more realistic model in which users can include stray light analysis. It will display the true image the driver sees through the HUD.
Step 1: From Virtual Image to Display (Backward)
Starting point
For convenience, a template has been built that contains all the starting elements in place. The file name is “HUD_Step1_StartingPoint.zar” and may be downloaded at the top of the article. It contains a freeform model of the whole windshield. The windshield is described as an Extended Polynomial Surface. Let’s see how this file is built.
System Explorer:
- Aperture: The Eyebox is the system STOP because it represents the range of positions taken by the driver's eyes: Width = ± 50mm and Height = ± 20mm. A rectangular aperture of this size is attached to the STOP surface.
The Entrance Pupil Diameter (EPD) is then computed as 2 x sqrt(50^2 + 20^2) ≈ 108 mm.
- Fields: The Field Type is set to Object Height and the Normalization is defined as Rectangular. In the actual system, the image on the LCD display is magnified by a factor of 6 to form the virtual image. Because the current design is backward, from virtual image to LCD display, the size of the virtual image can be computed and used as the object height to define the field size in the Field Data Editor. The LCD display dimensions are: Width = ± 12.5mm and Height = ± 5mm. Therefore, the object size should be 6 times this:
Field Width = ± 75mm (6 x 12.5) and Field Height = ± 30mm (6 x 5)
- Wavelengths: The LCD display emits at a single wavelength of 0.55µm
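The EPD value quoted above is simply twice the half-diagonal of the rectangular eyebox, so that the circular pupil circumscribes it. A quick check of the arithmetic:

```python
import math

# The EPD must circumscribe the rectangular eyebox:
# it equals twice the half-diagonal of the eyebox rectangle
eyebox_half_x, eyebox_half_y = 50.0, 20.0   # mm, from the specification
epd = 2.0 * math.hypot(eyebox_half_x, eyebox_half_y)
print(f"EPD = {epd:.1f} mm")   # EPD = 107.7 mm, rounded to 108 mm
```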
Windshield
The whole windshield can be modelled, or only the area of the windshield used by the HUD can be modelled.
To find that “active” area, the Footprint Diagram tool can be used (found under Analyze...Rays & Spots...Footprint Diagram). It displays the footprint of the beam superimposed on the windshield surface:
Windshield model:
The windshield can be described by sequential surfaces, such as freeform surfaces, or by a non-sequential CAD part. If it is described as an NSC CAD part inserted into a sequential system, the system becomes mixed mode. This works well when modelling the system in the backward direction, from virtual image to display, but becomes problematic when working in the forward direction, because the STOP surface is then located after the Non-Sequential Component surface. This makes ray aiming more difficult and could cause other ray trace issues as well.
A workaround is to measure the sag of the CAD part and then model it using a sequential Grid Sag Surface. This way the system stays in pure sequential mode, and OpticStudio can convert the Grid Sag Surface into Asphere Type surfaces. The converter can be found under Optimize...Convert Asphere Type.
Convert the windshield into a Grid Sag Surface:
The NSC Sag analysis, which is a ZOS-API extension, measures the sag of a CAD part. For details, see the article entitled "NSC Sag Map User Analysis".
That analysis uses a probing source ray and records where that probing source ray hits the NSC object. The file “HUD_windshield_sag.zar” can be downloaded at the top of the article. It contains the windshield CAD part and a source.
The X and Y sizes of the windshield are settings of the NSC Sag Map. They can be approximated using the active cursor positions on a Shaded Model with the camera view set to X-Y:
The following settings can be entered for the NSC Sag tool:
In the settings, you can:
- Untick “Remove XY Tilt”. The NSC Sag analysis will then not reset the tilts of the NSC object to 0.
- Tick “Keep Saved Files” to save the .zmx and .zrd files into the current folder.
The NSC Sag Map is displayed in False Color. It can also be displayed as a text listing if, under Settings, the Show As option is set to Text. This text output can then be saved, converted to the .DAT data format, and used for the Grid Sag surface. That said, the CAD and Grid Sag approach described here is not used in this example. For simplicity, the windshield is modelled using an Extended Polynomial surface instead.
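As an illustration of the conversion step, a minimal converter from a grid of sag values to a Grid Sag .DAT file could look like the sketch below. The format assumed here is the usual one from the OpticStudio Help (a header line "nx ny delx dely unitflag xdec ydec", then one "z dz/dx dz/dy d2z/dxdy nodata" line per grid point); the derivatives are written as zero placeholders, and the Help should be consulted for how zero derivative data is handled:

```python
def write_grid_sag_dat(path, sag, delx, dely, unitflag=0):
    """Write a Grid Sag .DAT file from a 2D list of sag values.

    sag: list of ny rows, each with nx sag values in lens units.
    delx, dely: grid spacing; unitflag: 0 = mm (assumed convention).
    """
    ny, nx = len(sag), len(sag[0])
    with open(path, "w") as f:
        # Header: nx ny delx dely unitflag xdec ydec
        f.write(f"{nx} {ny} {delx} {dely} {unitflag} 0 0\n")
        for row in sag:
            for z in row:
                # z, then zero placeholders for dz/dx, dz/dy, d2z/dxdy, nodata
                f.write(f"{z:.6e} 0 0 0 0\n")
```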
Positioning all the elements
Here is a layout that represents the positions of all the elements:
The placement of the surfaces is done using two convenient tools:
- The Coordinate Break Return: a Coordinate Break surface can be defined with a Coordinate Return under Surface Properties…Tilt/Decenter. OpticStudio then calculates the parameters of that Coordinate Break surface so that, after it, the local coordinates are identical to (“returned” to) the local coordinates of a previous sequential surface.
- The Chief Ray Solve: this solve calculates the tilts and decenters of a Coordinate Break surface so that it is perpendicular to, and centered on, the chief ray:
Initial performance
The element that adds aberration to the system is the windshield. By how much?
The system can be simplified to light coming from infinity (the eye) being reflected by a windshield. After reflection, the spot diagram gives the ray angles for the “true” windshield and for an ideal flat windshield (a flat mirror).
Here are the different steps to modify the file:
- Ignore Surfaces 6 to 11
- Convert the Field Type to Angle
- Set Object Thickness value to Infinity
- Add a Standard surface after the windshield to model a flat windshield. Set the material to MIRROR. In the Surface 4 Properties, under Aperture, pick up the Aperture from Surface 3.
- Create two configurations: one with the “true” windshield and the other with the ideal flat windshield (surfaces 3 and 4)
- Tick Afocal Image Space under System Explorer…Aperture and set the units to degrees.
These modifications can be found in the “HUD_Step1_windshield_aberration.zar” file.
To analyze the aberration introduced by the windshield mirror, click Analyze...Aberrations...Full Field Aberration. The Seidel aberration tool is not applicable here, because it only describes third-order aberrations in rotationally symmetric systems.
The Full-Field Aberration analysis calculates the Zernike decomposition of the wavefront and displays the Zernike coefficients across the full field of view.
The full field of view is defined by the settings outlined in red:
Here is a representation of those field points:
For each field point, the software fits the wavefront to a series of Zernike Standard polynomials, just as it does under Analyze…Wavefront…Zernike Standard Coefficients. The following settings define the fit. The Aberration setting selects which term to display:
Under aberration, the Primary Astigmatism is calculated from Zernike Standard Term 5 (Z5) and Zernike Standard Term 6 (Z6):
The Primary Astigmatism is defined as:
- Magnitude = sqrt (Z5^2 + Z6^2)
- Angle = (1/2)*atan2(y = -Z5 , x = -Z6)
Here, atan2 is the two-argument arc tangent function (as in the C library): it returns the arc tangent of y/x while resolving the quadrant.
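The magnitude and angle formulas above translate directly to code. Here is a small sketch using Python's math.atan2:

```python
import math

# Direct implementation of the magnitude/angle formulas above
def primary_astigmatism(z5, z6):
    """Magnitude and axis angle (radians) from Zernike Standard Z5/Z6."""
    magnitude = math.hypot(z5, z6)
    angle = 0.5 * math.atan2(-z5, -z6)
    return magnitude, angle

# Example: 3 waves of pure Z5 (45-degree) astigmatism
mag, ang = primary_astigmatism(3.0, 0.0)
print(mag, math.degrees(ang))   # magnitude 3.0, angle -45 degrees
```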
If the Display is set to Icon, the length of each line gives the magnitude and its orientation gives the angle.
The bottom frame displays the average value of the selected aberration, here the Primary Astigmatism, across the full field of view.
For that system the results are:
- Defocus: 174.4 waves
- Primary Astigmatism: 80.2 waves (average)
As can be seen, the system is initially limited by the astigmatism introduced by the windshield. The beam is also slightly focused by the windshield, but the defocus value is not an issue, as the design will focus the beam onto the LCD display. The design of the HUD will therefore start by correcting the astigmatism.
Build the Merit Function
Back in the original file “HUD_Step1_StartingPoint.zar”, the freeform mirror can now be optimized to correct the aberrations introduced by the windshield. First, the Quick Adjust tool under Optimize can be used to make the freeform mirror a spherical mirror. This gives a good starting point.
Build a default merit function:
The default merit function can be built to optimize for the smallest spot (RMS Spot). The system contains apertures, so the pupil will be sampled with a rectangular array.
The Full Field Aberration analysis can be used here to check the field sampling. Rapid variations of the aberrations across the field of view may mean that more field points are needed.
Then the other specifications can be added manually with operands at the top of the merit function:
- Magnification: one specification concerns the magnification. REA* (real ray coordinate) operands can be added to check the X and Y positions of the fields on the LCD display. DIVI operands can then compute the magnification (the ratio of the chief ray height on the image plane to that on the object plane). A weight factor of 10 is placed on these DIVI operands.
- Distortion: the last specification is about distortion. It has to be below 2%.
Paraxial calculations like distortion do not always work well in asymmetric systems with coordinate breaks, so always verify that the results of distortion operands make sense. The distortion can be checked manually and/or calculated from the locations of the centroids, using CENX and CENY for the four corners of the field of view (fields 2 to 5).
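The manual centroid check is pure geometry. A sketch with made-up numbers (the CENX/CENY values and the ideal corner positions would come from the actual system):

```python
import math

# Hypothetical check: radial distortion of one corner field, from the
# measured centroid position (CENX, CENY) versus the ideal image
# position on the LCD display
def corner_distortion(measured, ideal):
    """measured, ideal: (x, y) positions in mm. Returns distortion in %."""
    r_meas = math.hypot(*measured)
    r_ideal = math.hypot(*ideal)
    return 100.0 * (r_meas - r_ideal) / r_ideal

# Example: corner imaged at (12.6, 5.1) mm instead of the
# ideal (12.5, 5.0) mm, giving roughly 1% radial distortion
print(f"{corner_distortion((12.6, 5.1), (12.5, 5.0)):.2f} %")
```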
The merit function is now ready. Before optimizing, the Freeform mirror Standard Surface can be changed into a Freeform surface; here a Zernike Standard Sag surface with 11 terms.
The Zernike polynomials are great for optimizing, but they may need to be converted back to standard polynomials like the Extended Polynomials for manufacturing.
The normalization radius of the Zernike surface is set to a fixed value greater than the semi-diameter. If that radius is left variable during optimization, every update rescales the Zernike terms and creates jitter in the merit function.
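To see why a variable normalization radius causes jitter: the Zernike polynomials are evaluated at rho = r / R_norm, so changing R_norm changes every term's value even with fixed coefficients. A small sketch using the usual Zernike Standard definitions for the terms used in this design (assumed here; check the OpticStudio Help for the exact ordering and normalization):

```python
import math

# Zernike Standard terms evaluated at rho = r / R_norm
def zernike_terms(x, y, r_norm):
    rho = math.hypot(x, y) / r_norm
    theta = math.atan2(y, x)
    return {
        "Z4 (defocus)":    math.sqrt(3) * (2 * rho**2 - 1),
        "Z5 (astig 45)":   math.sqrt(6) * rho**2 * math.sin(2 * theta),
        "Z6 (astig 0/90)": math.sqrt(6) * rho**2 * math.cos(2 * theta),
        "Z11 (spherical)": math.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
    }

# Same surface point, two normalization radii: every term value changes,
# which is why a variable R_norm shifts the whole fit at each cycle
print(zernike_terms(30.0, 0.0, 100.0)["Z4 (defocus)"])
print(zernike_terms(30.0, 0.0, 120.0)["Z4 (defocus)"])
```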
The file before the optimization is called “HUD_Step1_MF_before_optim.zar”.
Variables:
Z1 is a piston term; it won’t be used.
Z2 and Z3 are the Tilt terms. The different positions of the elements like the LCD display are fixed, so the Tilt terms won’t be used.
The system contains 2 variables: Back focal thickness and the Freeform mirror radius of curvature.
After a first local optimization under Optimize…Optimize!, the Full Field Aberration can be checked:
Average value across the field:
Defocus: 7.8 waves; Primary Astigmatism: 25.0 waves; Primary Coma: 7.4 waves
Z4 is the Defocus/Field Curvature term and is set as variable.
Z5 and Z6 are the Primary Astigmatism term and are set as variables:
After optimizing, the average value across the field is:
Defocus: 15.0 waves; Primary Astigmatism: 9.1 waves; Primary Coma: 6.9 waves
Z7 and Z8 are the Primary Coma terms and are set as variables.
Z9 and Z10 are the Elliptical Coma terms and are set as variables.
Z11 is the balanced Primary Spherical aberration term and is set as variable.
Then run one minute of Hammer global optimization:
The file after the optimization is called “HUD_Step1_MF_after_optim.zar”.
Result of the Optimization
The results of optimization can be checked. The system has not been reversed yet, so the performances are not “real” performances, but “reverse” performances.
- Spot size (blur): the RMS spot radius is below 200µm. This does not convey much by itself; it will be more informative to check the angular size once the system is reversed.
- Astigmatism and Coma: The Full Field Aberration analysis can be checked again to see whether the optimization has reduced the Primary Astigmatism. Apart from that aberration, the Zernike terms most likely to affect the imaging quality of the HUD are Coma and Spherical aberration. The field of view used for the results below is the total field of view: the maximum angular extent viewed by the driver, allowing vertical and horizontal head movement within the HUD eyebox. It also covers the disparity seen by the two eyes.
The average value across the field is:
Defocus: -3.6 waves; Primary Astigmatism: 10.7 waves; Primary Coma: 2.2 waves
Astigmatism has decreased from 80 waves to 11 waves. The plot below uses a relative scale (a display setting): the average value is subtracted from the absolute values. This gives a better idea of the aberration variation across the field of view:
- Distortion: just above 2%
Step 2: From Display to Virtual Image (Forward)
Reverse the system
Reversing a system is not straightforward. The reverse element tool in the Lens Data Editor has some limitations, and a HUD system will certainly exceed them, as the system contains coordinate breaks and non-standard surfaces.
The tricky part is that the Z axis is inverted. For an asymmetric system like this HUD, the tool does not work properly.
Another solution is described below:
- In the Lens Data Editor, select the Make Double Pass tool:
The system now contains a reflection on surface 12, which is the LCD. Only the return pass, from the LCD back out of the system, is of interest.
- Surface 24 is the new STOP surface. First fix the semi-diameter of Surface 24, then change the Aperture to Float by Stop Size, and finally set Surface 24 as the STOP surface.
- The system needs tidying up: remove all surfaces defined from virtual image to display (surfaces 1 to 11). The pick-up solve on Surface 13 can be removed; the thickness of Surface 13 is a fixed value of 2000mm. The Object Thickness (Surface 0) is set to 0mm.
- The STOP Surface 13 can be set as the Global Coordinate Reference Surface. The system looks like this:
- The fields in the Field Data Editor now have to be redefined as the LCD fields:
The file “HUD_Step2_reversed.zar” may be downloaded at the top of the article.
Performance
- Spot size (blur): the image sharpness can be checked in Afocal Image Space with the STOP size equal to the eye pupil in daylight, 4mm in diameter.
The RMS spot size is below 2 arc minutes (2'); 1' is approximately the resolution of the human eye.
- Image simulation: the HUD will display the speed of the vehicle. The Image Simulation tool gives users an idea of the quality of the image formed by the HUD system:
- Dipvergence / Convergence (eye pointing disparity): both eyes of the driver look through the optical system, and there is usually a small angular difference between the directions in which each eye must look to see the same image point. The vertical (up/down) angular difference is called the dipvergence; the horizontal (left/right) angular difference is called the convergence. The results can be checked using the file “HUD_Step1_MF_after_optim_2_eyes.zar”. The pupil is 4mm in diameter and the interpupillary distance is set to 50mm. The typical limits for these values are on the order of 1.0 mrad for visual systems, and the system is within that limit here.
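These angular results can be sanity-checked with a short sketch. It uses plain small-angle geometry; the helper function is illustrative, not an OpticStudio API:

```python
import math

# 1) Apparent size of a 2-arcmin blur on the 2 m virtual image
blur_rad = math.radians(2.0 / 60.0)       # 2 arcmin in radians
apparent_blur_mm = 2000.0 * blur_rad      # ~1.16 mm at 2 m
print(f"blur: {1000 * blur_rad:.2f} mrad -> {apparent_blur_mm:.2f} mm")

# 2) Dipvergence/convergence: angular difference between the chief-ray
#    directions reaching the two eyes from the same image point
def disparity_mrad(dir_left, dir_right):
    """dir_*: (l, m, n) direction cosines at each eye. Returns
    (convergence, dipvergence) in mrad, small-angle approximation."""
    lx, ly, lz = dir_left
    rx, ry, rz = dir_right
    convergence = 1000.0 * (lx / lz - rx / rz)   # horizontal difference
    dipvergence = 1000.0 * (ly / lz - ry / rz)   # vertical difference
    return convergence, dipvergence

# Example: eye directions differing by ~0.5 mrad horizontally,
# which is within the ~1.0 mrad visual-system limit quoted above
c, d = disparity_mrad((0.0005, 0.0, 1.0), (0.0, 0.0, 1.0))
print(f"convergence = {c:.2f} mrad, dipvergence = {d:.2f} mrad")
```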
Step 3: Non-sequential mode
Direct Conversion to NSC Group
The system is now ready to be exported to Non-Sequential for further analysis. The starting point is the file called “HUD_Step2_reversed.zar”
OpticStudio has a built-in tool, “Convert to NSC Group”, that can convert selected sequential surfaces to a non-sequential component, or convert an entire sequential system into a non-sequential system. When converting a mirror, if the substrate thickness is greater than 0, the tool converts the mirror to a Compound Lens object with a thickness equal to the mirror substrate thickness. So, in this file, we will set the thickness of mirrors 4, 6, 8 and 11 to 5mm. The file is now ready for conversion.
Once the file is converted, it needs a bit of tidying-up. Below, the list explains the different steps. The final Non-Sequential file can be downloaded at the top of the article: “HUD_Step3_NONSEQ_after_tidying_up.zar”
- Define all objects in global coordinates:
- Keep only one source: the Source Ellipse on line 4, which is centered on Field 1. Delete all other sources (lines 1 to 3 and lines 5 to 12). Change that source to a Source Rectangle with a size of ± 12.5mm in width and ± 5mm in height. Set the number of layout rays to 10:
- Reverse the rays:
- Delete Surfaces 2 and 3, which were only useful in Sequential Mode for reversing the system. Delete all Null Objects.
- Delete a flat mirror: only one is needed in Non-Sequential mode (delete lines 10-14).
- Change the material of the windshield to N-BK7 (line 14).
- Change the Eyebox (line 15) to a Detector Color and add a Tilt About X of -8 degrees. The speed will appear at the bottom of the Detector Color. The Eyebox size is X Half Width = 50mm and Y Half Width = 20mm. Set the number of pixels to 400 in X and 200 in Y. The Detector Color half angles are set to 20 degrees in X and 10 degrees in Y, and a Tilt Y and a Tilt Z of 180 degrees are added so that the final image is displayed in the right orientation.
- Change Detector 25 to a Source Rectangle and change its comment to “Virtual Image”. Add a Tilt X of -8 degrees and change the Y Position to 275 so that the virtual image is centered on the detector. Set 20 layout rays, X Half Width = 1000mm, Y Half Width = 500mm, Source Distance = 2000mm, and tick Reverse Rays.
- Delete all the other detectors (16 to 24).
At this point, the layout rays from the LCD window do not seem to interact with the windshield. The windshield is a Boolean Native object: the intersection of a Rectangular Volume and a Compound Lens made of two Extended Polynomial surfaces.
To understand what is happening let’s draw the Rectangular Volume by unticking the Do Not Draw Object option in the Object Properties tab:
The 3D layout shows that the Source is inside the Rectangular Volume, which is one of the parent objects of the Boolean Native. In this case, the Inside Of flag of the Source needs to be turned on to point to the Boolean Native object. The source also needs to be defined after the Boolean Native in the NSCE in order for the Inside Of Flag to work properly.
- Cut the Source Rectangle on line 1 and paste it below the windshield. Set its Inside Of flag. Now the rays split on the windshield.
- Add a Slide Object as a source image from the LCD display showing the speed, and place it in front of the LCD source. Set the Slide X Full Width to 26 mm and the Aspect Ratio to 1.0.
- The Source Rectangle on line 17, at the virtual image, reproduces the Sun illumination. Add a Slide Object to represent the background landscape seen by the driver (tick Object Properties > Sources > Raytrace > Reverse Rays so that the rays emit towards the detector). Set the Slide X Full Width to 2000 mm and the Aspect Ratio to 1.0.
- Set the Spectrum of the Source Rectangle on line 17 to match the Sun spectrum.
- Source 14 (LCD Display): power = 1W and number of analysis rays = 1E6
- Source 17 (Illuminated landscape): power = 10W and number of analysis rays = 1E7
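One way to fill in the Sun spectrum entries is to approximate the Sun as a 5778 K blackbody (an assumption for illustration) and compute relative wavelength weights from Planck's law:

```python
import math

# Relative blackbody weights (normalized to the peak of the sampled set)
# for approximating the Sun spectrum at a few wavelengths
def planck_relative(wavelengths_um, t_kelvin=5778.0):
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI physical constants

    def radiance(lam_m):
        # Planck spectral radiance B(lambda, T)
        return (2 * h * c**2 / lam_m**5) / (
            math.exp(h * c / (lam_m * k * t_kelvin)) - 1)

    vals = [radiance(w * 1e-6) for w in wavelengths_um]
    peak = max(vals)
    return [v / peak for v in vals]

weights = planck_relative([0.45, 0.55, 0.65])
print([f"{w:.2f}" for w in weights])
```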
After tidying up the final system looks like below in the NSC Shaded Model.
Result
The simulated image seen by the driver can be shown using the Detector Viewer. First, perform a ray trace by clicking Analyze > Ray Trace and set up the Ray Trace Control as shown below. Then open the Detector Viewer by clicking Analyze > Detector Viewer. Under the Settings menu, set Show As: True Color and Show Data: Angle Space. Angle Space is the non-sequential equivalent of the sequential Afocal Image Space; it is used here because the eye is not modelled in this system.
The Detector Viewer now displays in True Color what the driver will see using the designed HUD system:
What else
In Non-Sequential Mode, users can perform other analyses, such as stray light analysis or evaluating the image brightness variation caused by the driver's head movement.
KA-01801