This study determined the accuracy of a camera system capable of recording three-dimensional facial images. A Rainbow 3D Camera Model 250 system (Genex Technologies Inc, Kensington, Md) was used to capture images of specific models: (1) a precalibrated precision model and (2) a mannequin model that served to simulate the human condition. To assess the accuracy of the camera system, repeated images of both models were recorded at two time points, one week apart. Repeated measurements of specific distances were recorded directly on the models and from each image. Means and standard deviations were calculated for all the repeated measurements at each time point. A two-tailed t-test was used to test for significant differences between (1) each distance measured directly on the precision model and the same distance measured on the images of the precision model, (2) each distance measured directly on the mannequin and the same distance measured on the images of the mannequin, and (3) the mean differences between the same distances measured at the two times. The findings showed that substantial image distortion occurred when images of sharp angles (90°) were captured. Also, those images captured from the frontal perspective ±15° were the most accurate.

Since the introduction of the Broadbent cephalometer,1 diagnostic methods in orthodontics have used two-dimensional (2D) representations of patients' craniofacial morphology. These diagnostic methods have remained essentially unchanged for over 70 years and are still in use.

Two-dimensional cephalometric radiographs record mainly hard tissue information. Today, however, the paradigm of our treatment goals has shifted from hard to soft tissue,2 and this shift requires novel approaches to 3D imaging as well as creative diagnostic methods. Many techniques have been proposed for 3D facial imaging, including laser scanning,3 computerized tomography,4 stereolithography,5 and ultrasonography.6 Most of these techniques require prohibitively expensive equipment and highly skilled technical support. Also, in some instances there may be risks to the patient, such as exposure of the body to high doses of radiation and of the eyes to laser light.7 In this study, a stereophotographic 3D camera is introduced that is relatively low cost, fast, safe, and easy to operate. The camera captures both 2D and 3D images of the face simultaneously. This study determined the accuracy of this 3D camera in a clinical setting.

A Rainbow 3D Camera Model 250 system (Genex Technologies Inc, Kensington, Md) was used to capture images of specific models. The system has a center light source with two cameras, one on either side of the source, and a digital camera in front of the source (Figure 1). It has a field of view of 250 by 190 mm, with a stated accuracy rating of 250 microns, and captures both 3D surface data (x, y, and z coordinates) and 2D image texture data (grayscale overlay). The following 3D file formats are supported: GTI, STL, PNT, IGES, and raw data.8 Three-dimensional images of the face are produced from the 2D images by correlating specific points on the 2D image with the corresponding points on the 3D image. In this study, to assess the camera system accuracy, images of a precalibrated precision model and a mannequin model that served to simulate the human condition were recorded.

FIGURE 1.

The Rainbow 3D camera, Model 250. The system has a central light source with two cameras on each side of the light source and a digital camera in front as shown in the picture on the left. The entire setup with the computer hardware is shown in the picture on the right


Precision model

A precalibrated precision model consisted of a steel block, 127 × 127 × 127 mm (5 × 5 × 5″), with two square ground-finished surfaces built to an accuracy of 0.01058 mm per 127 mm (0.0005″ per 6″). The two surfaces met at a 90° angle. The block weighed 11 pounds (5 kg). To limit reflection, the surfaces were painted with a nonglossy coating (Krylon® Satin Paint; Figure 2).

FIGURE 2.

The picture on the left shows a 45° angulation view of the precalibrated precision model with antireflective coating. The right schematic shows all three angulations of the model at which the images were captured


To assess the accuracy of the camera system, images of the model were captured at two time points, T1 and T2, one week apart. At each time point, the model was captured from three different angulations: 0°, 30°, and 45° (Figure 2), and five images were recorded at each angulation. Then, with the use of the Rainbow 3D software, the horizontal (X, X1, and X2) and vertical (Y) lengths of the surfaces captured at each image angulation were measured five times. At 0°, only the X and Y distances were measured.

In addition, to assess the ability of the system to capture the surfaces of the model accurately, three sets of image data from each of the 0°, 30°, and 45° angulations were converted to the 3D raw data file format and tested for surface flatness and angular error. Assuming an accurate precision model, the two flat surfaces would intersect at a perpendicular (90°) angle (Figure 2). To assess the surface flatness and intersect angle of the captured images, one or two geometric planes were fitted to the surface meshes, depending on the image angulation (0°, 30°, or 45°). The analytical equation of a geometric plane is z = a1 + a2x + a3y, and the given condition of the model is a group of vertices with coordinates vi = (xi, yi, zi), i = 1, 2, …, N.

Therefore, we need to solve for the coefficients aj, j = 1, 2, 3, of the geometric plane equation. By treating the aj as independent variables, a multiple regression analysis was used to calculate the plane equation.9 Once the planes were calculated, the errors were measured as the distances between the vertices on the image surfaces and the respective fitted geometric planes.
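The plane-fitting step above reduces to an ordinary least-squares problem. The sketch below uses synthetic vertices with made-up coefficients and noise (the study's actual meshes came from the Rainbow 3D raw-data export), then measures flatness as the perpendicular point-to-plane distance:

```python
import numpy as np

# Hypothetical vertex cloud standing in for one captured surface mesh.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 127, size=(500, 2))            # x, y in mm
z = 2.0 + 0.05 * xy[:, 0] + 0.02 * xy[:, 1]        # underlying plane
z = z + rng.normal(0, 0.05, size=500)              # measurement noise

# Least-squares fit of z = a1 + a2*x + a3*y (multiple regression).
A = np.column_stack([np.ones(len(xy)), xy])        # design matrix [1, x, y]
(a1, a2, a3), *_ = np.linalg.lstsq(A, z, rcond=None)

# Flatness error: perpendicular distance of each vertex to the fitted plane.
dist = np.abs(a1 + a2 * xy[:, 0] + a3 * xy[:, 1] - z) / np.sqrt(a2**2 + a3**2 + 1)
print(a1, a2, a3, dist.mean())
```

With two such fitted planes (for the 30° and 45° views), the intersect angle follows from the angle between the plane normals.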

Mannequin model

On the face of the mannequin model (Figure 3), six circular markers, five mm in diameter, were secured at specific sites: the tip of the nose (3), the chin prominence (6), the right and left zygomatic-maxillary suture areas (2 and 4), and the posterior of the right and left zygomatic arches (1 and 5). A one-mm mark was made at the center of each five-mm marker. Four distances, between the one-mm marks of landmarks 1 and 2, 2 and 4, 4 and 5, and 3 and 6, were measured five times with a digital caliper (Mitutoyo® ABS Digimatic Solar Caliper; instrument error ±0.02 mm). These measurements were assumed to represent the actual distances between the landmarks.

FIGURE 3.

Schematic diagrams of mannequin showing the facial landmarks. The different angulations and corresponding head positions at which images were recorded are depicted


To assess the accuracy of the system for recording images of the mannequin, images of the mannequin head were captured at two time points, T1 and T2, one week apart. At each time point, the images were captured at four different angulations: 0°, 30°, 60°, and 90°. Five images were recorded at each angulation (Figure 3). For the images captured at the 0° angulation that represented the frontal view of the face, all the four distances between landmarks 1 and 2, 2 and 4, 4 and 5, and 3 and 6 were measured. For the images captured at the 30°, 60°, and 90° angulations, only the distance between landmarks 1 and 2 was measured. At each image angulation, the distances were measured five times with the Rainbow 3D software.
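Each image distance is the straight-line length between two picked 3D points. As a minimal illustration with hypothetical coordinates (the Rainbow 3D software reports x, y, and z for each picked landmark; these numbers are not from the study):

```python
import numpy as np

# Hypothetical 3D coordinates (mm) for two mannequin landmarks.
p1 = np.array([62.1, 48.3, 35.0])   # landmark 1 (right zygomatic arch)
p2 = np.array([41.7, 50.9, 71.2])   # landmark 2 (right zygomatic-maxillary area)

d = np.linalg.norm(p2 - p1)         # straight-line distance in mm
print(round(d, 2))
```

Note that the caliper measures the same chord in physical space, so caliper and image values are directly comparable.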

Statistics

Means and standard deviations were calculated for all the repeated measurements made on the precision and mannequin models at both T1 and T2. In addition, a measure of the image distortion was calculated as follows. For the precision model, distortion was calculated as the mean difference between the actual predetermined distance of the model and the distance measured on the model image expressed as a percentage of the actual distance. Similarly, for the mannequin model, distortion was calculated as the mean difference between the distance measured directly on the mannequin and the same distance measured on the image of the mannequin expressed as a percentage of the distance measured directly. A two-tailed t-test was used to test for significant differences between the following measures:

  1. Each distance measured directly on the precision model and the same distance measured on the images of the precision model.

  2. Each distance measured directly on the mannequin and the same distance measured on the images of the mannequin.

  3. The mean differences between the same distances measured at T1 and at T2.
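The distortion measure and the two-tailed t-test described above can be sketched with made-up numbers (the study's actual values are in Tables 1 and 3, and Minitab was used for the real analysis). Here the five repeated image measures are tested against the caliper value as a one-sample t-test, using only the standard library:

```python
import statistics as st

actual = 41.63                            # caliper distance (mm), hypothetical
image = [41.9, 42.1, 41.8, 42.0, 41.7]    # five repeated image measures, hypothetical

mean = st.mean(image)
sd = st.stdev(image)                      # sample standard deviation

# Distortion: mean difference as a percentage of the directly measured distance.
distortion_pct = 100 * (mean - actual) / actual

# One-sample two-tailed t statistic against the caliper value.
n = len(image)
t = (mean - actual) / (sd / n ** 0.5)
t_crit = 2.776                            # t(0.975, df = 4), two-tailed alpha = .05
significant = abs(t) > t_crit
print(round(distortion_pct, 2), round(t, 3), significant)
```

The critical value is hard-coded for df = 4; a statistics package would report the exact P-value instead.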

The statistical analyses were completed using the Minitab software.

Distance measurements of the precision model

The descriptive statistics for the measures at times T1 and T2 are given in Table 1. The values represent the mean differences (mm) between the distances recorded with the digital caliper on the precision model and similar distances recorded on the 3D images of the model at the three angulations: 0°, 30°, and 45°. Also, Table 1 shows similar results for the measurement of distortion.

TABLE 1.

Descriptive Statistics at T1 and T2 for the Difference Between Similar Distances Measured on the Precision Model and on the Images of the Model at Different Angulations. Similar Differences are Shown for the Distortion Measure


The results demonstrated that the mean difference between the precision model and the image captured at time T1 was the smallest for the distances X and Y at 0° angulation. These differences increased substantially at the 30° angulation and were the largest at the 45° angulation. For all the measurements and for the angulations at T1, the difference was greatest for the distance X1 at 45° and the smallest for the distance Y at 45°. The standard deviations, however, demonstrated a different trend. The standard deviation for the distance Y at 0° was the smallest, whereas the standard deviation for the distance X2 at 30° was the largest. The distortion measurement followed the same general trend. The results were similar at T2. The distance X1 at 30° showed the largest mean difference, whereas the distance Y at 30° was the smallest. The standard deviation for the distance Y at 45° was the smallest, whereas the standard deviation for distance X1 at 30° was the largest. Overall, the mean differences and standard deviations were larger at T2 than at T1.

The differences between the actual distances on the precision model and the same distances on the 3D image were significant (P ≤ .05) for the distance X at 0°, Y at 0°, X1 at 30°, X1 at 45°, and X2 at 45°. These results were similar for the distortion measurement. On comparing the values for the mean differences obtained at T1 and T2, the results demonstrated significant (P ≤ .05) differences for the distances X at 0°, Y at 0°, X1 at 30°, X2 at 30°, X1 at 45°, X2 at 45°, and Y at 45°. Only distance Y at 30° did not demonstrate a significant difference.

Surface measurements of the precision model

The errors between the vertices of the captured surfaces and the fitted planes are shown in Table 2. Overall, these differences were small. However, the camera system did not capture the exact geometry of the 90° angle formed by the two surfaces. The errors for this 90° corner in the images at 30° and 45° were calculated as the distance between the simulated 90° vertex and the corresponding point on each 30° and 45° image (d in Figure 4). The results demonstrated an error for d of 1.87 mm at both 30° and 45°.
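One way to compute an edge error of this kind is to intersect the two fitted planes, giving the ideal 90° edge as a line, and measure the distance from a captured edge point to that line. A sketch with hypothetical plane coefficients and edge point (not the study's fitted values):

```python
import numpy as np

# Planes in the form n . x = c; these two meet at 90°, like the model surfaces.
n1, c1 = np.array([1.0, 0.0, 0.0]), 0.0      # plane x = 0
n2, c2 = np.array([0.0, 0.0, 1.0]), 0.0      # plane z = 0

u = np.cross(n1, n2)                          # direction of the ideal edge line
# A point on both planes: solve the 3x3 system [n1; n2; u] x = [c1, c2, 0].
p0 = np.linalg.solve(np.vstack([n1, n2, u]), np.array([c1, c2, 0.0]))

p = np.array([1.2, 40.0, 1.4])                # hypothetical captured edge point
# Point-to-line distance: |(p - p0) x u| / |u|.
d = np.linalg.norm(np.cross(p - p0, u)) / np.linalg.norm(u)
print(round(d, 2))
```

A rounded-off captured edge shows up directly as a nonzero d, as in Figure 4.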

TABLE 2.

Surface Irregularity Test of Precision Model. The Degree of Surface Irregularity is Measured as the Mean Distance Between the Vertices on the Model and the Corresponding Fitted Plane

FIGURE 4.

Image of the intersecting “fitted” surfaces overlaid on the captured edge of the precision model. The simulated fitted surfaces intersect at a 90° edge. Distance d is the error between the captured and fitted edges


Distance measurements of mannequin model

The descriptive statistics for the distances measured on the images of the mannequin at times T1 and T2 are given in Table 3. The values represent the mean differences (mm) between the distances recorded with the digital caliper on the mannequin and the same distances recorded from the images of the mannequin. Also, Table 3 shows the measurement of distortion.

TABLE 3.

Descriptive Statistics at T1 and T2 for the Difference in Similar Distances Measured on the Mannequin Model and on the Images of the Model at Different Angulations. Similar Differences are Shown for the Distortion Measure


The results demonstrated that at T1, the mean difference between the right-side (1–2) distance at the 0° angulation measured on the mannequin and the same distance measured on the image was the largest, whereas the same mean difference measured at 60° was the smallest. For the image captured at the 60° angulation, the z-coordinate component was minimal; the z-coordinate component increased gradually from the 60° angulation toward 0° and from the 60° angulation toward 90°. As with the precision model, however, the standard deviations showed a different trend. The standard deviation for the difference in the distance Y (3–6) at 0° was the smallest, whereas the standard deviation for the left-side distance (4–5) at 0° was the largest. At T2, the difference and standard deviation for the right-side (1–2) distance at 30° were the smallest, whereas the right-side (1–2) distance and standard deviation at 0° were the largest.

The results of the t-test for significant (P ≤ .05) differences between the distances measured on the mannequin with the digital caliper and the same distances measured on the images demonstrated the following. Viewing the mannequin from the 0° angulation, the differences in the distances X (2–4), Y (3–6), Lt (4–5), and Rt (1–2) were significant. When viewed from the right side and focusing on the distance (1–2) only, significant (P ≤ .05) differences were found at 0° and 30°. No significant differences were found at 60° and 90°. Similar results were found for the measure of distortion.

These results suggest that accurate images are obtained from the frontal view. An overall comparison of the mean differences in the distances at T1 and T2 demonstrated a trend similar to that seen for the precision model. At T2, the mean differences and standard deviations were greater than at T1. The results of the t-test for the difference between similar distances measured on the images at T1 and T2 demonstrated significant (P ≤ .05) differences in the distances Lt (4–5) at 0° and Rt (1–2) at 60°.

A finding in this study was that measurements made on images captured with the 3D camera system from frontal views of both the precision and mannequin models were the most accurate. When the same measurements were made on images captured from views other than the frontal, there was a greater component of the third dimension, or z-coordinate. Thus, the system accuracy was greater the smaller the z-coordinate component incorporated in the image. This limitation was to be expected, given the camera configuration. The stereographic system captured images from two cameras, one on either side of the light source. Because the lenses were located relatively close to each other, resulting in a short baseline between the two views, it was difficult to obtain an accurate z-coordinate measurement. To help overcome this problem with the present camera specifications, the relative angle between the two side cameras could be increased, or more cameras could be positioned around the object.
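The baseline argument can be made concrete with the standard stereo-triangulation error model, z = fB/d, under which a fixed disparity error maps to a depth error that shrinks as the baseline B grows. This is a generic textbook model with illustrative numbers, not the vendor's actual calibration:

```python
# Generic stereo-triangulation error model: depth z = f * B / d, so a
# disparity error delta_d maps to a depth error delta_z ~ z**2 * delta_d / (f * B).
f = 2000.0          # focal length in pixels (hypothetical)
delta_d = 0.25      # disparity measurement error in pixels (hypothetical)
z = 600.0           # working distance in mm (hypothetical)

for B in (50.0, 100.0, 200.0):          # baseline between the two cameras, mm
    delta_z = z**2 * delta_d / (f * B)  # depth error in mm
    print(B, round(delta_z, 3))
```

Doubling the baseline (or, equivalently, widening the angle between the side cameras) halves the depth error, which is why frontal views, where the z-component of each measured distance is small, fared best.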

The accuracy of the images of the mannequin was greater than that of the images of the precision model. There was substantial image distortion when images of the precision model were captured with the sharp 90° angle facing the camera. In this instance, the sharp edge was not accurately reproduced in the image; the error for this edge was close to two mm. As a result, the investigator was not able to clearly define this region when recording distance measurements on these images. Fortunately, there are no sharp angles on the face, and this fact may have accounted for the better performance of the measures made on the mannequin model. Ultimately, it is up to the researcher to determine or set the degree of accuracy that will be accepted. This determination depends on a number of circumstances, including the exact purpose for which the measurements will be used, the cost of the equipment and the availability of funds, and the amount and cost of technical support. Our findings suggest that, for the purpose of capturing and making measurements on facial images, provided one is willing to accept errors in length of up to one mm, the camera system presented in this study would be sufficiently accurate for measurements made on images captured from the frontal perspective ±15°. These factors, coupled with the relatively low cost and ease of use, make this camera system attractive.

One approach to overcome the inaccuracies in images captured beyond the frontal perspective would be to capture the face from several views that are considered to be of acceptable accuracy and then to computationally combine these different views of the face. This approach involves a registration and then a combination of the images.10,11 For the purposes of illustration, an example is shown in Figure 5. In the example, two images of the mannequin captured from the frontal and lateral perspectives were combined. Each image of the face comprises 3D triangular meshes. Similar triangular components were identified in each of the two views, and these similarities were used as matching points to register, align, and combine the two views. After combining, the reconstructed meshes were smoothed using an appropriate filtering method12 (Figure 5). For this system, acceptable views could be images of the face captured from consecutive 15° angulations. In this manner, the entire face may be reconstructed for further analysis. Also, this approach could be used to compare two 3D images of the same patient taken at different times. For example, applying the same principles of image registration, pre- and posttreatment images of patients who undergo orthognathic surgery could be compared by registration of the images on the areas unaffected by the surgery.
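The rigid-alignment step at the core of this registration can be sketched with the Kabsch (SVD) algorithm on matched point pairs; the ICP method of reference 12 iterates exactly this step with an automatic correspondence search. The points and transform below are synthetic, not taken from the mannequin images:

```python
import numpy as np

# Synthetic matched landmark points from a "frontal" view.
rng = np.random.default_rng(1)
A = rng.uniform(-50, 50, size=(6, 3))

# The same points seen from a "lateral" view: rotated 30° about y, translated.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
B = A @ R_true.T + np.array([5.0, -2.0, 10.0])

# Kabsch: center both sets, take the SVD of the covariance, and build the
# optimal rotation (with a reflection guard) and translation.
ca, cb = A.mean(axis=0), B.mean(axis=0)
H = (A - ca).T @ (B - cb)
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
R = Vt.T @ D @ U.T
t = cb - R @ ca

err = np.abs(A @ R.T + t - B).max()   # residual after registration
print(err)
```

Once the views are superimposed, the overlapping meshes can be merged and smoothed as described for Figure 5.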

FIGURE 5.

Registration procedure with mannequin model. (A) Two 3D meshes captured from a frontal and lateral view. (B) Images after the registration and alignment of the two meshes. (C) Images after the smoothing step


The accuracy of a camera system that is capable of recording 3D images is presented in this study. The system accuracy was best for images recorded from frontal views ±15°. An approach was described to enhance the accuracy by capturing the face from several views that are considered to be of acceptable accuracy and then to computationally combine these different views of the face.

This study was supported in part by a grant (DE 13814-01A1) from the National Institute of Dental Research. Also, the study comprised work completed by Dr Lee in Dr Trotman's Laboratory in the Department of Orthodontics at the School of Dentistry, The University of North Carolina at Chapel Hill.

References

1. Broadbent BH. A new x-ray technique and its application to orthodontia. Angle Orthod. 1931;1:45–66.

2. Proffit WR, White RP Jr, Sarver DM. Contemporary Treatment of Dentofacial Deformity. St Louis, Mo: Mosby; 2003:92–95.

3. McCance AM, Moss JP, Wright WR, Linney AD, James DR. A three dimensional soft tissue analysis of 16 skeletal Class II patients following bimaxillary osteotomy. Br J Oral Maxillofac Surg. 1992;30:221–232.

4. Moss JP, Grindrod SR, Linney AD, Arridge SR, James D. A computer system for interactive planning and prediction of maxillofacial surgery. Am J Orthod. 1988;94:469–475.

5. Bill JS, Reuther JF, Dittmann W, Kübler N, Meier J, Pistner H, Wittenberg G. Stereolithography in oral and maxillofacial operation planning. Int J Oral Maxillofac Surg. 1995;24:98–103.

6. Hell B. 3D sonography. Int J Oral Maxillofac Surg. 1995;24:84–89.

7. Ayoub AF, David L. Three-dimensional modeling for modern diagnosis and planning in maxillofacial surgery. Int J Adult Orthod Orthognath Surg. 1996;11:225–233.

8. Genex Technologies, Inc. Rainbow 3D Camera User Guide. Kensington, Md: Genex Technologies, Inc; 2002.

9. Schroeder LD, Sjoquist DL, Stephan PE. Understanding Regression Analysis: An Introductory Guide. Quantitative Applications in the Social Sciences, No. 57. Thousand Oaks, Calif: Sage Publications; 1986.

10. Ohtake Y, Belyaev AG, Bogaevski IA. Polyhedral surface smoothing with simultaneous mesh regularization. In: Proceedings of Geometric Modeling and Processing. Hong Kong, China; April 10–12, 2000:229–237.

11. Pulli K. Multiview registration for large datasets. In: Proceedings of the Second International Conference on 3D Digital Imaging and Modeling (3DIM '99). Ottawa, Canada; October 4–8, 1999:160–168.

12. Rusinkiewicz S, Levoy M. Efficient variants of the ICP algorithm. In: Proceedings of the Third International Conference on 3D Digital Imaging and Modeling (3DIM '01). Quebec City, Canada; May 28–June 1, 2001:145–152.

Author notes

Corresponding author: Carroll-Ann Trotman, BDS, MA, MS, Department of Orthodontics, University of North Carolina, CB# 7459, 275 Brauer Hall, Chapel Hill, NC 27599-7450 ([email protected])