Yingsi.com December 21, 2021
The camera has become an important component of AR/VR headsets. In a patent application titled "Camera for Augmented Reality Display", Microsoft describes a camera designed for augmented reality headsets.
Figure 1 schematically depicts an augmented reality scenario. Specifically, FIG. 1 shows a user 100 using an augmented reality display device 102 in a real-world environment 104. The augmented reality display device 102 presents images to the user's eyes via a near-eye display, thereby providing the augmented reality experience. Images presented by the near-eye display include real-world imagery captured by a camera, mixed with virtual imagery generated by, or received by, the augmented reality display device.
In the example of FIG. 1, the user can see a real sofa 110, imaged by the augmented reality display device's camera, as well as a virtual character 112 that is generated by the augmented reality display device and presented as part of the augmented reality experience.
Alternatively, the virtual imagery can be generated by a device other than the augmented reality display device. For example, the augmented reality display device may receive a pre-rendered virtual image from a separate rendering computer and display it through the near-eye display. The rendering computer can be local to the augmented reality display device (communicating over a wired connection or a suitable wireless protocol), or remote from it (such as a server computer), communicating over the Internet or another suitable network.
Figure 2 describes the mixing of real and virtual images in more detail. Specifically, FIG. 2 schematically shows various aspects of the augmented reality display device 102 during use. The near-eye display 106 is positioned near the eye 200 of the user 100, such that images presented by the near-eye display are visible to the user. The augmented reality display device 102 is equipped with a camera 202, which is configured to capture light from the real-world environment and generate a sensor output from which images can be produced and presented to the user's eye via the near-eye display.
Specifically, as shown, the camera 202 captures an image 204 of the real-world environment 104 of FIG. 1 based on its sensor output signal 203. Notably, image 204 can be a still image, or one frame of a video stream having any suitable frame rate, such as 90 frames per second.
The light received from the real-world environment is typically visible light, used to generate a visible-light image of the environment similar to what the user's eyes would see without the augmented reality display device. In other examples, however, the camera can image the real-world environment in other spectral bands (e.g., infrared, ultraviolet, X-ray). Since such bands are invisible to the human eye, the camera and/or the augmented reality display device can convert the pixel values corresponding to the received light into suitable visible RGB values.
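The patent does not specify how this conversion is done; as an illustration only, the following sketch maps intensities from a non-visible band (e.g., infrared readings) onto a simple false-color RGB ramp. The function name, value range, and colormap are my own assumptions, not part of the patent.

```python
# Sketch (illustrative, not from the patent): map one non-visible-band pixel
# value onto a visible RGB triple using a simple "heat" ramp. A real device
# could use any perceptually suitable colormap.

def ir_to_rgb(value, lo, hi):
    """Map an IR intensity in [lo, hi] to an (r, g, b) tuple in 0-255."""
    t = (value - lo) / (hi - lo)        # normalize to 0..1
    t = min(max(t, 0.0), 1.0)           # clamp out-of-range readings
    r = int(255 * min(1.0, 2 * t))      # red ramps up over the lower half
    g = int(255 * max(0.0, 2 * t - 1))  # green joins in over the upper half
    b = 0                               # blue unused in this simple ramp
    return (r, g, b)

# Coldest pixel -> black, mid-range -> red, hottest -> yellow.
print([ir_to_rgb(v, lo=0, hi=1024) for v in (0, 512, 1024)])
```
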
Further, in FIG. 2, the camera 202 is positioned along an optical axis 212 extending from the user's eye 200. In this way, the camera's position can be substantially similar to the position of the user's eye with respect to the X and Y axes, differing only along the Z axis. In other examples, however, the camera need not be placed along the optical axis of the eye, but may have any suitable position with respect to the X, Y, and Z axes. Figures 3-6 provide more detail on camera 202.
The augmented reality display device 102 includes a virtual image renderer 206 configured to generate virtual imagery for display to the user's eye via the near-eye display. As shown, the virtual image renderer has generated a virtual image 208 depicting a fictional wizard character. The virtual image 208 can then be superimposed on the real-world image 204 to produce an augmented reality image 210. In this way, the user perceives the fictional wizard character as if it actually existed in the user's real-world environment.
As described above, the camera 202 captures the image 204 of the real-world environment, and image 204 is combined with the virtual image 208 to give the augmented reality image 210. The augmented reality image 210 is presented to the user's eye 200 via the near-eye display 106, substantially replacing the user's direct view of the environment. However, as shown, the camera's position differs from the position of the user's eye, which means the real world is imaged from a perspective different from the one the user's brain normally expects. As mentioned above, this can cause a variety of problems, including dizziness.
This problem can be mitigated when the length of the optical path that light travels within the camera, from the aperture to the image sensor, is within a suitable threshold range of the distance between the user's eye and the camera aperture. This has an effect similar to moving the effective location of the image sensor closer to the user's eye.
Figures 3A and 3B schematically depict optical elements disposed within camera 202 that can be used to lengthen the path of light from the aperture to the image sensor. Specifically, FIG. 3A schematically depicts the user's eye 200, along with the near-eye display 106 and camera 202 of the augmented reality display device 102, shown in cross-section. The camera 202 includes an aperture 300 configured to receive rays 302a and 302b from the real-world environment, and an image sensor 304 configured to respond to light received from the real-world environment by generating a sensor output signal from which an image depicting the environment can be produced. Rays 302a and 302b travel along an optical path from the aperture to the image sensor provided by one or more optical elements of the camera.
The camera aperture can have any suitable shape and size. In this example, the aperture is an annular ring formed in the environment-facing surface of the camera, as shown in FIG. 6, which depicts that surface of camera 202. As shown, the camera includes a single annular aperture formed near its outer edge. The image sensor 304 is shown in dashed lines to indicate its position relative to the camera body and the aperture. In other examples, however, the camera can include multiple apertures and/or apertures having other suitable shapes.
Any suitable type of image sensor can be used. In one example, the image sensor can be a complementary metal-oxide-semiconductor (CMOS) sensor; in another, a charge-coupled device (CCD) sensor.
Returning to FIG. 3A, the camera 202 includes two light-redirecting surfaces 306a and 306b that guide light from the aperture to the image sensor. Specifically, rays 302a and 302b are repeatedly reflected between the two light-redirecting surfaces, thereby lengthening the optical path. It should be understood that the specific arrangement depicted in FIG. 3A is provided as an example and is not limiting. In FIG. 3A, light is reflected twice by the aperture-distal light-redirecting surface and once by the aperture-proximal surface along the optical path. In other implementations, however, light can be reflected any suitable number of times by each light-redirecting surface along the optical path. This may vary depending on dimensional constraints, material constraints, the desired optical path length, and/or other considerations.
Further, in FIG. 3A, the image sensor is disposed on the aperture-proximal one of the two light-redirecting surfaces. In other examples, the image sensor can be disposed on the aperture-distal light-redirecting surface, or at any other suitable location relative to the other components of camera 202.
As described above, the problems noted earlier can be reduced when the length of the optical path is within a suitable threshold range of the distance between the user's eye and the camera aperture, as is the case in Figures 3A and 3B. Notably, the repeated reflection of light between light-redirecting surfaces 306a and 306b increases the length of the optical path, making it greater than the physical length of the camera and substantially equal to the distance between the user's eye 200 and the aperture 300.
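The geometric effect of folding can be sketched numerically. The sketch below is my own illustration under the simplifying assumptions of parallel surfaces and a fixed number of reflections; the gap, bounce count, and incidence angle are invented examples, not values from the patent.

```python
# Sketch (illustrative, not from the patent): total optical path length of a
# folded camera, where light crosses the gap between two parallel
# light-redirecting surfaces (bounces + 1) times before reaching the sensor.
import math

def folded_path_length(gap_m, bounces, incidence_deg=0.0):
    """Distance travelled: (bounces + 1) traversals of the gap, each
    lengthened by an oblique angle of incidence."""
    traversals = bounces + 1
    return traversals * gap_m / math.cos(math.radians(incidence_deg))

# A 5 mm-thick folded optic with 3 internal reflections at normal incidence
# yields a 20 mm optical path inside a much thinner physical package.
print(folded_path_length(0.005, 3))  # 0.02
```

This is why the camera can sit flat against the headset while still imaging "as if" from roughly an eye's distance behind its aperture.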
FIG. 3B shows an equivalent "unfolded" view of camera 202. Specifically, it shows the user's eye 200, the aperture 300, rays 302a and 302b, and the aperture-proximal light-redirecting surface 306a, while other components of camera 202 are omitted. In essence, FIG. 3B shows an alternative optical path in which the light propagates the same distance through space, but without being repeatedly reflected by the light-redirecting surfaces.
Reference line 312 indicates the position at which the "unfolded" optical path is reflected in FIG. 3A. Following this alternative "unfolded" path, the light would come to a focus at a position at or near the user's eye, rather than at the image sensor. This corresponds to an image sensor disposed at or near the user's eye, as indicated by block 314, which shows the equivalent position of the image sensor. In other words, the configuration depicted in FIG. 3A allows the image sensor to image the real-world environment as if the sensor were located at position 314 shown in FIG. 3B. The image sensor therefore has a perspective similar to that of the user's eye.
It should be understood that the optical path length may have any suitable relationship to the distance between the user's eye and the camera aperture. Typically, it is desirable for the optical path length to match that distance as closely as possible. However, it is estimated that significant benefit can be achieved even if the optical path length is only within 50% of the eye-to-aperture distance.
Therefore, in general, the optical path length will be within a suitable threshold of the distance between the user's eye and the camera aperture. As an example, the threshold can be 50% of that distance; in other examples it can be 25% or 10%. Any suitable value can be used, and in particular cases the threshold can be based on the distance between the eye and the object being viewed. In certain cases, the optical path length can be dynamically adjusted.
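The threshold criterion above can be stated as a one-line check. This is my own formalization of the described criterion (function name and the example distances are invented); the 25% and 10% thresholds are the ones mentioned in the text.

```python
# Sketch (my formalization of the patent's criterion): is the camera's internal
# optical path length within a fractional threshold of the eye-to-aperture
# distance?

def path_within_threshold(path_len_m, eye_to_aperture_m, threshold=0.5):
    """True if |path - distance| <= threshold * distance."""
    return abs(path_len_m - eye_to_aperture_m) <= threshold * eye_to_aperture_m

# Example: a 20 mm folded path against a 25 mm eye-to-aperture distance
# (a 20% mismatch) passes a 25% threshold but fails a 10% one.
print(path_within_threshold(0.020, 0.025, threshold=0.25))  # True
print(path_within_threshold(0.020, 0.025, threshold=0.10))  # False
```
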
In the example of FIG. 3A, the two light-redirecting surfaces are separate components separated by an air gap. In general, the two light-redirecting surfaces can be separated by any suitable light-transmissive medium. Alternatively, the two light-redirecting surfaces may be the first and second surfaces of a single structure.
FIG. 4 schematically shows a different example camera 400 having an aperture 402 that receives rays 404a and 404b. The light travels along an optical path from the aperture to the image sensor 406, being repeatedly reflected by light-redirecting surfaces 408a and 408b. Unlike FIG. 3A, in this example the light-redirecting surfaces are the first and second surfaces of a single light-transmissive substrate 410.
In cases where the two light-redirecting surfaces are physically separate, as in FIG. 3A, the distance between them can be dynamically adjusted. In other words, either or both of the two light-redirecting surfaces can be moved relative to the camera body, changing the distance between them. To this end, FIG. 3A includes a distance adjuster 308 operatively coupled to the two light-redirecting surfaces and a controller 310.
For example, one or both of the light-redirecting surfaces may be mounted on a track, and the distance adjuster can include a motor configured to move a surface along the track, for example along the Z axis. Controller 310 can include any suitable computer hardware and firmware components, such that the controller can be activated to dynamically adjust the distance between the light-redirecting surfaces. By adjusting this distance, the optical path length from the aperture to the image sensor can be changed.
When the two light-redirecting surfaces are the first and second surfaces of a single light-transmissive substrate, as in FIG. 4, a similar effect can be achieved. For example, the light-transmissive substrate may be an electro-optic material with a dynamically variable refractive index, such as a liquid crystal or another suitable material. Thus, similar to FIG. 3A, camera 400 can be operatively coupled to a controller configured to affect the light-transmitting characteristics of the substrate by dynamically applying a voltage or current.
In the example of FIG. 3A, the depicted light-redirecting surfaces are flat and parallel to each other, but other examples can differ. Figure 5 depicts another example camera 500. Like cameras 202 and 400, camera 500 includes an aperture 502 configured to receive rays 504a and 504b from the real-world environment. The light travels along an optical path from the aperture to the image sensor 506, being repeatedly reflected by light-redirecting surfaces 508a and 508b.
Unlike the cameras described above, the light-redirecting surfaces in camera 500 are curved. Such curved light-redirecting surfaces may have any suitable radius of curvature, and the curvature can be convex or concave, depending on the implementation. Additionally, the two light-redirecting surfaces need not be curved in the same manner or to the same degree.
Light-redirecting surfaces 508a and 508b are separate components spaced apart by an air gap. Thus, as with camera 202, the distance between the curved light-redirecting surfaces can in some cases be dynamically adjusted by any suitable mechanism. Alternatively, as with camera 400, the curved light-redirecting surfaces may be separate surfaces of a single light-transmissive substrate, which in some cases may be an electro-optic material with a dynamically adjustable refractive index.
The light-redirecting surfaces and other components of the camera can be produced by any suitable manufacturing method. As an example, planar light-redirecting surfaces can be produced by wafer-scale manufacturing; a light-redirecting surface may then be a silicon wafer or another suitable material, with one or more optical elements or coatings applied to the wafer. Alternatively, a light-redirecting surface can be a singulated piece of a single wafer. When the light-redirecting surfaces are curved, other suitable manufacturing methods, such as diamond turning, can be used.
In general, the light-redirecting surfaces and other optical components of the camera can be made of any suitable material. For example, the light-redirecting surfaces can be made of silicon, plastic, metal, glass, ceramic, or other suitable materials. When the light-redirecting surfaces are separate surfaces of a light-transmissive substrate, the substrate can be made of any suitable transmissive material, such as glass, transparent plastic, transparent silicon, or an electro-optic material.
In various cases, substantially the entire surface area of each light-redirecting surface may be reflective, or only portions of each light-redirecting surface may be reflective. In other words, some or all of each light-redirecting surface may carry a reflective coating printed on, or otherwise applied to, the surface.
In addition, the optical elements arranged in the camera are not necessarily limited to the light-redirecting surfaces described so far. On the contrary, as non-limiting examples, suitable optical elements may include various reflective elements, transmissive elements, refractive elements, diffractive elements, holographic elements, lenses, filters, diffusers, and the like.
Related patent: Microsoft Patent | Camera for Augmented Reality Display
The Microsoft patent application titled "Camera for Augmented Reality Display" was originally filed in May 2020 and was recently published by the US Patent and Trademark Office.