{"id":390,"date":"2021-04-16T17:32:40","date_gmt":"2021-04-17T00:32:40","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/?p=390"},"modified":"2021-04-19T14:38:24","modified_gmt":"2021-04-19T21:38:24","slug":"understanding-indirect-tof-depth-sensing","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/understanding-indirect-tof-depth-sensing\/","title":{"rendered":"Understanding Indirect ToF Depth Sensing"},"content":{"rendered":"<p>Now that you\u2019ve had a chance to read the <a href=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/intro-to-microsoft-time-of-flight-tof-imaging-technology\/\">Intro to Indirect Time-of-Flight (ToF)<\/a> post, let\u2019s dig a little deeper into the mechanics behind the Microsoft implementation of time-of-flight depth sensing, as well as a couple functional advantages it presents.<\/p>\n<h2><span style=\"font-size: 18pt;\">How does \u201cIndirect\u201d ToF work?<\/span><\/h2>\n<p>To recap, \u201ctime-of-flight\u201d refers to emitting light at an object and measuring how long it takes to bounce back and return, then converting the time measurement into distance using the speed of light, giving us the object\u2019s shape and position in its surroundings.<\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/ToF-Diagram-B4.1.png\"><img decoding=\"async\" class=\"alignnone wp-image-383\" src=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/ToF-Diagram-B4.1.png\" alt=\"ToF Diagram\" width=\"371\" height=\"333\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/ToF-Diagram-B4.1.png 478w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/ToF-Diagram-B4.1-300x269.png 300w\" sizes=\"(max-width: 371px) 100vw, 371px\" 
\/><\/a><\/p>\n<p style=\"text-align: left;\"><span style=\"font-size: 10pt;\"><strong>Figure 1. Generalized operation of time-of-flight sensing.<\/strong><\/span><\/p>\n<p>In practice, we don\u2019t use the actual round-trip time \u2013 to get millimeter precision, we would need picosecond stopwatch circuits for every pixel! Instead, we take the \u201cindirect\u201d approach, where we emit light with a periodic or \u201cmodulated\u201d pattern, then calculate the phase shift of the returned signal.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i2.jpg\"><img decoding=\"async\" class=\"alignnone size-full wp-image-396\" src=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i2.jpg\" alt=\"Image i2\" width=\"1101\" height=\"503\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i2.jpg 1101w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i2-300x137.jpg 300w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i2-1024x468.jpg 1024w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i2-768x351.jpg 768w\" sizes=\"(max-width: 1101px) 100vw, 1101px\" \/><\/a><\/p>\n<p><span style=\"font-size: 10pt;\"><strong>Figure 2. Plots of modulated signal energy as time elapses, both transmitted and received, and the mathematical relationship between phase shift and distance.<\/strong><\/span><\/p>\n<p>How do we measure this phase shift? At the silicon level, each pixel in the Microsoft ToF sensor is composed of two detectors, or \u201cwells\u201d, that receive photons and turn them into electric charge. Each well can be opened or closed by the sensor control circuit. 
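As an aside, the phase-to-distance conversion from Figure 2 takes only a few lines of Python. This is a simplified sketch for illustration; the 200 MHz modulation frequency is an assumed example value, not a specification of the actual sensor:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad: float, f_mod_hz: float) -> float:
    """Convert a measured phase shift into target distance.

    The light covers the distance twice (out and back), so the phase
    accumulated is phi = 2*pi * f_mod * (2*d / c), which rearranges to
    d = c * phi / (4*pi * f_mod).
    """
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# Example: a quarter-cycle shift (pi/2) at an assumed 200 MHz modulation
print(f"{phase_to_distance(math.pi / 2, 200e6):.3f} m")  # about 0.187 m
```

Note how a higher modulation frequency squeezes the same phase range into a shorter distance span, which is why frequency choice trades range against precision.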
If we keep each well open for half of a modulation period \u2013 side A in sync with the emitter, and side B exactly out of sync \u2013 then the returning photons will be split between the two detectors, but at least one well is always ready to catch them. Based on the proportion of photons that accumulate in detector A vs. detector B, we can calculate the shift of the returned signal. (We wait for the modulation period to repeat a few times to accumulate charge \u2013 this is referred to as \u201cintegration time\u201d, much like shutter speed on a color camera.)<\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/fig-3a.gif\"><img decoding=\"async\" class=\"alignnone size-full wp-image-401\" src=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/fig-3a.gif\" alt=\"Image fig 3a\" width=\"480\" height=\"270\" \/><\/a><\/p>\n<p><span style=\"font-size: 10pt;\"><strong>Figure 3a. Animation showing the synchronized emitter-sensor operation. Modulated light leaves the emitter on the far right, strikes the target on the left, and returns to a certain pixel in the sensor on the middle-right. Detector A (in green) is open while the emitter is active, and Detector B (in cyan) is open during the inactive part of the cycle. 
Because of the phase shift, returning photons will not necessarily arrive in sync.<\/strong><\/span><\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-1-B4.4.png\"><img decoding=\"async\" class=\"alignnone size-large wp-image-377\" src=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-1-B4.4-1024x416.png\" alt=\"Image Modulation 1 B4 4\" width=\"640\" height=\"260\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-1-B4.4-1024x416.png 1024w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-1-B4.4-300x122.png 300w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-1-B4.4-768x312.png 768w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-1-B4.4.png 1440w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p><span style=\"font-size: 10pt;\"><strong>Figure 3b. 2D plots of light energy over time in the same scenario. The difference in accumulated charge between the two detectors will be the \u201coutput\u201d of this pixel.<\/strong><\/span><\/p>\n<h2><span style=\"font-size: 18pt;\">Is one measurement enough?<\/span><\/h2>\n<p>However, we can\u2019t stop here. If a second object appears in the scene, at a distance that gives it the same phase shift but the opposite direction, what happens? The split of accumulated charge in the pixel is just like it was before, but we have no way to know that detector A was charged at the end of the return instead of the beginning. 
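The problem is easy to see in a toy model of the A/B charge split. This sketch assumes an idealized square-wave modulation with no ambient light, and expresses the return delay as a fraction of one modulation period (a simplification of the real pixel, purely for illustration):

```python
def ab_split(delay: float) -> tuple[float, float]:
    """Charge fractions landing in wells A and B for a half-period
    return pulse delayed by `delay` (as a fraction of one period).

    Well A is open for the first half of the period, well B for the
    second half; the returning pulse straddles the boundary between them.
    """
    d = delay % 1.0
    if d <= 0.5:
        return 0.5 - d, d       # pulse overlaps the end of A, start of B
    return d - 0.5, 1.0 - d     # pulse wraps around into the next period

# Two different target distances, identical pixel output:
near = ab_split(0.2)  # shifted forward by 0.2 of a period
far = ab_split(0.8)   # shifted the same amount in the opposite direction
print(near, far)
```

With only this one pair of measurements, nothing distinguishes the two cases.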
This means that our measurement is ambiguous.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-2-B4.5.png\"><img decoding=\"async\" class=\"size-large wp-image-378 aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-2-B4.5-1024x512.png\" alt=\"Modulation\" width=\"640\" height=\"320\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-2-B4.5-1024x512.png 1024w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-2-B4.5-300x150.png 300w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-2-B4.5-768x384.png 768w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/Modulation-2-B4.5.png 1440w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/><\/a><span style=\"font-size: 10pt;\"><strong>Figure 4. The same scenario as above, but contrasted with a hypothetical second target that\u2019s about one-third of a modulation period farther away. Our (A \u2013 B) output has not changed\u2026<\/strong><\/span><\/p>\n<p>There are a few ways to remove the ambiguity. The most common method is to repeat the measurements immediately, but push the sensor out of phase, filling out the picture of the returned wave. 
In this example, we modulate the sensor 90 degrees out of phase with respect to the light source, changing the open-close pattern of the charge accumulators in the pixel:<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i3.jpg\"><img decoding=\"async\" class=\"alignnone size-full wp-image-397\" src=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i3.jpg\" alt=\"Image i3\" width=\"1125\" height=\"525\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i3.jpg 1125w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i3-300x140.jpg 300w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i3-1024x478.jpg 1024w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i3-768x358.jpg 768w\" sizes=\"(max-width: 1125px) 100vw, 1125px\" \/><\/a><\/p>\n<p><span style=\"font-size: 10pt;\"><strong>Figure 5. Another pair of measurements in the original scenario adds information that can be used to disambiguate.<\/strong><\/span><\/p>\n<p>Seeing how the returned light \u201cprojects\u201d itself onto the synchronized and 90-degree wells, we can use complex numbers to reconstruct the original phase shift, discarding the false second solution. This complex sum gives us both the phase shift (angle from the real axis) we were originally after <em>and<\/em> the brightness of the returned light (as the magnitude of the complex number). 
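As a sketch of that reconstruction, with toy numbers and the convention that the 0-degree difference is the real part and the 90-degree difference is the imaginary part:

```python
import math

def reconstruct(diff_0deg: float, diff_90deg: float) -> tuple[float, float]:
    """Combine the two differential measurements into phase and brightness.

    Treating (diff_0deg, diff_90deg) as the real and imaginary parts of a
    complex number, atan2 recovers the phase shift over a full cycle (no
    false second solution), and the magnitude gives the strength of the
    returned light.
    """
    phase = math.atan2(diff_90deg, diff_0deg) % (2.0 * math.pi)
    brightness = math.hypot(diff_0deg, diff_90deg)
    return phase, brightness

# Two targets with identical 0-degree outputs but opposite 90-degree
# outputs now resolve to different phases:
print(reconstruct(0.1, 0.4))
print(reconstruct(0.1, -0.4))
```

The magnitudes match (same returned brightness), but the phases no longer collide.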
From this brightness calculation, we can extract object reflectivity, or monochrome images of the scene in infrared!<\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i4.jpg\"><img decoding=\"async\" class=\"alignnone size-full wp-image-398\" src=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i4.jpg\" alt=\"Image i4\" width=\"772\" height=\"653\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i4.jpg 772w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i4-300x254.jpg 300w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i4-768x650.jpg 768w\" sizes=\"(max-width: 772px) 100vw, 772px\" \/><\/a><\/p>\n<p><span style=\"font-size: 10pt;\"><strong>Figure 6. Combining the two difference measurements from the pixel.<\/strong><\/span><\/p>\n<h2><span style=\"font-size: 18pt;\">What about other sources of light?<\/span><\/h2>\n<p>Now consider a case where the modulated light from our camera is not the only light present in the scene. It\u2019s common for other sources, such as the sun, to provide \u201cambient\u201d light strong enough to show up in our measurements. This has a diluting effect, dropping extra photons onto our detectors whenever they are open.<\/p>\n<p>Since this extra light is generally unmodulated, it will impact both detectors about equally. However, our phase shift measurements depend only on the difference between the two, which means that we can tolerate a lot of ambient light without introducing a systematic bias! Not only that, we don\u2019t have to dynamically tailor our shutter speed to the environment, unless we observe light levels so high that pixels are \u201csaturating\u201d, or completely filling the charge wells during an exposure. 
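The cancellation is easy to check in a toy model (illustrative numbers only; the equal-offset assumption is idealized):

```python
def pixel_output(a: float, b: float, ambient: float = 0.0) -> float:
    """Differential pixel output in the presence of unmodulated ambient light.

    Ambient photons arrive at a steady rate, so each well collects the
    same extra charge while it is open; the offset cancels in the
    A - B difference.
    """
    return (a + ambient) - (b + ambient)

print(round(pixel_output(0.3, 0.2), 6))               # 0.1 in the dark
print(round(pixel_output(0.3, 0.2, ambient=5.0), 6))  # 0.1 in bright sunlight
```

The same differential output emerges regardless of how much constant light is added, right up until the wells saturate.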
These benefits of differential measurement turn a lot of potentially challenging scenarios into plug-and-play simplicity.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i5.jpg\"><img decoding=\"async\" class=\"alignnone size-full wp-image-399\" src=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i5.jpg\" alt=\"Image i5\" width=\"1087\" height=\"561\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i5.jpg 1087w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i5-300x155.jpg 300w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i5-1024x528.jpg 1024w, https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-content\/uploads\/sites\/68\/2021\/04\/i5-768x396.jpg 768w\" sizes=\"(max-width: 1087px) 100vw, 1087px\" \/><\/a><\/p>\n<p><span style=\"font-size: 10pt;\"><strong>Figure 7. The ambient light energy contributes about the same offset to each detector measurement, leaving the difference unaffected.<\/strong><\/span><\/p>\n<p>Now, I hate to attenuate the sunny overview we\u2019ve had so far, but it\u2019s important to note that the indirect time-of-flight system is not without its sources of error. Ambient light may not cause interference by matching up with modulation, but it still adds its own variation, or \u201cnoise\u201d, to the measurements. There is also noise in the system itself \u2013 pixels may respond more strongly or weakly to the same amount of light; the light source can experience time or intensity fluctuations. If ambient light begins to overfill the pixels, it compresses the space that the two wells need to differentiate themselves from each other, which intensifies the effect of any noise contributions. 
We can\u2019t completely avoid this, but to reduce it, we select a wavelength for our emitter from an atmospheric absorption band, and add a sensor bandpass filter to help screen photons we didn\u2019t produce ourselves.<\/p>\n<p>In conclusion, differential response of modulated light as an indirect time-of-flight measurement brings a lot to the table, even in this basic form: robust accuracy even in the presence of extra light, performance without much prior knowledge of the environment, additional measurement streams such as reflectivity and monochrome IR imaging. In future posts, we\u2019ll cover even more benefits of our ToF sensor and the improvements that we made on the basic indirect ToF mechanism.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Now that you\u2019ve had a chance to read the Intro to Indirect Time-of-Flight (ToF) post, let\u2019s dig a little deeper into the mechanics behind the Microsoft implementation of time-of-flight depth sensing, as well as a couple functional advantages it presents. How does \u201cIndirect\u201d ToF work? To recap, \u201ctime-of-flight\u201d refers to emitting light at an object [&hellip;]<\/p>\n","protected":false},"author":57912,"featured_media":399,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-390","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-azure-depth-platform"],"acf":[],"blog_post_summary":"<p>Now that you\u2019ve had a chance to read the Intro to Indirect Time-of-Flight (ToF) post, let\u2019s dig a little deeper into the mechanics behind the Microsoft implementation of time-of-flight depth sensing, as well as a couple functional advantages it presents. How does \u201cIndirect\u201d ToF work? 
To recap, \u201ctime-of-flight\u201d refers to emitting light at an object [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/posts\/390","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/users\/57912"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/comments?post=390"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/posts\/390\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/media\/399"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/media?parent=390"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/categories?post=390"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-depth-platform\/wp-json\/wp\/v2\/tags?post=390"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}