{"id":193437,"date":"2024-07-24T02:24:25","date_gmt":"2024-07-24T07:24:25","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/07\/nonlinear-encoding-in-diffractive-information-processing-using-linear-optical-materials"},"modified":"2024-07-24T02:24:25","modified_gmt":"2024-07-24T07:24:25","slug":"nonlinear-encoding-in-diffractive-information-processing-using-linear-optical-materials","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/07\/nonlinear-encoding-in-diffractive-information-processing-using-linear-optical-materials","title":{"rendered":"Nonlinear encoding in diffractive information processing using linear optical materials"},"content":{"rendered":"<p>Furthermore, many experimental factors, such as fabrication errors and physical misalignments, can affect the performance of diffractive processors during the experimental deployment stage. Investigating the inherent robustness of different nonlinear encoding strategies to such imperfections, as well as their integration with vaccination-based training strategies<sup><a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 39\" title=\"Mengu, D. et al. Misalignment resilient diffractive optical networks. Nanophotonics 9, 4207&ndash;4219 (2020).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR39\" id=\"ref-link-section-d83158163e7888\">39<\/a><\/sup> or in situ training methods<sup><a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 40\" title=\"Momeni, A., Rahmani, B., Mall\u00e9jac, M., del Hougne, P. & Fleury, R. 
Backpropagation-free training of deep physical neural networks. Science 382, 1297&ndash;1303 (2023).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR40\" id=\"ref-link-section-d83158163e7892\">40<\/a><\/sup>, would provide more comprehensive guidance on the implementation and limitations of these approaches. These considerations would be crucial for future research and practical implementations of diffractive optical processors.<\/p>\n<p>Throughout the manuscript, our analyses assumed that diffractive optical processors consist of several stacked diffractive layers interconnected through free-space light propagation, as commonly used in the literature<sup><a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 10\" title=\"Lin, X. et al. All-optical machine learning using diffractive deep neural networks. Science 361, 1004&ndash;1008 (2018).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR10\" id=\"ref-link-section-d83158163e7899\">10<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 13\" title=\"Qian, C. et al. Performing optical logic operations by a diffractive neural network. Light Sci. Appl. 9, 59 (2020).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR13\" id=\"ref-link-section-d83158163e7902\">13<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 41\" title=\"Yan, T. et al. Fourier-space diffractive deep neural network. Phys. Rev. Lett. 
123, 023901 (2019).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR41\" id=\"ref-link-section-d83158163e7905\">41<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 42\" title=\"Fang, X. et al. Orbital angular momentum-mediated machine learning for high-accuracy mode-feature encoding. Light Sci. Appl. 13, 49 (2024).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR42\" id=\"ref-link-section-d83158163e7908\">42<\/a><\/sup>. Our forward model employs the angular spectrum method for light propagation, a broadly applicable technique known for its accuracy, covering all the propagating modes in free space. While our forward model does not account for multiple reflections between the diffractive layers, it is important to note that such cascaded reflections are much weaker than the transmitted light and, thus, have a negligible impact on the optimization process. This simplification does not compromise the model\u2019s experimental validity since a given diffractive model also acts as a 3D filter for such undesired secondary sources that were ignored in the optimization process; stated differently, a by-product of the entire optimization process is that the resulting diffractive layers collectively filter out some of these undesired sources of secondary reflections, scattering them outside the output FOV. The foundation of our model has been extensively validated through various experiments<sup><a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 10\" title=\"Lin, X. et al. All-optical machine learning using diffractive deep neural networks. 
Science 361, 1004&ndash;1008 (2018).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR10\" id=\"ref-link-section-d83158163e7912\">10<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 11\" title=\"Li, J., Mengu, D., Luo, Y., Rivenson, Y. & Ozcan, A. Class-specific differential detection in diffractive optical neural networks improves inference accuracy. Adv. Photon. 1, 046001 (2019).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR11\" id=\"ref-link-section-d83158163e7915\">11<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 16\" title=\"Bai, B. et al. Data-class-specific all-optical transformations and encryption. Adv. Mater. 35, 2212091 (2023).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR16\" id=\"ref-link-section-d83158163e7918\">16<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"Li, J. et al. Unidirectional imaging using deep learning&ndash;designed materials. Sci. Adv. 9, eadg1505 (2023).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR18\" id=\"ref-link-section-d83158163e7921\">18<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 43\" title=\"Bai, B. et al. Information hiding cameras: optical concealment of object information into ordinary images. Sci. Adv. 
10, eadn9420 (2024).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR43\" id=\"ref-link-section-d83158163e7924\">43<\/a><\/sup>, providing a good match to the corresponding numerical model in each case, further supporting the accuracy of our forward model and diffractive processor design scheme.<\/p>\n<p>Finally, our numerical analyses were conducted using coherent monochromatic light, which has many practical, real-world applications such as holographic microscopy and sensing, laser-based imaging systems, optical communications, and biomedical imaging. These applications, and many others, benefit from the precise control of the wave information carried by coherent light. In addition to coherent illumination, diffractive optical processors can also be designed to accommodate temporally and spatially incoherent illumination. By optimizing the layers for multiple wavelengths of illumination, a diffractive processor can be effectively designed to operate under broadband illumination conditions<sup><a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"Li, J. et al. Unidirectional imaging using deep learning&ndash;designed materials. Sci. Adv. 9, eadg1505 (2023).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR18\" id=\"ref-link-section-d83158163e7932\">18<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 19\" title=\"Shen, C.-Y., Li, J., Mengu, D. & Ozcan, A. Multispectral quantitative phase imaging using a diffractive optical network. Adv. Intell. Syst. 
https:\/\/doi.org\/10.1002\/aisy.202300300 (2023).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR19\" id=\"ref-link-section-d83158163e7935\">19<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 29\" title=\"Li, J., Bai, B., Luo, Y. & Ozcan, A. Massively parallel universal linear transformations using a wavelength-multiplexed diffractive optical network. Adv. Photon. 5, 016003 (2023).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR29\" id=\"ref-link-section-d83158163e7938\">29<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Bai, B. et al. Information hiding cameras: optical concealment of object information into ordinary images. Sci. Adv. 10, eadn9420 (2024).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR43\" id=\"ref-link-section-d83158163e7941\">43<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Luo, Y. et al. Design of task-specific optical systems using broadband diffractive neural networks. Light Sci. Appl. 8,112 (2019).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR44\" id=\"ref-link-section-d83158163e7941_1\">44<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Li, J. et al. Spectrally encoded single-pixel machine vision using diffractive networks. Sci. Adv. 7, eabd7690 (2021).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR45\" id=\"ref-link-section-d83158163e7941_2\">45<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Bai, B. et al. 
All-optical image classification through unknown random diffusers using a single-pixel diffractive network. Light Sci. Appl. 12, 69 (2023).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR46\" id=\"ref-link-section-d83158163e7941_3\">46<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 47\" title=\"Mengu, D., Tabassum, A., Jarrahi, M. & Ozcan, A. Snapshot multispectral imaging using a diffractive optical network. Light Sci. Appl. 12, 86 (2023).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR47\" id=\"ref-link-section-d83158163e7944\">47<\/a><\/sup>. Similarly, by incorporating spatial incoherence into the forward model simulations, we can design diffractive processors that function effectively with spatially incoherent illumination<sup><a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 30\" title=\"Rahman, M. S. S., Yang, X., Li, J., Bai, B. & Ozcan, A. Universal linear intensity transformations using spatially incoherent diffractive processors. Light Sci. Appl. 12,195 (2023).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR30\" id=\"ref-link-section-d83158163e7948\">30<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 48\" title=\"Yang, X., Rahman, M. S. S., Bai, B., Li, J. & Ozcan, A. Complex-valued universal linear transformations and image encryption using spatially incoherent diffractive networks. Adv. Photon. Nexus 3, 016010 (2024).\" href=\"https:\/\/www.nature.com\/articles\/s41377-024-01529-8#ref-CR48\" id=\"ref-link-section-d83158163e7951\">48<\/a><\/sup>. 
Without loss of generality, our current study focuses on coherent monochromatic light to establish a foundational understanding of nonlinear encoding strategies in diffractive information processing using linear optical materials by leveraging the precise control that coherent processors offer. Future work could explore the extension of these principles to spatially or temporally incoherent illumination scenarios, further broadening the applicability of diffractive optical processors in practical settings.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Furthermore, many experimental factors, such as fabrication errors and physical misalignments, can affect the performance of diffractive processors during the experimental deployment stage. Investigating the inherent robustness of different nonlinear encoding strategies to such imperfections, as well as their integration with vaccination-based training strategies39 or in situ training methods40, would provide more comprehensive guidance on 
[\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,1635],"tags":[],"class_list":["post-193437","post","type-post","status-publish","format-standard","hentry","category-biotech-medical","category-materials"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/193437","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=193437"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/193437\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=193437"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=193437"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=193437"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
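The article's forward model simulates free-space propagation between diffractive layers with the angular spectrum method, which covers all propagating modes and discards evanescent ones. A minimal NumPy sketch of that propagation step is below; this is an illustration, not the authors' code, and the grid size, pixel pitch, wavelength, and propagation distance in the example are arbitrary placeholder values.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z through free space
    using the angular spectrum method. `field` is an N x N complex array
    sampled at pitch dx; wavelength, dx, and z share the same length unit."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies (1/unit)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # arg < 0 corresponds to evanescent components; they are suppressed,
    # so only the free-space propagating modes are carried forward.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)   # band-limited transfer function
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Illustrative usage with placeholder parameters: propagate a Gaussian beam
# 50 um at a 1 um wavelength on a 0.5 um grid.
x = (np.arange(128) - 64) * 0.5e-6
X, Y = np.meshgrid(x, x, indexing="ij")
beam = np.exp(-(X**2 + Y**2) / (8e-6) ** 2).astype(complex)
out = angular_spectrum_propagate(beam, wavelength=1e-6, dx=0.5e-6, z=50e-6)
```

Because the transfer function has unit modulus over the propagating band, the operation conserves optical power for band-limited fields, which is a quick sanity check on any implementation.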