COMPUTATIONAL MATH – IDeAS SEMINAR
Recurring weekly series · Thursdays, 2:00 – 3:00 PM, 224 Fine Hall
This week's talk:
Speaker: Diana Halikias (NYU)
Title: Operator learning without the adjoint.
Date: Thursday, March 19, 2026
Time: 2:00 PM – 3:00 PM
Room: 224 Fine Hall
Abstract:
There is a mystery at the heart of operator learning: how can one recover a non-self-adjoint operator from data without probing the adjoint? Current practical approaches suggest that one can accurately recover an operator using only data generated by its forward action, without access to the adjoint. Naively, however, it seems essential to sample the action of the adjoint. We partially explain this mystery by proving that, without querying the adjoint, one can approximate a family of non-self-adjoint
infinite-dimensional compact operators via projection onto a Fourier basis. We then apply the result to recovering Green's functions of elliptic partial differential operators and derive an adjoint-free sample complexity bound. While existing theory justifies
low sample complexity in operator learning, ours is the first adjoint-free analysis that attempts to close the gap between theory and practice. We also explore a closely related question in numerical linear algebra: when is access to both forward and transpose
matrix-vector products essential? We discuss the role of transpose access in sketching algorithms for low-rank approximation, least-squares problems, and norm estimation.
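To make the last question concrete (this illustration is not from the talk): the standard randomized sketch for low-rank approximation queries a matrix A through both forward products A @ x and transpose products A.T @ y. A minimal NumPy sketch follows; the function and parameter names are illustrative only.

import numpy as np

def randomized_low_rank(matvec, rmatvec, n_rows, n_cols, rank, oversample=10, seed=None):
    # Rank-`rank` approximation of an n_rows x n_cols matrix A that is accessed
    # only through matvec(x) = A @ x (forward) and rmatvec(y) = A.T @ y (transpose).
    rng = np.random.default_rng(seed)
    k = rank + oversample
    # Forward queries: sketch the column space of A with a random test matrix.
    Omega = rng.standard_normal((n_cols, k))
    Y = np.column_stack([matvec(Omega[:, j]) for j in range(k)])  # Y = A @ Omega
    Q, _ = np.linalg.qr(Y)  # orthonormal basis for the sketched column space
    # Transpose queries: form B = Q.T @ A one column of Q at a time.
    B = np.column_stack([rmatvec(Q[:, j]) for j in range(k)]).T
    # Truncate to the requested rank via an SVD of the small matrix B.
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_small[:, :rank], s[:rank], Vt[:rank, :]  # A ~= U diag(s) Vt

# Toy usage on an explicit low-rank matrix; matvec/rmatvec stand in for black-box access.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 150))
U, s, Vt = randomized_low_rank(lambda x: A @ x, lambda y: A.T @ y, *A.shape, rank=20)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))  # near machine precision, since A has rank 15

Note that the transpose queries are baked into the projection step; removing them is not straightforward, which is one way to see why the adjoint-free recovery described in the abstract is surprising.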
About the speaker:
Diana Halikias is a Courant Instructor in the mathematics department at NYU. Previously, she was a postdoctoral fellow in the semester-long program on Complexity and Linear Algebra at the Simons Institute in Berkeley. She completed her PhD in mathematics at Cornell in 2025, where she was advised by Alex Townsend.
Up next: Thursday, March 26 – Leticia Mattos Da Silva (MIT).