A piece in 36Kr Europe argues that AI is beginning to take on a new role in mathematics. Instead of acting only as a calculator or a tutor, it is increasingly being treated as a research partner. More importantly, the researchers did not keep that workflow private. They openly acknowledged their AI-assisted math proof, and that openness is what makes this feel different (新智元, 2026).
What an AI-assisted math proof could change next
To explain why this matters, the article points to two arXiv preprints. Taken together, they suggest a shift from quiet, informal tool use to something more explicit and documentable in the public record (新智元, 2026).
To start, the first preprint sits in algebraic geometry. The authors study a complicated space of genus 0 maps into a flag variety. Then, rather than leaving the method ambiguous, they add a direct disclosure about the process. Specifically, they say the proof was obtained “in conjunction with Google Gemini and related tools,” while also noting that the paper itself is written by humans aside from clearly marked excerpts (Bryan et al., 2026). Because of that disclosure, what might have been an invisible workflow becomes something other researchers can point to, discuss, and cite. In other words, the AI-assisted math proof is not just implied. It is recorded.
From there, the mathematical claim can be described in plain terms. The result is an identity in the Grothendieck ring of varieties, a standard algebraic "cut-and-paste" bookkeeping system in which spaces are decomposed and recombined. Put simply, the object being studied looks geometrically messy, yet when measured with this invariant, it matches the signature of a much more familiar kind of space (Bryan et al., 2026). That contrast is exactly what makes the Gemini note interesting. After all, the hardest part of research is often the middle: testing patterns, trying intermediate lemmas, and searching for a route that will actually close. If AI is helpful anywhere, it is plausible that it helps most in that exploratory zone, where an AI-assisted math proof can emerge from many failed attempts.
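For orientation, the basic "cut-and-paste" rule can be written down directly. This is a general illustration of how the Grothendieck ring of varieties works, not the specific identity the paper proves:

```latex
% In the Grothendieck ring of varieties K_0(Var_k), each variety X
% contributes a class [X], subject to the scissor relation for any
% closed subvariety Z of X, together with a product rule:
[X] = [Z] + [X \setminus Z],
\qquad
[X] \cdot [Y] = [X \times Y].

% Worked example: cutting the projective line at a point gives
[\mathbb{P}^1] = [\mathrm{pt}] + [\mathbb{A}^1] = 1 + \mathbb{L},
% where \mathbb{L} := [\mathbb{A}^1] is the Lefschetz class.
```

An identity of the kind the paper establishes asserts that two such classes coincide in this ring, even when the underlying spaces look geometrically quite different.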
Next, the second preprint moves into harmonic analysis and probability. Whereas the first paper centers on a geometric identity, this one focuses on inequalities that measure behavior across scales. In particular, it studies a dyadic square function, which is a standard tool for tracking how a function changes as you zoom in and out (Alpay & Ivanisvili, 2025). With that setup in place, the authors prove lower bounds for the simplest possible input: the indicator function of a set. As a result, the statement is easy to describe conceptually even if the technical machinery is not.
At this point, the shape of the bound becomes the key detail. Writing $d_n$ for the dyadic martingale differences of $f$, they define the square function by
$$S^2(f) = \sum_{n \ge 1} d_n^2,$$
and then prove a lower bound of the form
$$\big\| S(\mathbf{1}_A) \big\|_1 \;\ge\; C\,|A|_* \sqrt{\log\!\left(\frac{1}{|A|_*}\right)},$$
where $|A|_* = \min\{|A|,\, 1 - |A|\}$ and $C > 0$ is a universal constant (Alpay & Ivanisvili, 2025). What matters here is not only that a lower bound exists, but that it does not scale purely linearly with the size of the set. Instead, the extra square-root-log factor makes the bound stronger near the extremes, that is, when the set is tiny or occupies nearly everything. Those edge regimes are often where an inequality reveals its sharpest structure, so the technical form of the bound signals that the authors are pushing close to the "limit case" behavior (Alpay & Ivanisvili, 2025).
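The objects in this bound are concrete enough to compute. The sketch below is a minimal numerical illustration, not the paper's proof: it approximates the dyadic square function of the indicator of an interval $A = [0, a) \subset [0,1]$ on a fine grid, and prints $\|S(\mathbf{1}_A)\|_1$ next to the shape of the lower bound (with the unknown constant $C$ omitted). All variable names here are ad hoc choices for the demonstration.

```python
import numpy as np

N = 14          # resolve [0, 1] into 2^N equal grid cells
M = 2 ** N

def dyadic_square_function(f):
    """S(f) = sqrt(sum_{n>=1} d_n^2), where d_n = E_n f - E_{n-1} f
    and E_n averages f over dyadic intervals of length 2^-n."""
    S2 = np.zeros_like(f)
    prev = np.full_like(f, f.mean())              # E_0 f: global average
    for n in range(1, N + 1):
        # average f over each dyadic interval at scale 2^-n
        blocks = f.reshape(2 ** n, -1).mean(axis=1)
        En = np.repeat(blocks, M // 2 ** n)       # expand back to the grid
        S2 += (En - prev) ** 2                    # accumulate d_n^2
        prev = En
    return np.sqrt(S2)

# dyadic choices of a so the indicator is exact on the grid
for a in (0.5, 2 ** -4, 2 ** -10):
    f = (np.arange(M) / M < a).astype(float)      # indicator of A = [0, a)
    lhs = dyadic_square_function(f).mean()        # ||S(1_A)||_1
    Astar = min(a, 1 - a)
    rhs = Astar * np.sqrt(np.log(1 / Astar))      # bound's shape, C omitted
    print(f"a = {a:8.5f}   ||S(1_A)||_1 = {lhs:.5f}   "
          f"|A|_* sqrt(log 1/|A|_*) = {rhs:.5f}")
```

For $a = 1/2$ the computation is exact by hand: the only nonzero difference is $d_1 = \pm 1/2$, so $S \equiv 1/2$ and $\|S(\mathbf{1}_A)\|_1 = 1/2$, comfortably above the bound's shape $\tfrac{1}{2}\sqrt{\log 2} \approx 0.416$. The point of the small-$a$ rows is the regime the inequality targets: the ratio of the two columns stays bounded below as $A$ shrinks.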
So, how do these examples connect back to the bigger question? First, they make it harder to dismiss AI involvement as a hidden convenience. Because at least one set of authors is willing to describe the tool’s role directly, the discussion shifts from rumors about “who used what” to a record that can be evaluated (Bryan et al., 2026). Second, by appearing alongside serious, domain-specific mathematics, the idea of an AI-assisted math proof becomes less of a novelty headline and more of a procedural issue the field must manage.
From that angle, the most likely change is cultural rather than magical. It is not necessary to claim that “AI has surpassed humans,” and it is also unnecessary to predict instant push-button proof factories. Instead, the more grounded possibility is that researchers will increasingly disclose meaningful AI assistance, especially when it shapes the path to the result (新智元, 2026). If that disclosure becomes common, expectations will tighten. Credit practices will need clearer language. Independent checking will need clearer standards. Supporting material may become more important, because the community will want to understand not only what was proved but also how the proof was obtained.
Finally, none of this makes mathematicians optional. A theorem is not just a true statement. The work also includes making the argument checkable, teachable, and connected to what came before. If AI speeds up idea search, then human judgment becomes even more central, because verification and explanation remain the bottleneck. For now, that is precisely why an AI-assisted math proof still depends on human authorship in the parts that matter most (Bryan et al., 2026).
References
Alpay, N., & Ivanisvili, P. (2025, February 22). Lower bounds for dyadic square functions of indicator functions of sets (arXiv:2502.16045). arXiv. https://doi.org/10.48550/arXiv.2502.16045
Bryan, J., Elek, B., Manners, F., Salafatinos, G., & Vakil, R. (2026, January 12). The motivic class of the space of genus 0 maps to the flag variety (arXiv:2601.07222). arXiv. https://doi.org/10.48550/arXiv.2601.07222
新智元. (2026, January 16). Terence Tao was amazed. The mathematical singularity has initially emerged, and AI has presented an original proof beyond human reach for the first time. 36Kr Europe. https://eu.36kr.com/en/p/3641249893518976


