yogthos in technology
The Grothendieck’s Toposes as the Future Mathematics of AI
https://medwinpublishers.com/PhIJ/the-grothendiecks-toposes-as-the-future-mathematics-of-ai.pdf

The paper argues that we are hitting a wall with current AI because we are obsessed with number crunching instead of structure.
Belabes posits that modern AI is too focused on statistical minimization and processing speed, which reduces everything to collections of numbers that inherently lack meaning. You lose the essence of what you are actually trying to model when you strip away the context to get raw data. The author suggests a pivot to Alexandre Grothendieck's Topos theory, which provides a mathematical framework for understanding geometric forms and preserving the deep structure of data rather than just crunching its statistics.
Topos theory focuses on finding a new kind of space that acts as a bridge between different mathematical objects. Instead of just looking at points in a standard space, a topos allows us to look at the relationships and sheaves of information over that space, effectively letting us transfer invariants from one idea to another. It creates a way to connect things that seem totally unrelated on the surface by identifying their common essence. Belabes links this to the idea of conceptual strata, where something that looks like noise or insignificant data in one layer might actually be critical structure in another. It's a move away from the binary notion of significant versus insignificant data and toward a relativistic view where significance depends on the conceptual layer you are analyzing.
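To make the "sheaves of information over a space" idea a bit more concrete, here is a toy sketch in Python (my own illustration, not something from the paper): data gets assigned to pieces of a space, restriction maps let you zoom in, and compatible local pieces glue into a global whole.

```python
# Toy illustration (not from the paper): sheaf-like data on a two-point space
# {a, b}. A section over an open set assigns a value to each of its points;
# restriction forgets points, and compatible local sections glue together.

from itertools import product

def sections(open_set, values=(0, 1)):
    """All sections over an open set: one value chosen per point."""
    pts = sorted(open_set)
    return [dict(zip(pts, combo)) for combo in product(values, repeat=len(pts))]

def restrict(section, smaller_open):
    """Restriction map: keep only the points inside the smaller open set."""
    return {p: v for p, v in section.items() if p in smaller_open}

def glue(s_u, s_v):
    """Glue sections over {a} and {b} when they agree on the overlap
    (empty here, so they always agree) into one section over {a, b}."""
    overlap = set(s_u) & set(s_v)
    if all(s_u[p] == s_v[p] for p in overlap):
        return {**s_u, **s_v}
    return None  # incompatible local data does not glue

U, V = {"a"}, {"b"}
for s_u in sections(U):
    for s_v in sections(V):
        s = glue(s_u, s_v)
        # Restricting the glued global section recovers the local pieces.
        assert restrict(s, U) == s_u and restrict(s, V) == s_v
        print(s_u, "+", s_v, "->", s)
```

The actual machinery in the paper is vastly more general than this, but the pattern of local data, restriction, and consistent gluing is the core intuition.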
The author uses literary examples like Homer and Dostoevsky to show that authentic meaning often precedes the words used to express it, whereas our current digital systems treat language as a closed loop where words define other words. Current AI essentially simulates discourse without the underlying voice or intent. By adopting a Topos-based approach, we might be able to build systems that respect these layers of meaning and read slowly to extract the actual shape of the information. It is basically a call to stop trying to brute force intelligence with bigger matrices and start modeling the actual geometry of thought.
PM_ME_VINTAGE_30S [he/him] - 2day
The author suggests a pivot to Alexandre Grothendieck's Topos theory, which provides a mathematical framework for understanding geometric forms and preserving the deep structure of data rather than just crunching its statistics.
There's this four-volume book that I've been meaning to finish reading called The Topos of Music by Mazzola. My (extremely loose) understanding is that it basically applies Grothendieck's theory to come up with a topos theory for gestures in music. This 'geometry of concepts' idea is basically the justification for the framework in The Topos of Music.
So it's absolutely not surprising (at least to me) to see topoi pop up in AI.
☆ Yσɠƚԋσʂ ☆ - 2day
Yeah, the whole idea makes intuitive sense to me. Especially considering the fact that our own minds basically evolved to navigate the physical world, which means that a lot of our own fundamental reasoning is based around geometry.
this one might interest you
in particular, the commutative diagrams might disagree with some of what you're saying in contrasting transformer architectures with topos-compatible architectures, but I'm not a philosopher and the preprint is not peer-reviewed