What Does = Mean? Mathematicians Aren’t Sure – And That Could Be A Problem

In 1912, philosophers Bertrand Russell and Alfred North Whitehead did something they believed nobody had ever done before: they proved that 1 + 1 = 2.

It took them an entire book’s worth of setup, plus more than 80 pages of a second volume, to do it. It was, a reasonable person might conclude, slight overkill. And yet, in all that work, there’s one thing the pair were surprisingly sloppy about: they never defined what “=” means. 
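
For the record, a modern interactive theorem prover makes far shorter work of the same fact. Here's a minimal sketch in Lean 4 (a proof assistant we'll meet again below) – the heavy lifting Russell and Whitehead did by hand is baked into the system's definition of the natural numbers:

```lean
-- 1 + 1 and 2 compute to the same natural number, so the equality
-- is proved by reflexivity; no second volume required.
example : 1 + 1 = 2 := rfl
```

Notice, though, that the “=” in that one-liner is doing quiet work of its own.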

But of course, why would they? Everyone knows what “equals” means; it’s, like, the first thing you learn in preschool math! Except that, just like 1 + 1, the concept of mathematical equality is far from simple or universal – and that’s becoming a big problem.

What does “equals” mean?

Now, don’t get us wrong: your average mathematician understands “=” pretty much the same way you do – albeit with a bit more jargon thrown in.

“The meaning of x = y [is] that x and y are two names of the same identical object,” wrote logician and number theorist John Barkley Rosser in his 1953 textbook Logic for Mathematicians. “We place no restrictions on the nature of the object, so that we shall have equality not only between numbers […] as is common in mathematics, but between sets, or between functions, or indeed between the names of any logical object.”

It’s vague, but workable. The problem comes when we try to explain this to computer scientists – or, even worse, the computers themselves.

“Computer scientists […], many years ago, isolated several different concepts of equality, and [have] a profound understanding of the subject,” wrote Kevin Buzzard, an algebraic number theorist and professor of pure math at Imperial College London, in a recent discussion paper on the concept posted to preprint server arXiv. “The three-character string “2+2”, typed into a computer algebra system, is not equal to the one-character string “4” output by the system, for example; some sort of “processing” has taken place. A computer scientist might say that whilst the numbers 2 + 2 and 4 are equal, the terms are not.”

“Mathematicians on the other hand are extremely good at internalizing the processing and, after a while, ignoring it,” Buzzard continued. “In practice we use the concept of equality rather loosely, relying on some kind of profound intuition rather than the logical framework which some of us believe that we are actually working within.”
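
That value-versus-term distinction is easy to demonstrate. A minimal sketch in Lean 4, with the unevaluated terms modelled – crudely – as character strings:

```lean
-- The *numbers* 2 + 2 and 4 are equal: both sides evaluate to the
-- same value, so reflexivity proves the equality.
example : 2 + 2 = 4 := rfl

-- The *terms*, modelled here as strings, are not equal: "2 + 2" and
-- "4" are different pieces of syntax, and it takes "processing"
-- (evaluation) to get from one to the other.
example : "2 + 2" ≠ "4" := by decide
```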

Equality as an isomorphism

The basic concept of equality goes back… well, probably as far as math itself – but if you ask a modern mathematician to delve a little deeper into what they mean by the word, there’s a decent chance they’ll try to explain it using something called “isomorphisms.”

Coming from the ancient Greek for “equal form”, an isomorphism is, basically, just a way to get from one mathematical structure to another of the same type. There are some stipulations – the map has to preserve the structure in question and be reversible (and therefore bijective), for example – but other than that, they can be surprisingly vibes-based. Not for nothing is this the concept behind the old joke that mathematicians can’t tell the difference between a donut and a coffee cup: the two shapes are topologically isomorphic – homeomorphic, in the jargon – and therefore, in a way, the same thing.
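
Spelled out in code, “a reversible way between two structures” looks like this. A minimal, hand-rolled sketch in Lean 4 – the `Iso` structure and the `Option Unit`/`Bool` example are illustrative inventions, not a standard library API:

```lean
-- A bare-bones isomorphism: a map, an inverse map, and proofs that
-- the two compose to the identity in both directions.
structure Iso (α β : Type) where
  to       : α → β
  inv      : β → α
  leftInv  : ∀ a, inv (to a) = a
  rightInv : ∀ b, to (inv b) = b

-- `Option Unit` ("either nothing, or the one boring value") and `Bool`
-- are different types, but each determines the other exactly.
def optUnitBool : Iso (Option Unit) Bool where
  to x  := x.isSome
  inv b := if b then some () else none
  leftInv a  := by cases a <;> rfl
  rightInv b := by cases b <;> rfl
```

Two isomorphic objects carry exactly the same information – which is what tempts mathematicians to call them “equal”.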

“Isomorphism is equality,” Thorsten Altenkirch, Professor of Computer Science at the University of Nottingham, told New Scientist earlier this month. “I mean, what else? If you cannot distinguish two isomorphic objects, what else would it be? What else would you call this relationship?”

Other uses of “equals” in math are equally fuzzy. Buzzard’s paper, based on a talk he gave at a Mathematical Logic and Philosophy conference in 2022, covers some of the most egregious offenders: for example, in “[Alexander] Grothendieck’s seminal work […] where he and Dieudonné develop the foundations of modern algebraic geometry,” he points out, “the word “canonique” appears hundreds of times […] with no definition ever supplied.”

“Of course we certainly know what Grothendieck means,” he added. But “[interactive theorem prover] Lean would tell Grothendieck that this equality simply isn’t true and would stubbornly point out any place where it was used.”
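
You can watch that stubbornness in miniature with one of the most “canonique” identifications there is: regrouping a product. A hedged sketch in Lean 4 (the name `prodAssoc` is illustrative, not a claim about any particular library):

```lean
-- To a working mathematician, (α × β) × γ "is" α × (β × γ), and Lean
-- happily exhibits the canonical map between them...
def prodAssoc (α β γ : Type) : (α × β) × γ → α × (β × γ) :=
  fun ⟨⟨a, b⟩, c⟩ => ⟨a, b, c⟩

-- ...but ask it to accept the two *types* as equal and it refuses:
-- `example (α β γ : Type) : ((α × β) × γ) = (α × (β × γ)) := rfl`
-- fails to compile, because the types are isomorphic, not identical.
```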

It’s a dichotomy that exemplifies the state of modern math. As ever-purer fields find applications in the real world, mathematicians and computer scientists are leaning more than ever on AI and computers to inform their work. But these machines can’t rely on the appeals to intuition that human mathematicians have grown to accept. “As a mathematician, you somehow know well enough what you’re doing that you don’t worry too much about it,” Chris Birkbeck, a Lecturer in Number Theory at the University of East Anglia, told New Scientist. “Once you have a computer checking everything you say, you can’t really be vague at all, you really have to be very precise.”

The problem of equality

So, what are mathematicians to do? After all, it’s hard to think of a more fundamental problem than not having a definition of “=” in mathematical statements.

Well, there are two potential options. One solution may be to overhaul math itself, perhaps by redefining equality to be the same thing as – one might say “equal to” – canonical isomorphism. But that may just shunt the problem down the line, Buzzard cautioned: “[mathematician] Gordon James told me that he once asked John Conway what the word [canonical] meant, and Conway’s reply was that if you and the person in the office next to yours both write down a map from A to B, and it’s the same map, then this map is canonical,” he wrote. “This might be a good joke, but it is not a good definition.”

Alternatively, the computer systems themselves may need to be brought up to scratch, Buzzard suggests – taught to “fill in [the] holes” that human mathematicians habitually leave. The advantages of this approach are practical, at least, he argues: “It’s very difficult to change mathematicians,” he told New Scientist. “You have to make the computer systems better.”

Either way, it seems math is due for a rethink – and for those interested in the frontiers of computer-aided proofs, the sooner the better. Pinning down a definition of “equals” may seem like a basic problem, but it may end up having profound consequences for our future world – after all, as Russell and Whitehead pointed out after that seminal proof more than 100 years ago, the “proposition [that 1 + 1 = 2] is occasionally useful.”


