Saturday, October 3, 2020 – 11:01 ksa
This year's IMO (International Mathematical Olympiad) may be the last in which only "pure humans" compete.
(Pictured: the Chinese squad at IMO 2020)
Next year, an AI will join the battle for the gold medal as a "seeded player."
The AI angling for a spot at the IMO is called Lean, a theorem prover created by researchers at Microsoft.
They are working toward having Lean compete in next year's International Mathematical Olympiad.
In other words, it will race Olympians from all over the world for an IMO gold medal.
Can Lean prove itself at the IMO?
In reality, the IMO is an ideal testing ground for what the Microsoft researchers want AI to achieve.
Microsoft researcher Daniel Selsam is one of the founders of the IMO Grand Challenge. He says the aim of the challenge is to train an AI to win a gold medal at the world's top math competition.
After all, the IMO poses problems that are far from "simple" (though they stop short of advanced mathematics, which would be out of reach), and it gathers the best young mathematical minds from all over the world.
If an AI can prove these theorems the way humans do, that would demonstrate, to some degree, that teaching it to "think like a human" is within reach.
With this idea in mind, Microsoft researchers began building Lean in 2013, aiming to give AI the ability to reason independently and draw deductions from hypotheses.
In other words, Lean is an open-source project that aims to bridge the gap between interactive theorem proving and automated theorem proving.
Automated theorem proving: having a machine prove or disprove a stated mathematical theorem or conjecture on its own. The machine not only carries out deductions from premises but also exercises some judgment.
Interactive theorem proving: completing and checking formal proofs of mathematical theorems with computer assistance, where the machine verifies each step a human supplies.
Lean has gone through three releases, and its fourth version, Lean 4, is still under active development. Its logical foundation is dependent type theory, which is expressive enough to state and prove all standard mathematical theorems.
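To make the two styles concrete, here is a toy illustration in Lean 4 syntax (written for this article, not taken from the Lean distribution; it assumes a recent Lean 4 with the `omega` tactic):

```lean
-- Interactive style: the user directs each step and Lean checks it.
theorem and_swap (p q : Prop) (h : p ∧ q) : q ∧ p := by
  cases h with
  | intro hp hq => exact ⟨hq, hp⟩

-- Automated style: a decision procedure finds the proof on its own
-- (`omega` handles linear arithmetic over naturals and integers).
example (a b : Nat) : a + b = b + a := by
  omega
```

In the first proof the human picks every move; in the second, a single tactic call searches out the proof unaided.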
Even so, it still struggles to formalize, let alone solve, the unfamiliar problems the IMO presents.
At present, Lean is far from ready. Its creator, Leonardo de Moura, said that if it had entered this year's IMO, it would have scored 0 points.
That is because Lean cannot yet grasp which concepts a given problem involves, or what those concepts themselves mean.
The "first step" of a proof is the algorithm's stumbling block
For many people, mathematics is complex and hard to truly understand.
In truth, AI feels the same way.
AI does well on routine engineering problems, because during pre-training the model has already seen an overview of that class of problems.
In other words, what AI can currently do is limited: given the conditions and details up front, it can carry out ever more complicated calculations through continued "questioning."
This is a way of going from "1" to "2," to "3," and so on without end.
Mathematical problems are different in nature. Proving a theorem or a complex inequality means "starting from scratch" altogether.
The first step of a proof is to propose a promising direction. This leap from "0" to "1" is something only the human brain can currently manage.
For most AIs, taking that first step of a proof is the hardest part.
Take one of the simplest and oldest mathematical theorems: around 300 BC, Euclid proved that there are infinitely many prime numbers.
The key to the proof is realizing that if you multiply any finite list of primes together and add 1, the result is divisible by none of them, so a new prime must exist. Once you have this idea, the rest of the proof is straightforward.
But "coming up with that idea" in the first place is extremely difficult for AI.
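Written out, the argument takes only a few lines:

```latex
\textbf{Theorem (Euclid).} There are infinitely many primes.

\textbf{Proof sketch.} Suppose the primes were exactly $p_1, p_2, \dots, p_n$,
and let
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$.
But $N > 1$, so $N$ has some prime factor $q$, and $q$ is a prime not among
$p_1, \dots, p_n$, a contradiction. $\blacksquare$
```

Everything after the choice of $N$ is mechanical; it is choosing $N$ that requires the creative leap.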
Speaking of the IMO: although its official problems do not involve advanced mathematics such as calculus, they require contestants to combine all the mathematics in the secondary-school syllabus with clever ideas to reach a solution.
Take, for example, this problem from IMO 2005:
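The problem itself appears as an image in the original article. The 2005 problem famous for its two-line special-award proof is generally identified as Problem 3 of that year; the statement below is reconstructed from that identification, not from the original image:

```latex
\textbf{IMO 2005, Problem 3.} Let $x, y, z$ be positive real numbers such that
$xyz \ge 1$. Prove that
\[
  \frac{x^5 - x^2}{x^5 + y^2 + z^2}
  + \frac{y^5 - y^2}{y^5 + z^2 + x^2}
  + \frac{z^5 - z^2}{z^5 + x^2 + y^2} \;\ge\; 0 .
\]
```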
At the time, contestants from different countries gave at least three different proofs. The most widely discussed solution used a reduction based on the Cauchy inequality and ran to about half a page of A4 paper.
A contestant from Moldova, however, creatively completed the proof in just two lines:
The first line is the "because" and the second is the "therefore." Its conciseness and precision, even its "crude but effective" quality, stunned the audience.
That ingenious idea won the IMO Special Award that year.
Note that the IMO Special Award does not depend on total score; it is given only to contestants with exceptionally original solutions.
A startling "first step" like this is all but impossible for today's AI.
That may be exactly why Microsoft's researchers set their goal at the gold medal.
So if it cannot yet match the human brain head-on, how does Lean plan to compete?
How does Lean learn mathematics?
Like any AI system, Lean needs to be fed data for training.
At present, Lean not only cannot construct a complete proof of an IMO problem; it cannot even understand the concepts some problems involve.
Therefore, Lean's first task is to learn more mathematics.
The training data comes from Mathlib, Lean's mathematical library, which covers nearly all mathematics through the second year of an undergraduate degree.
However, Mathlib still has gaps in secondary-school mathematics, which the team is working to fill.
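For a sense of what this training material looks like, here is a small Lean 4 snippet using Mathlib (a sketch; the exact import path varies between Mathlib versions):

```lean
import Mathlib.Data.Nat.Prime.Basic

-- Euclid's theorem is already formalized in Mathlib:
-- for every n there is a prime p with n ≤ p.
#check Nat.exists_infinite_primes

-- A user-stated goal, closed simply by appealing to the library lemma.
example (n : ℕ) : ∃ p, n ≤ p ∧ Nat.Prime p :=
  Nat.exists_infinite_primes n
```

Every theorem Mathlib formalizes becomes a building block the prover can reach for instead of rediscovering from scratch.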
Mastering the knowledge is only the first step; knowing how to apply it flexibly is the key.
The team's approach is the same as in chess and Go AIs: search a decision tree until the algorithm finds the best solution.
The key to many IMO problems is finding the right pattern of proof; at bottom, a mathematical proof is a series of very specific, logical steps.
The researchers therefore tried training Lean on fully detailed proofs of IMO problems.
But this method has its limits: each problem's proof is too "special" for the algorithm to generalize from, so the next problem of a different type still cannot be solved.
To address this, the team has mathematicians write detailed formal proofs of past IMO problems, and then distills the different strategies those proofs use.
Lean's task is then to find a "winning" combination among these strategies.
This is far harder than it sounds. The team offered an analogy:
In Go, the goal is to find the best move. In mathematics, the goal is first to find the right game, and then to find the best move within that game.
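The decision-tree idea can be sketched in a few lines of Python (a toy model written for this article, not the team's actual code): integers stand in for proof goals, and a best-first search combines two hypothetical "strategies" until the goal is closed.

```python
import heapq

# Toy sketch: best-first search over "proof states," combining a fixed
# set of strategies until the goal is closed. Here a state is just an
# integer and the goal is to reduce it to 0; a real prover searches over
# formal goals and tactics instead.

STRATEGIES = {
    "halve": lambda s: s // 2 if s % 2 == 0 else None,  # only on even states
    "decrement": lambda s: s - 1 if s > 0 else None,    # always shrinks
}

def find_proof(start, max_expansions=10_000):
    """Return a list of strategy names reducing `start` to 0, or None."""
    frontier = [(start, start, [])]        # (priority, state, path so far)
    seen = {start}
    for _ in range(max_expansions):
        if not frontier:
            return None
        _, state, path = heapq.heappop(frontier)
        if state == 0:                     # goal closed: "proof" found
            return path
        for name, apply_strategy in STRATEGIES.items():
            nxt = apply_strategy(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (nxt, nxt, path + [name]))
    return None
```

For example, `find_proof(8)` discovers the sequence halve, halve, halve, decrement. The hard part the analogy points at is not the search itself but choosing the strategy set, the analogue of choosing which "game" to play.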
The team admits that winning the gold medal next year will still be very difficult, but at least Lean will have a chance to compete.
Some netizens marveled at AI's rapid progress over the years: first chess, then Go, and now a run at an International Olympiad gold medal.
Others are more pessimistic, believing that at this stage AI can only approach human level in certain narrow respects:
"Today's AI algorithms are all built on human cognition... so for a specialized task like proving mathematical theorems, I'm pessimistic. After all, only a small number of people in the world can even contribute to it."
“What is mathematical thinking?”
The question is unexpectedly hard to answer thoroughly. When a mathematician attacks a new problem, what happens in the brain is hard to describe, let alone to implement as an algorithm.
Although some AI teams have taken a step toward deeper mathematical thinking, judging from their strategies, these systems still learn from past ideas and pick the "permutation and combination" with the highest success rate.
For such an algorithm, surpassing human beings in creativity and breakthrough remains a distant goal.
Meanwhile, GPT next door has also produced preliminary results in mathematical proof.
Recently, OpenAI released GPT-f, which applies the generative power of a Transformer language model to automated theorem proving.
Twenty-three short proofs discovered by GPT-f have been accepted into Metamath's main library, the first time proofs found by an AI have been adopted by a formal mathematics community.
GPT really is coming for everyone's jobs; not even mathematicians are spared.
So, Lean or GPT-f: which do you prefer?