1.The P versus NP Problem（P対NP問題）
Suppose that you are organizing housing accommodations for a group of four hundred university students. Space is limited and only one hundred of the students will receive places in the dormitory. To complicate matters, the Dean has provided you with a list of pairs of incompatible students, and requested that no pair from this list appear in your final choice. This is an example of what computer scientists call an NP-problem, since it is easy to check whether a given choice of one hundred students proposed by a coworker is satisfactory (i.e., no pair taken from your coworker's list also appears on the list from the Dean's office); however, the task of generating such a list from scratch seems so hard as to be completely impractical. Indeed, the total number of ways of choosing one hundred students from the four hundred applicants is greater than the number of atoms in the known universe! Thus no future civilization could ever hope to build a supercomputer capable of solving the problem by brute force; that is, by checking every possible combination of 100 students. However, this apparent difficulty may only reflect the lack of ingenuity of your programmer. In fact, one of the outstanding problems in computer science is determining whether questions exist whose answers can be quickly checked, but which require an impossibly long time to solve by any direct procedure. Problems like the one above certainly seem to be of this kind, but so far no one has managed to prove that any of them really are as hard as they appear, i.e., that there really is no feasible way to generate an answer with the help of a computer. Stephen Cook and Leonid Levin formulated the P (i.e., easy to find) versus NP (i.e., easy to check) problem independently in 1971.
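The scale of the brute-force search can be checked directly. The following sketch (an illustration, not part of the original text) uses Python's standard library to compare the number of possible 100-student selections with the commonly cited estimate of roughly 10^80 atoms in the observable universe:

```python
import math

# Number of ways to choose 100 students out of 400 applicants.
choices = math.comb(400, 100)

# A commonly cited order-of-magnitude estimate for the number of
# atoms in the observable universe.
atoms = 10 ** 80

print(len(str(choices)))  # the count has nearly a hundred digits
print(choices > atoms)    # True: checking every combination is hopeless
```

Even examining a trillion combinations per second, all the computers ever built could not enumerate a number of this size.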
It is Saturday evening and you arrive at a big party. Feeling shy, you wonder how many people in the room you already know. Your host proposes that you must certainly know Rose, the lady in the corner next to the dessert tray. In a fraction of a second you are able to cast a glance and verify that your host is correct. However, in the absence of such a suggestion, you are obliged to make a tour of the whole room, checking out each person one by one, to see if there is anyone you recognize. This is an example of the general phenomenon that generating a solution to a problem often takes far longer than verifying that a given solution is correct. Similarly, if someone tells you that the number 13,717,421 can be written as the product of two smaller numbers, you might not know whether to believe him, but if he tells you that it can be factored as 3607 times 3803 then you can easily check that it is true using a hand calculator. The problem of determining whether a question whose answer can be quickly checked can nevertheless take much longer to solve, no matter how clever a program we write, is considered one of the outstanding problems in logic and computer science. It was formulated by Stephen Cook in 1971.
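The asymmetry between checking and finding in the factoring example can be made concrete. Verifying the claimed factorization takes one multiplication, while finding a factor from scratch requires search; the trial-division routine below is a deliberately naive illustration, not a serious factoring algorithm:

```python
# Checking a proposed factorization is a single multiplication ...
n = 13_717_421
p, q = 3607, 3803
print(p * q == n)  # True

# ... but finding a factor from scratch requires search.
def smallest_factor(n):
    """Naive trial division: return the smallest factor > 1 of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

print(smallest_factor(n))  # 3607, found only after ~3600 trial divisions
```

For numbers with hundreds of digits, the verification step stays trivial while no known search procedure finishes in any reasonable time.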
2.The Hodge Conjecture（ホッジ推測）
In the twentieth century mathematicians discovered powerful ways to investigate the shapes of complicated objects. The basic idea is to ask to what extent we can approximate the shape of a given object by gluing together simple geometric building blocks of increasing dimension. This technique turned out to be so useful that it was generalized in many different ways, eventually leading to powerful tools that enabled mathematicians to make great progress in cataloging the variety of objects they encountered in their investigations. Unfortunately, the geometric origins of the procedure became obscured in this generalization. In some sense it was necessary to add pieces that did not have any geometric interpretation. The Hodge conjecture asserts that for particularly nice types of spaces called projective algebraic varieties, the pieces called Hodge cycles are actually (rational linear) combinations of geometric pieces called algebraic cycles.
3.The Poincaré Conjecture（ポアンカレの予想）
If we stretch a rubber band around the surface of an apple, then we can shrink it down to a point by moving it slowly, without tearing it and without allowing it to leave the surface. On the other hand, if we imagine that the same rubber band has somehow been stretched in the appropriate direction around a doughnut, then there is no way of shrinking it to a point without breaking either the rubber band or the doughnut. We say that the surface of the apple is ‘simply connected,’ but that the surface of the doughnut is not. Poincaré, almost a hundred years ago, knew that a two dimensional sphere is essentially characterized by this property of simple connectivity, and asked the corresponding question for the three dimensional sphere (the set of points in four dimensional space at unit distance from the origin). This question turned out to be extraordinarily difficult, and mathematicians have been struggling with it ever since.
4.The Riemann Hypothesis（リーマン仮説）
Some numbers have the special property that they cannot be expressed as the product of two smaller numbers, e.g., 2, 3, 5, 7, etc. Such numbers are called prime numbers, and they play an important role, both in pure mathematics and its applications. The distribution of such prime numbers among all natural numbers does not follow any regular pattern; however, the German mathematician G.F.B. Riemann (1826–1866) observed that the frequency of prime numbers is very closely related to the behavior of an elaborate function ζ(s) called the Riemann Zeta function. The Riemann hypothesis asserts that all interesting solutions of the equation
ζ(s) = 0
lie on a straight line. This has been checked for the first 1,500,000,000 solutions. A proof that it is true for every interesting solution would shed light on many of the mysteries surrounding the distribution of prime numbers.
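The "frequency of prime numbers" mentioned above can be made concrete: by the Prime Number Theorem, the count of primes up to x is approximately x/ln x, and the Riemann hypothesis concerns how accurate such approximations really are. The sketch below (a standard Sieve of Eratosthenes, added here only for illustration) compares the two:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Compare the true prime count with the estimate x / ln(x).
for x in (1_000, 10_000, 100_000):
    print(x, len(primes_up_to(x)), round(x / math.log(x)))
```

The estimate improves in relative terms as x grows; the Riemann hypothesis is equivalent to a precise bound on the size of the remaining error.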
5.Yang-Mills Existence and Mass Gap（ヤン・ミルズ方程式の存在と質量ギャップ）
The laws of quantum physics stand to the world of elementary particles in the way that Newton's laws of classical mechanics stand to the macroscopic world. Almost half a century ago, Yang and Mills introduced a remarkable new framework to describe elementary particles using structures that also occur in geometry. Quantum Yang-Mills theory is now the foundation of most of elementary particle theory, and its predictions have been tested at many experimental laboratories, but its mathematical foundation is still unclear. The successful use of Yang-Mills theory to describe the strong interactions of elementary particles depends on a subtle quantum mechanical property called the "mass gap": the quantum particles have positive masses, even though the classical waves travel at the speed of light. This property has been discovered by physicists from experiment and confirmed by computer simulations, but it still has not been understood from a theoretical point of view. Progress in establishing the existence of the Yang-Mills theory and a mass gap will require the introduction of fundamental new ideas both in physics and in mathematics.
The equations of quantum physics describe the world of elementary particles. Almost fifty years ago, the physicists Yang and Mills discovered a remarkable relationship between geometry and particle physics, embodied in these equations. In so doing, they paved the way to the later combination of the laws for electromagnetic forces with those for the strong and weak ones. The predictions culled from these equations describe particles observed at laboratories around the world, including Brookhaven, Stanford, and CERN. However, the gauge theories of Yang and Mills are not known to have solutions compatible with quantum mechanics, nor to describe the particles observed in nature. Despite this, the “mass gap” hypothesis concerning supposed solutions to the equations is taken for granted by most physicists and provides an explanation of why we do not observe “quarks.” Solving this mathematical problem requires establishing a mathematical proof of this phenomenon.
6.Navier-Stokes Existence and Smoothness（ナビエ・ストークス方程式の存在と滑らかさ）
Waves follow our boat as we meander across the lake, and turbulent air currents follow our flight in a modern jet. These and other fluid phenomena are described by the mathematical equations known by the names of the mathematicians Navier and Stokes. Unlike many problems in quantitative science, it is not known whether solutions to these equations always exist, nor is there any general method for solving them. The solution to this problem entails showing the existence and smoothness of solutions to the Navier-Stokes equations.
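For reference, the equations in question can be written down explicitly. In one standard formulation (not displayed in the original text), the incompressible Navier-Stokes equations for a velocity field u and pressure p read:

```latex
% Incompressible Navier-Stokes equations:
%   u = velocity field, p = pressure, \nu = viscosity, f = external force
\frac{\partial u}{\partial t} + (u \cdot \nabla)\, u
  \;=\; -\nabla p + \nu \, \Delta u + f,
\qquad
\nabla \cdot u \;=\; 0 .
```

The Millennium Problem asks whether, in three dimensions, smooth solutions of this system always exist for all time given smooth initial data.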
7.The Birch and Swinnerton-Dyer Conjecture（バーチとスウィナートンダイヤー推測）
Mathematicians have always been fascinated by the problem of describing all solutions in whole numbers x,y,z to algebraic equations like
x² + y² = z² .
Euclid gave the complete solution for that equation, but for more complicated equations this becomes extremely difficult. Indeed, in 1970 Yu. V. Matiyasevich showed that Hilbert's tenth problem is unsolvable, i.e., there is no general method for determining when such equations have a solution in whole numbers. But in special cases one can hope to say something. When the solutions are the points of an abelian variety, the Birch and Swinnerton-Dyer conjecture asserts that the size of the group of rational points is related to the behavior of an associated zeta function ζ(s) near the point s=1. In particular this amazing conjecture asserts that if ζ(1) is equal to 0, then there are an infinite number of rational points (solutions), and conversely, if ζ(1) is not equal to 0, then there are only a finite number of such points.
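Euclid's complete solution mentioned above can be stated explicitly: every primitive whole-number solution of x² + y² = z² has the form (m² − n², 2mn, m² + n²) for coprime m > n of opposite parity. The sketch below (an illustration, not part of the original text) enumerates such triples:

```python
import math

def primitive_triples(limit):
    """Primitive Pythagorean triples (a, b, c) with c <= limit, using
    Euclid's parameterization a = m^2 - n^2, b = 2mn, c = m^2 + n^2."""
    triples = []
    m = 2
    while m * m + 1 <= limit:
        for n in range(1, m):
            # m, n coprime and of opposite parity gives a primitive triple.
            if (m - n) % 2 == 1 and math.gcd(m, n) == 1:
                c = m * m + n * n
                if c <= limit:
                    triples.append((m * m - n * n, 2 * m * n, c))
        m += 1
    return triples

for a, b, c in primitive_triples(30):
    print(a, b, c, a * a + b * b == c * c)
```

For this single equation the whole solution set is captured by two parameters; the point of the passage above is that no such general recipe can exist for arbitrary equations, and the Birch and Swinnerton-Dyer conjecture addresses an important special case.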