Math history
Sharing of different mathematical elements, stories, and archives of all kinds.
Description
Jacobi amplitude. Elliptic function.
Designer
Made under the direction of Professor Dr Brill by Dr P. Vogel and the mathematics students Th. Kuen and Chr. Wolff.
Design date
1880
Manufacturer / Publisher
Martin Schilling
Manufacture date
First quarter of the 20th century
Place of manufacture
Germany
Dimensions & materials
Height: 19 cm; Width: 35 cm; Depth: 25 cm
Plaster

What is Benford’s Law?
Benford’s law describes the relative frequency distribution of the leading digits of numbers in datasets. Leading digits with smaller values occur more frequently than those with larger values. The law states that approximately 30% of numbers start with a 1, while fewer than 5% start with a 9. According to this law, leading 1s appear about 6.5 times as often as leading 9s! Benford’s law is also known as the First Digit Law.
If the leading digits 1–9 were equally likely, each would occur 11.1% of the time. However, that is not true in many datasets. The graph displays the distribution of leading digits according to Benford’s law.
Analysis of datasets shows that many follow Benford’s law. For example, analysts have found that stock prices, population numbers, death rates, sports statistics, TikTok likes, financial and tax information, and billing amounts often have leading digits that follow this distribution. Below is a table that Benford produced for his 1938 study, which shows the different types of data he evaluated.
While Benford popularized the law in 1938, he didn’t actually discover it. Simon Newcomb first found the distribution in 1881. Hence, some analysts refer to it as the Newcomb-Benford Law.
In this post, learn about Benford’s law, its formula, how it works, and the types of datasets it applies to. Additionally, I’ll work through an example where I assess how well Benford’s law applies to a real dataset. And you’ll learn how to use Excel to assess it yourself!
Uses for Benford’s Law
Benford’s law is an intriguing, counterintuitive distribution, but can you use it for practical purposes?
Analysts have used it extensively to look for fraud and manipulation in financial records, tax returns, applications, and decision-making documents. They compare the distribution of leading digits in these datasets to Benford’s law. When the leading digits don’t follow the distribution, it’s a red flag for fraud in some datasets.
The idea behind why this works is straightforward. When people manipulate numbers, they don’t track the frequencies of their fake leading digits, producing an unnatural distribution of leading digits. In some cases, they might systematically adjust the leading digits to stay below a particular threshold value. For example, if there is a $100,000 limit on a transaction type, fraudsters might start many amounts with a 9, as in $99,000.
Using Benford’s law to find fraud is admissible in local, state, and federal US courts. In the past, it has detected irregularities in Greece’s EU application and investment return data for Ponzi schemes, such as Bernie Madoff’s.
However, there are several important caveats.
When a dataset you expect should follow Benford’s curve does not, it’s only a red flag, not proof of fraud. You’ll still need to send in the auditors and investigators, but at least you can target them more effectively on questionable records.
Furthermore, not all data follow Benford’s law naturally. In those cases, leading digits that follow a different distribution aren’t signs of fraud. Consequently, it’s crucial to know which datasets are appropriate to compare to it—which takes us to the next section.
Benford’s Law Formula
Benford’s law formula is the following:

P(d) = log10(1 + 1/d)

where d = the value of the leading digit, from 1 to 9. The formula calculates the probability for each leading digit.
https://statisticsbyjim.com/probability/benfords-law/
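As a quick check of the formula (and of the “6.5 times” claim above), here is a minimal Python sketch; the sample values are hypothetical stand-ins for a real dataset:

```python
import math
from collections import Counter

def benford_probability(d: int) -> float:
    """Probability that the leading digit equals d, per Benford's law."""
    return math.log10(1 + 1 / d)

def leading_digit(x: float) -> int:
    """First significant digit of a nonzero number."""
    return int(f"{abs(x):e}"[0])  # scientific notation, e.g. '3.14e+02' -> 3

# Hypothetical sample values; a real audit would read e.g. invoice amounts.
data = [13.5, 175.0, 1240.0, 98.2, 310.0, 2.7, 1.9, 46.0, 19.99, 1100.0]
counts = Counter(leading_digit(x) for x in data)

for d in range(1, 10):
    print(f"digit {d}: Benford {benford_probability(d):.1%}, "
          f"observed {counts.get(d, 0) / len(data):.1%}")

# benford_probability(1) / benford_probability(9) is about 6.6, matching
# the claim that leading 1s appear roughly 6.5 times as often as leading 9s.
```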
Note. NBL = Newcomb-Benford law.
Many real-world datasets approximately conform to NBL (Hill, 1998; Nigrini, 2012). For instance, the distances between Earth and known stars (Alexopoulos & Leontsinis, 2014) or exoplanets (Aron, 2013), crime statistics (Hickman & Rice, 2010), the number of daily-recorded religious activities (Mir, 2014), earthquake depths (Sambridge, Tkalcic, & Arroucau, 2011), interventional radiology dose-area product data (Cournane, Sheehy, & Cooke, 2014), financial variables (Clippe & Ausloos, 2012), and internet traffic data (Arshadi & Jahangir, 2014) were all found to conform to NBL. In psychology, NBL has proved relevant to the study of gambling behaviors (Chou, Kong, Teo, Wang, & Zheng, 2009), brain activity recordings (Kreuzer et al., 2014), language (Dehaene & Mehler, 1992; Delahaye & Gauvrit, 2013), and perception (Beeli, Esslen, & Jäncke, 2007).
Although NBL is ubiquitous, not all random variables or datasets conform to it. Scott and Fasli (2001) studied 230 datasets and found that fewer than 13% of them conformed precisely to NBL. Diekmann and Jann (2010), Bonache, Maurice, and Moris (2010), and Lolbert (2008) have likewise warned against overconfidence in NBL. NBL is not, they argue, a universal law but a property that appears in certain specific (albeit numerous) contexts.
The Sensitivity and Specificity of Benford Analysis
Human pseudorandom productions differ from true randomness in many ways (Nickerson, 2002). For instance, participants’ productions show an excess of alternations (Vandierendonck, 2000) or are overly uniform (Falk & Konold, 1997). As a consequence, fabricated data might fit NBL less well than genuine data (Banks & Hill, 1974). Haferkorn (2013) compared algorithm-based and human-based trade orders and concluded that algorithm-based orders approximated NBL better than human-based orders. Hales, Chakravorty, and Sridharan (2009) showed that NBL is effective at detecting fraudulent data in an industrial supply-chain context.
These results support the so-called Benford analysis, which uses a measure of discrepancy from NBL to detect fraudulent or erroneous data (Bolton & Hand, 2002; Kumar, 2013; Nigrini, 2012). It has been used to audit industrial and financial data (Rauch, Göttsche, Brähler, & Kronfeld, 2014; Rauch, Göttsche, & El Mouaaouy, 2013), to gauge the scientific publication process (de Vries & Murk, 2013), to separate natural from computer-generated images (Tong, Yang, & Xie, 2013), and to detect messages hidden in JPEG image files (Andriotis, Oikonomou, & Tryfonas, 2013).
As a rule, Benford analysis focuses on the distribution of the first digit and compares it to the normative logarithmic distribution. However, a more conservative version of Benford’s law states that the numerical values of a variable X should satisfy the following property: Frac(Log(X)) should follow a uniform distribution on [0,1[. Here, Frac(x) stands for x − Floor(x), where Floor(x) is the largest integer less than or equal to x. The logarithmic distribution of the first digit is a mathematical consequence of this version (Raimi, 1976).
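That last step can be made concrete with a small simulation, a sketch rather than anything from the paper: X has leading digit d exactly when Frac(Log(X)) falls in [log10(d), log10(d+1)[, an interval of length log10(1 + 1/d).

```python
import numpy as np

# If Frac(Log(X)) is uniform on [0, 1), the leading digit of X is d exactly
# when Frac(Log(X)) lies in [log10(d), log10(d + 1)), an interval of length
# log10(1 + 1/d) -- which recovers the first-digit law.
rng = np.random.default_rng(0)
frac = rng.uniform(0.0, 1.0, size=100_000)   # simulated Frac(Log(X))
digits = np.floor(10.0 ** frac).astype(int)  # corresponding leading digits

for d in range(1, 10):
    print(f"d = {d}: simulated {np.mean(digits == d):.4f}, "
          f"log10(1 + 1/d) = {np.log10(1 + 1 / d):.4f}")
```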
Hsü (1948), Kubovy (1977), and Hill (1988) provided direct experimental evidence that human-produced data conform poorly to NBL. However, in their experiments, participants were instructed to produce numbers with a given number (four or six) of digits. Specifying such a constraint could well induce participants to attempt to generate uniformly chosen numbers or to use a digit-by-digit strategy (repeatedly picking a random digit). Researchers who study the situations in which NBL appears often conclude that one important empirical condition is that the numerical data cover several orders of magnitude (e.g., Fewster, 2009; for a more detailed mathematical account, see Valadier, 2012). Consequently, any set of numbers that are bound to lie in the thousands scale (four digits) or in the hundreds of thousands scale (six digits) will probably not conform to NBL, whether produced by humans or not. Participants’ failure to produce data conforming to NBL could just be a consequence of the instructions they were given.
Furthermore, these studies were decontextualized. Participants were asked to give either the “first number that came to mind” or a number chosen “at random”, without being told what the numbers were supposed to represent. A “random number with four digits” usually implicitly refers to the uniform distribution (Gauvrit & Morsanyi, 2014)—and the uniform distribution does not conform to NBL. Therefore, the lack of context could prime a non-Benford response, even if participants are, in fact, able to produce series conforming to NBL.
For these reasons, the idea that fabricated numerical data will usually not follow NBL has been questioned. Using a more contextualized design, Diekmann (2007) asked social science students to create plausible regression coefficients with four-digit precision. Note that, contrary to a four-digit integer, which is bound to fall between 1,000 and 9,999 and thus covers only one order of magnitude, these coefficients could run between .0001 and 1, covering four orders of magnitude. Diekmann found that, in this case, the first digit does approximately conform to NBL, and he concluded that researchers should not consider only the first digit as relevant to detecting fraud but should also look beyond it, toward the conservative version of NBL. Using correlation coefficients makes the task meaningful, which may explain why Diekmann’s participants were not bad at producing plausible rs, whereas other samples had suggested that human participants would be unable to mimic NBL.
Another study went even further in formulating meaningful tasks by using the type of data known to exhibit a Benford distribution. Burns (2009) asked participants to guess real-world values, such as the US gross national debt or the peak summer electricity consumption in Melbourne. He found that although participants’ first-digit responses did not perfectly follow a logarithmic law, they conformed to the logarithmic distribution better than to the uniform distribution. Burns concluded that participants are not too bad at producing a distribution that conforms to NBL, provided the task involves the type of real-world data that do follow NBL.
One limitation of Burns’ (2009) study is that it works only at the population level. We cannot know from his data whether a particular individual would succeed in producing a pseudorandom series conforming to NBL, since each participant produced a single value. Nevertheless, his and Diekmann’s studies certainly suggest that using Benford’s law to detect fraud is questionable in general, since humans may be able to produce data conforming to NBL, in which case a Benford test will miss many frauds, lacking sensitivity. As mentioned above, not all random variables or real-world datasets conform to NBL (and when they do, it is generally only in an approximate manner). Because many real-world datasets do not conform to NBL, a Benford test used to detect fraud may have low specificity as well as low sensitivity.
Generalized Benford’s law
Several researchers (e.g., Leemis, Schmeiser, & Evans, 2000) have studied conditions under which a distribution seems more likely to satisfy NBL. Fewster (2009) provided an intuitive explanation of why and when the law applies and concluded that any dataset that smoothly spans several orders of magnitude tends to conform to NBL. Data limited to one or two orders of magnitude would generally not conform to the law.
To pursue further the question of why so much data conforms to NBL, the conservative version of the law may be a better starting point than first-digit analysis alone. Recall that in the conservative version, a random variable X conforms to NBL if Frac(Log(X)) ~ U([0,1[). In an attempt to show that the roots of NBL’s ubiquity should not be sought in the specific properties of the logarithm, Gauvrit and Delahaye (2008, 2009) defined a generalized Benford’s law (GBL) associated with any function f as follows: a random variable X conforms to the GBL associated with function f if Frac(f(X)) ~ U([0,1[). The classical NBL thus appears as the special case of GBL associated with the function Log.
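The definition translates directly into a statistical test. Below is a minimal sketch, not the authors’ code: it assumes positive data, uses a lognormal sample as a hypothetical stand-in, and borrows SciPy’s Kolmogorov-Smirnov test as the uniformity check. The candidate functions are those discussed in the next paragraph.

```python
import numpy as np
from scipy import stats

def gbl_test(x, f):
    """KS test of whether Frac(f(X)) is uniform on [0, 1) (GBL for function f)."""
    frac = f(np.asarray(x, dtype=float)) % 1.0
    return stats.kstest(frac, "uniform")

rng = np.random.default_rng(1)
x = rng.lognormal(mean=4, sigma=2, size=5_000)  # hypothetical stand-in dataset

# Classical NBL is the special case f = log10.
for name, f in [("log10", np.log10),
                ("pi * x^2", lambda v: np.pi * v**2),
                ("sqrt", np.sqrt)]:
    res = gbl_test(x, f)
    print(f"f = {name}: KS = {res.statistic:.3f}, p = {res.pvalue:.3g}")
```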
Testing several mathematical and real-world datasets, Gauvrit and Delahaye (2011) found that several of them fit GBL better than NBL. Of the 12 datasets they studied, six conformed to the classical NBL, while 10 conformed to the GBL with function f(x) = π × x², and nine to the GBL with the square-root function. On the other hand, none conformed to the GBL with the function Log ∘ Log. These findings suggest that a GBL associated with the relevant function, chosen to suit the context, might yield more specific or sensitive tests for detecting fraudulent or erroneous data.
We addressed this question in two studies. In both, each participant produced a whole series of values, allowing us to analyze the resulting distribution at the individual level. In Study 1, we examined three versions of GBL in four different situations in order to compare the sensitivity and specificity of different types of GBL analyses. Study 2 explored the potential effects of varying familiarity with the material and of cognitive effort on participants’ productions.
Concluding Discussion
We performed the first investigation of generalized Benford analysis, the equivalent of classical Benford analysis but based on the broader GBL. Results from Study 1 lent mild support to generalized Benford analysis, including the classical version. They also drew attention to the fact that different types of data yielded different outcomes, suggesting that the best way to detect fraud using a GBL associated with some function f would be either to find the function f that best matches the particular data at hand or to combine different analyses. Although classical Benford analysis was validated in our studies, it occasionally failed to detect human-produced data as efficiently as other generalized Benford analyses.
The present positive results could have been an artifact of our sample’s characteristics: participants, unlike real swindlers, might have put little effort into the task, since the stakes were low, and they were not highly familiar with the material at hand. To rule out the possibility that our results stemmed from such features and that GBL would be inapplicable in real situations, Study 2 aimed at demonstrating that cognitive effort and familiarity with the material have little effect on participants’ responses. The data supported this view, although further studies (including higher levels of cognitive pressure and true experts) would be welcome.
With Benford analysis having become more common in fraud detection, new complementary analyses are needed (Nigrini & Miller, 2009). GBL analysis potentially provides a whole family of such fraud-detection methods, making it more difficult even for informed swindlers who intentionally conform to NBL to remain undetected.
Carsten Nicolai
anti
2004
‘anti’ is a regular geometric form which represents systematic thinking and the interrelationship between mathematics, optics, art and philosophy. in appearance, it is a distorted cube, truncated at the top and bottom to obtain rhombic and triangular faces. the object reacts to the magnetic fields of bodies, enabling interaction with the visitor, while all of its mechanisms are hidden within. derived from the artist albrecht dürer’s engraving ‘melencolia i’ (1514), ‘anti’s’ black, light-absorbent surface and monolith-like crystalline shape confront the viewer, trying both to mask its form and to disguise its function, thereby absorbing information.
https://www.designboom.com/art/carsten-nicolai-anti-reflex/
PP lightweight structure, sound module, theremin module, transducer, amplifier, light-absorbent black paint
300×255×255cm
Installation view: Schirn Kunsthalle, Frankfurt/Main, 2005
Photo: Uwe Walter

The Codex Atlanticus (Atlantic Codex) is a twelve-volume, bound set of drawings and writings by Leonardo da Vinci, the largest such set; its name reflects its atlas-like breadth. It comprises 1,119 leaves dating from 1478 to 1519, covering a great variety of subjects, from flight to weaponry to musical instruments and from mathematics to botany. The codex was assembled in the late 16th century by the sculptor Pompeo Leoni, son of Leone Leoni, who dismembered some of Leonardo's notebooks in the process. It is currently preserved at the Biblioteca Ambrosiana in Milan.
https://www.discoveringdavinci.com/codexes
To explore it
https://codex-atlanticus.ambrosiana.it/#/
Objective: the same number π for perimeter and area?
The number π is defined as the ratio between the perimeter of a circle and its diameter: this ratio is constant and does not depend on the circle chosen. In his treatise Measurement of a Circle, the illustrious scientist Archimedes of Syracuse (287 to 212 BC) shows that the same constant is also involved when relating the area of a disk to the square of its radius.
Below we present a geometric method for understanding the relationship between the area of a disk and the perimeter of a circle.
Method: sharing a pizza
Let's consider a disk-shaped pizza with radius R. Its perimeter is 2×R×π. Let's now justify that its area is expressed using the same constant π, and is π×R×R.
We first cut the pizza in half along a diameter, then divide each half-pizza into a large number of equally sized slices.
Each pizza slice is almost triangular in shape: two of its sides are segments whose length is the radius of the pizza, and the last side is a small arc of a circle. The larger the number of slices, the more triangular the slices become, as the arc of the circle approaches a small straight segment.
We now rearrange the pieces as follows.
The result is a new pizza shape, which almost coincides with a rectangle. The width of this rectangle is the radius R of the initial pizza, and its length is the half-perimeter π×R. Its area is therefore π×R×R = πR². Now, the new shape has kept the same area as the original disk, since we have only moved the pieces around. The area of this disk is therefore also πR².
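The same limiting argument can be checked numerically. Here is a small sketch that replaces each slice by the triangle on its two straight sides and lets the number of slices grow:

```python
import math

R = 1.0  # pizza radius (any value works)

# Inscribe n triangular slices: each has two sides of length R meeting at
# angle 2*pi/n, so its area is (1/2) * R^2 * sin(2*pi/n).  The n slices
# together give (n/2) * R^2 * sin(2*pi/n), which tends to pi * R^2.
for n in [4, 16, 64, 256, 1024]:
    total = (n / 2) * R**2 * math.sin(2 * math.pi / n)
    print(f"n = {n:4d}: slices total {total:.6f}   (pi*R^2 = {math.pi * R**2:.6f})")
```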
We don't know who first came up with the idea for the cut-out shown above, but it goes back at least as far as Leonardo da Vinci, as this detail from the Codex Atlanticus dating from 1513 shows.
Translated from https://sorciersdesalem.math.cnrs.fr/Autour_de_pi/Pizza_Archimede.html

Designer:
Hermann Wiener
Design date:
ca. 1896
Manufacturer / Publisher:
B. G. Teubner-Verlag
Manufacture date:
[First quarter of the 20th century]
Place of manufacture:
Leipzig, Germany
Dimensions & materials:
Height: 65 cm; Width: 39 cm; Depth: 39 cm
Metal

The problem of covering a flat surface (a subset of the Euclidean plane, or the whole plane itself) using some fixed geometric shapes and with no overlaps is probably one of the oldest in mathematics. Such a covering is called a tiling, also known as a tessellation or a mosaic.
https://arxiv.org/pdf/2310.18950
Although it might look strange, mathematics also studies the placement of tiles on a floor! If the floor is an infinite plane, we have a tiling of the plane. A tiling is a way to cover the plane using nonoverlapping copies from a small set of geometric shapes.
In a periodic tessellation, there is a region that repeats itself by translations. Mathematicians discovered that there are exactly 17 different kinds of periodic tessellations (the well-known tessellations that cover the Alhambra walls in Granada contain examples of all 17 structures).
By contrast, the Penrose tessellation is a nonperiodic way to fill the plane, so there is no region that continually repeats itself. In a sense, one could say that the Penrose tessellation is endlessly varied, never repeating itself. To construct it, we use two shapes: darts (dark-colored tiles) and kites (light-colored tiles). The fascinating thing is that the Penrose tessellation is not only non-periodic: there is no way to place these tiles periodically at all. Like the pieces of a jigsaw puzzle, the way darts and kites fit together invariably leads to a non-periodic tessellation of the plane.
https://penrose.dmf.unicatt.it/
To understand what makes Penrose tilings so special, we first need to explore the idea of tiling itself. Usually, when you tile a surface, you use simple shapes like squares or triangles, and the pattern repeats periodically, like a checkerboard or bathroom tiles. These traditional tilings are periodic, meaning that if you slide or shift them by a certain distance, the entire pattern aligns with itself again.
Penrose tilings break this rule. They are non-periodic, meaning that no matter how far you shift the pattern, it never aligns with itself exactly. Yet, despite the lack of periodicity, Penrose tilings maintain a kind of order. It’s as if they are carefully balanced between chaos and predictability: a structure that is endlessly complex but still follows precise mathematical rules.
There are different versions of Penrose tilings, but the most famous involves two shapes: a fat rhombus (a squashed square) and a skinny rhombus (a squashed diamond). Penrose discovered that by using these two tiles, and applying a specific set of rules for how they must fit together, you could cover a plane in an arrangement that never repeats, no matter how far you extend it.
https://medium.com/@prmj2187/the-puzzle-that-never-ends-penrose-tilings-explained-9ca9822690ec
https://e.math.cornell.edu/people/mann/classes/chicago/penrose%20reading.pdf
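A hint of why no periodic arrangement is possible can be seen without drawing anything. Under the standard substitution (deflation) rules for the two rhombi, each fat rhombus decomposes into two fat and one thin, and each thin into one fat and one thin. A minimal sketch tracking only the tile counts shows the fat-to-thin ratio tending to the golden ratio; since that ratio is irrational, no periodic tiling could realize it.

```python
# Penrose rhombus substitution (deflation), counting tiles only:
# fat -> 2 fat + 1 thin, thin -> 1 fat + 1 thin.
fat, thin = 1, 0
for step in range(1, 13):
    fat, thin = 2 * fat + thin, fat + thin
    print(f"step {step:2d}: fat = {fat:4d}, thin = {thin:4d}, "
          f"fat/thin = {fat / thin:.6f}")
# The ratio tends to the golden ratio (1 + sqrt(5)) / 2 = 1.618..., an
# irrational number; a periodic tiling would force a rational tile ratio.
```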
Tilings Encyclopedia
Aperiodic
There is some dispute among experts about how to define “aperiodic”. One possibility is to use it synonymously with nonperiodic. This is somewhat a waste of the term. Others refer to an “aperiodic tiling” as one that is created by an aperiodic set of tiles. This is unsatisfactory, since that is a property of the set of prototiles rather than of the tiling itself. Another definition is: a tiling is aperiodic if its hull contains no periodic tiling. Personally, I like the latter definition (DF). Then a sequence …aaaaabaaaaaaa…. is not aperiodic (since its hull contains the periodic sequence aaaaaaaaaaaaa….), but the Fibonacci sequence is aperiodic.
Nonperiodic
A tiling T is called nonperiodic if from T + x = T it follows that x = 0; in other words, if no translation (other than the trivial one) maps the tiling to itself. In the theory of nonperiodic tilings, the repetitive ones are usually the objects under investigation. The usual constructions for repetitive nonperiodic tilings are substitutions, cut-and-project methods, and matching rules. Another well-studied class of nonperiodic (non-repetitive) tilings is random tilings, which can also be viewed as being generated by matching rules.
https://tilings.math.uni-bielefeld.de/
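As a toy illustration of the substitution construction (and of the Fibonacci sequence mentioned under “Aperiodic”), here is a minimal one-dimensional sketch:

```python
def substitute(word: str) -> str:
    """Fibonacci substitution: a -> ab, b -> a."""
    return "".join("ab" if c == "a" else "a" for c in word)

word = "a"
for _ in range(12):
    word = substitute(word)
print(len(word), word[:40])  # 377-letter prefix of the Fibonacci word

# Naive nonperiodicity check on the prefix: does any shift 0 < x < 50
# map the word onto itself wherever the two copies overlap?
for x in range(1, 50):
    if word[x:] == word[:-x]:
        print("periodic with shift", x)
        break
else:
    print("no shift up to 49 maps the prefix onto itself")
```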
The Galton board, also known as the Galton box or quincunx or bean machine (or incorrectly Dalton board), is a device invented by Francis Galton to demonstrate the central limit theorem, in particular that with sufficient sample size the binomial distribution approximates a normal distribution.
The quincunx, as drawn by Francis Galton
https://en.m.wikipedia.org/wiki/Galton_board
Most people have some awareness of chance and probability. The probability of an event occurring can be assigned a number between 0 and 1; if the probability of occurrence is 1, then the event is certain to happen. If it is zero, it will never occur. Naturally, most events will have a probability somewhere in between. Since a die has six faces, if it is fair, the probability of throwing any particular number is 1/6. If you toss a coin, the probability of a head is ½. What happens if you toss a coin ten times? Will you get five heads and five tails? Suppose you get ten heads; what is the probability of getting a head on the next throw?
Mathematics can be used in an amazingly wide range of ways to predict the probability of an event occurring but, if the actual event is random, like a coin toss, it cannot predict the outcome of the next throw. However, if enough trials are made, the mathematics can be used to give a good estimate of the number of events which occur. So you might not be surprised if you threw four heads out of a total of 10, but you would be very concerned about the quality of your penny if you scored 4,000 heads out of 10,000 throws, even though the proportions are the same! Estimating probability by performing a large number of trials is experimental probability, and the Galton Board is an example of this.
The board consists of a triangular arrangement of pins and a mechanism for introducing balls at the top. When a ball hits a pin, it can continue either to the left or to the right. It then drops down to the next row, where it faces the same two outcomes. At each row, left and right each have probability 0.5, just like tossing a coin. Eventually, having bounced its way down all the rows of pins, the ball drops into a channel where it is collected. The questions students face before the experiment is run are: how many balls are likely to drop into each channel, and what shape will the filled channels form? Students will begin to realise that the number of balls in each channel is a measure of the probability of a ball taking that path. In fact, the probability of a ball ending up in an end channel is exactly the same as that of throwing as many consecutive heads as the ball has undergone bounces!
Mathematically, the shape approximates the binomial distribution, which in turn is approximated by the well-known bell shape of the normal distribution when there are enough rows of pins.
https://www.stem.leeds.ac.uk/wp-content/uploads/2018/09/Galton_board.pdf
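The experiment is easy to mirror in code. A minimal simulation sketch (10 rows and 10,000 balls are arbitrary choices): each ball makes a fair left/right choice at every pin, so its final channel is a Binomial(rows, 1/2) draw, and the channel counts trace out the bell shape.

```python
import random
from collections import Counter
from math import comb

ROWS, BALLS = 10, 10_000  # arbitrary choices for this sketch

# Each ball: count how many of its ROWS bounces go right; that count is
# the channel where the ball lands.
channels = Counter(sum(random.random() < 0.5 for _ in range(ROWS))
                   for _ in range(BALLS))

for k in range(ROWS + 1):
    expected = comb(ROWS, k) * 0.5**ROWS * BALLS  # Binomial(ROWS, 1/2)
    bar = "#" * (channels[k] // 50)
    print(f"channel {k:2d}: {channels[k]:5d} balls (expected {expected:5.0f}) {bar}")
```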
Galton designed it to illustrate his idea of regression to the mean, which he called "reversion to mediocrity" and made part of his eugenicist ideology.
Trigonometry is concerned with measurements of angles about a central point (or of arcs of circles centered at that point) and quantities, geometrical and otherwise, that depend on the sizes of such angles (or the lengths of the corresponding arcs). It is one of those subjects that has become a standard part of the toolbox of every scientist and applied mathematician. Why is it so valuable?
There is a key geometrical feature of the measurement of angles, or arcs that are traced out, about a point in the plane: as we might expect, the sizes of such angles (or the lengths of such arcs) grow as one end of the arc moves counterclockwise around its circle (while the other remains fixed); but when the moving point returns to the place of the fixed point after making a full turn around the circle, it continues to retrace the same path through another turn, and another and another as the angle/arc grows ever-larger. In other words, periodic behavior is at the core of the relationships between angles and arcs and the measurements we associate with them.
As a consequence, wherever the mathematical description of cyclical phenomena is needed, trigonometric functions make appearances in pure mathematics and in applications of mathematics to the sciences. An introduction to trigonometry is a staple of the mathematics curriculum in high schools and colleges, many of whose students later study calculus and other forms of mathematical analysis in which periodic phenomena are explored.
Daniel E. Otero (Xavier University), "Teaching and Learning the Trigonometric Functions through Their Origins," Convergence (March 2020)
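In symbols, the periodicity described above is the statement that a full turn of 2π radians leaves the trigonometric functions unchanged:

```latex
\sin(\theta + 2\pi) = \sin\theta, \qquad
\cos(\theta + 2\pi) = \cos\theta, \qquad
\tan(\theta + \pi) = \tan\theta .
```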
Extract from the diploma of the baccalauréat ès sciences, dated November 7, 1871, obtained with the grade "assez bien" ("quite good") despite a zero in mathematics.
Extract from Henri Poincaré, by Paul Appell, Plon, 1925, p. 19
https://henripoincare.fr/s/biographie/item/16145
In 1871, Henri obtained the baccalauréat ès sciences with the grade "assez bien" but with a 0 in mathematics: he had probably misread the problem and treated "a question other than the one that was asked." Henri then followed the lycée's preparatory courses in elementary mathematics and in special mathematics, where he met the Alsatian Paul Appell, his future colleague at the Académie des sciences. In 1872, he was first in his class, first in the academic competition, and first in the concours général in elementary mathematics. The recollections of Poincaré's sister, Aline Boutroux, give us a fairly precise idea of his character as a young man: "He was extremely level-headed. He never displayed anger, emotion, or passion of any kind; it was his deepest feelings that he hid most carefully. In his judgments of others, he dreaded any exaggeration." That did not, however, prevent him from being tenacious. [...]

[...]
M. Khalkhali
Do you agree for example that Hilbert was not prophetic in his choice of problems and the course of mathematics in the 20th century was not predicted in Hilbert’s questions?
Alain Connes
Some of them played a role, but mathematics does not work like this. Nobody works on problems because they are well known, but mostly because they are interesting and relevant. The goal of the Millennium problems was to attract public attention to mathematics, and in that respect it worked wonderfully.
G. B. Khosrovshahi
Which one of those 7 problems is the most outstanding problem?
Alain Connes
There is always an element of arbitrariness in choosing such problems. There are some problems that everybody agrees upon, like the Riemann hypothesis. But Navier-Stokes? It is a typical nonlinear equation about which we would love to understand a lot more, but it’s very hard to say that deciding, for that equation, between the existence of smooth solutions and breakdown, which is a very difficult mathematical problem in analysis, will be truly relevant for the concrete instances where the equation applies. There is some element of arbitrariness. So nobody can be sure that these are the most important problems, and for some of them it is very difficult even to formulate the question, while it would be clear to everybody what a breakthrough would mean. This is clear for the Yang-Mills problem, for instance, whose “mathematical” formulation is quite difficult.
[...]
Hilbert's problems are 23 problems in mathematics published by German mathematician David Hilbert in 1900. They were all unsolved at the time, and several proved to be very influential for 20th-century mathematics. Hilbert presented ten of the problems (1, 2, 6, 7, 8, 13, 16, 19, 21, and 22) at the Paris conference of the International Congress of Mathematicians, speaking on August 8 at the Sorbonne. The complete list of 23 problems was published later, in English translation in 1902 by Mary Frances Winston Newson in the Bulletin of the American Mathematical Society. Earlier publications (in the original German) appeared in Archiv der Mathematik und Physik.