We have been aware of some of those problems for a long time, and we have been creating tools to avoid our mistakes for at least the last few millennia. Deductive logic, mathematical methods, experiments—they were all created and perfected to help us correct our mistakes. Or, more likely, to correct the mistakes of those who disagreed with us. With those tools, we have been learning how to get better answers about how the world is. More recently, we have created probabilistic inductive methods. Those do not provide answers to the question of which idea is correct, but they do allow us to compare ideas and theories and try to estimate which ones are more probable.
In this book, I review some results about our cognition and the tools we have created to understand how we can improve our reasoning. Our argumentation skills seem to work in a motivated fashion, defending our points of view. As a consequence, holding points of view that we assume to be true is counterproductive. Our failures make it clear that we need formal methods of reasoning. We need logic and mathematics because they make it easier to show when an argument is wrong. That need explains the success of mathematics wherever it is used. That success seems surprising only because we compare it to our far more fallible nonmathematical arguments.
The realization that we cannot know any hypothesis about the real world to be right has serious consequences. It has deep impacts on the current crisis of statistical misuse and the problems with null-hypothesis tests. We are observing a serious lack of replicability in many published results. Understanding the relation between that crisis and our desire to believe can help us make better choices about which tools we should use and which ones we should avoid. For similar reasons, the problem of demarcation between scientific and nonscientific ideas makes no sense. We can say an idea is so improbable it must be a bad description of the world, but no idea should be labeled nonscientific. Bayesian methods are sometimes enough to separate the probable from the improbable ideas, but they depend not only on the main theory but also on auxiliary hypotheses. The existence of those auxiliary hypotheses and the role they play is central to answering some of the criticisms of the Bayesian point of view.
I will use the equivalence between Bayesian methods and a Solomonoff induction machine to understand a few questions in the Bayesian framework. While the framework is, from a practical point of view, impossible to use fully, we can still obtain approximations. We will also see that theoretical work acquires a new role. We need a constant influx of new ideas. The cultural relativists got part of the description of the scientific enterprise right, but they missed some aspects that are fundamental. Ideas might start out as equals, but as data arrives, some of them become more probable. Underdetermination exists, but it does not mean we have no way to move forward.
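The core of that Bayesian picture can be illustrated with a minimal sketch. Everything here is hypothetical: two made-up hypotheses about a coin (fair versus biased toward heads), equal priors, and an invented sequence of observations. The point is only to show the mechanism by which ideas that start as equals drift apart as data arrives.

```python
# Minimal sketch of Bayesian comparison of two hypotheses.
# H1: the coin is fair, P(heads) = 0.5; H2: biased, P(heads) = 0.8.
# Both hypotheses, probabilities, and data are illustrative assumptions.

def update(priors, likelihoods):
    """One Bayes step: posterior is proportional to prior times likelihood."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

p_heads = [0.5, 0.8]        # P(heads | H1), P(heads | H2)
posteriors = [0.5, 0.5]     # ideas start out as equals

observations = "HHTHHHHH"   # hypothetical data, mostly heads
for o in observations:
    lik = [p if o == "H" else 1 - p for p in p_heads]
    posteriors = update(posteriors, lik)

print([round(p, 3) for p in posteriors])
```

After seven heads and one tail, the biased-coin hypothesis ends up far more probable than the fair-coin one, even though neither is ever declared certainly right or ruled nonscientific. The auxiliary hypotheses mentioned above would enter through the likelihoods: change how each hypothesis assigns probability to the data, and the posteriors change too.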
The picture that emerges explains why the hard sciences have been so successful. It is interesting to see that physics has been able to make amazing advances despite the naïve epistemological point of view of most physicists. But that naïveté does cause problems in some current debates in physics. Each area of knowledge has its own typical mistakes. Luckily, once we understand the main issues in this book, we can see clearly identifiable paths for improvement.
Knowing seems to be a simple concept. Defining it should take a few conditions and nothing more. We have an opinion, we have a good reason to have that opinion, and we are right. When that is the case, we should be able to say we know we are correct. Indeed, that description makes for a reasonable suggestion for the technical meaning of knowing. That is, maybe knowing something means someone has a justified true belief on the matter (Gettier 1963).
Unfortunately, that definition does not work as well as we might think. We can have a belief that is indeed true, together with a seemingly good reason to hold it. But it may happen that our reason is flawed and our belief was correct just by random chance. In that case, it is hard to say we had real knowledge. Justification turns out to be a difficult condition to define well.
As examples, consider the following scenarios:
1. You remember putting your mobile phone inside your briefcase. From that memory, you believe the phone is there, but you actually forgot you took it out to answer a call and didn’t put it back. Luckily, your partner saw the phone and put it in the briefcase for you. Your belief is true, but can we actually say you knew it when it is only there because you were lucky?
2. Your country went through a period of high inflation and recession some time ago. A new president was elected one year ago, and you believe the new economic policies are correct. You also think the president’s decisions will generate new jobs. When the numbers about employment come out, you see that your belief that more people would leave unemployment turned out to be correct.
3. You are an astronomer in the third century AD, and you use the Ptolemaic system to predict the position of the planets. You make your calculations to see where the planets will be one month from now. The numbers tell you Venus and Jupiter will be very close in the sky a few hours before dawn. Your prediction is later confirmed.
4. You are a child learning about prime numbers. And, for some reason, you believe that all numbers that end in 7 are prime. As a consequence, when the teacher asks you if 37 is a prime number, you answer it is. While the answer is correct, your reasoning was clearly wrong.
In each of those situations, you held a belief that was true, and you had a justification for that belief, but it does not feel right to claim you actually knew what you believed. Of course, feeling right is not a good standard for philosophical discussions. Maybe we should not build arguments based only on what we feel. But, in this case, we are trying to capture what we mean when we say we know. And we can use our feelings about the word to check if a definition matches how we use it. Of course, we may conclude our common usage of knowing is not logically consistent. If that is the case, there might not be a way to define it that is consistent with our usage and also logically solid.
In the scenario with your mobile phone, you had forgotten an important piece of information and only got it right due to luck. The kid in the school scenario is similar, but it is even clearer that the kid had no actual knowledge about prime numbers. The only way an honest grader would mark the question as correct is if the exam did not require justifying the answer.
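The kid's situation can even be checked mechanically. The short sketch below tests the rule "every number ending in 7 is prime" against the first few such numbers; the primality test is a standard trial-division check, not anything from the text.

```python
# Checking the child's rule: "all numbers that end in 7 are prime".

def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for n in [7, 17, 27, 37, 47, 57]:
    print(n, is_prime(n))
```

The rule fails quickly: 27 (3 × 9) and 57 (3 × 19) end in 7 but are not prime. The child's answer about 37 was true, and the child had a justification, but the justification was a bad one that just happened to give the right answer that time.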
The second scenario, about unemployment, has a few extra problems. You might have noticed I did not include a statement about whether your justification was correct. There is a good reason for that. In that case, nobody might actually know the correct cause with certainty. Maybe unemployment is responding to the new policies. Maybe the extra jobs were created because of decisions made by the previous administration that are only influencing the job market now. Maybe it was caused by external factors. If the world economy is booming, the new jobs might have been created from an increase