Interesting. Let me stick to physics, because that is the field I am most familiar with. If the idea of falsifiability was not well understood before the 60s, then how did physics manage to progress after Newton? Well before the 60s, physicists had discovered mechanics, electrodynamics, special relativity, general relativity, quantum mechanics, statistical mechanics, and so on. There were many other proposed theories, but these were the ones that survived empirical tests; theories that made no testable predictions did not survive. So, based on the history of physics, it seems to me that physicists understood this idea well before Popper came along. Do you agree?
If you agree, then perhaps Popper was responsible for spreading this idea to the social sciences, but he was not the one who introduced it to science.
Now, falsifiability is in fact an outdated notion, and this too was appreciated well before the 60s. As a first example, consider quantum mechanics, which was formulated at the beginning of the 20th century. It postulates that the state of a system is described by a wave function, but only certain aspects of the wave function can be observed. The theory makes many testable predictions, yet not all aspects of it can be tested (and therefore falsified). Popper, like Einstein, argued against the standard interpretation of this theory, but it turns out he was wrong. So it seems to me that not only did physicists understand falsifiability before Popper, they actually understood more than Popper did (or at least some of them did, like Schrödinger).
A second point about falsifiability is that it is quite a naive and unproductive idea, even without the complexities of quantum mechanics. It is true that theories cannot be proven, but it is also true that they generally cannot be falsified either, except in very simple cases. If I have a theory that the sun rises every day, that theory can be falsified by a single counterexample. But most theories rely heavily on statistical measurements, which are inherently uncertain, so we can only assign a probability to whether a theory agrees with the measurements or not. Believing that we have definitively 'falsified' some theory can actually damage the progress of science. In Bayesian terms, each theory can be assigned a probability of being correct given our measurements, but that probability is rarely exactly 0 or 1. This, again, is well understood by physicists, and I don't think it was introduced to physics by philosophers.
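To make the Bayesian point concrete, here is a minimal sketch of comparing two hypothetical theories against one noisy measurement. Everything here is an illustrative assumption (the theory names, predicted values, and noise level are made up), but it shows why the posterior probability of a theory is rarely exactly 0 or 1:

```python
import math

def gaussian_likelihood(measured, predicted, sigma):
    """Likelihood of a measurement given a theory's prediction,
    assuming Gaussian measurement noise with standard deviation sigma."""
    z = (measured - predicted) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Two hypothetical competing theories predict different values of an observable.
prior = {"theory_A": 0.5, "theory_B": 0.5}
prediction = {"theory_A": 1.0, "theory_B": 1.2}

# One noisy measurement: closer to theory A's prediction, but uncertain.
measured, sigma = 1.05, 0.1

# Bayes' rule: posterior is proportional to likelihood times prior, then normalise.
unnorm = {t: gaussian_likelihood(measured, prediction[t], sigma) * prior[t]
          for t in prior}
total = sum(unnorm.values())
posterior = {t: p / total for t, p in unnorm.items()}

print(posterior)
```

The measurement shifts the odds toward theory A, but theory B keeps a nonzero posterior probability: the data make one theory less plausible rather than 'falsifying' it outright.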
People worked during the day and not at night for ages before astronomers came along and described how the sun and the earth relate to each other. So, based on the history of human work, humans understood the solar system well before astronomers came along, otherwise no work would have been achieved. Do you agree?