The trouble, I feel, with the overlap between science and philosophy is that most modern practitioners of what we know as "science" only have exposure to one epistemological world view: positivism. Granted, this world view has created a great abundance of valuable and life-improving knowledge for humanity - as Bacon would have described it, it is good for more than disputation (no offense to Ol' Aristotle) - but it does not encompass the entire sphere of knowledge. There is authentic knowledge to be gained from disciplines which are entirely a priori - such as mathematics, or, as some argue, economics.
I really don't think so. Positivism is more a feature of how philosophers of science tried for a while to describe science than it is of actual science. And it seemed to fall out of favor decades ago. Actual science is marked by commonsense, naive realism. It has more in common with how a plumber looks for a leak than with most of the formalisms dreamed up by philosophers.
EDIT: And this is the big one that the article leaves out: the main complaint by scientists against philosophy is that it's so often wrong, especially about science. Many scientists who get curious about philosophy dip into some "philosophy of science" for obvious reasons, and don't recognize there any of the descriptions of the science that they're familiar with. And when philosophers try to apply their general principles to instruct us about how the universe works, they're even more wrong. Heisenberg's Physics and Philosophy is a good introduction to this wrongness. He devotes a good deal of it to showing how many wrong conclusions about physics Kant derived - conclusions Kant thought must be true on the strength of general principles.
I agree. In my experience, scientific researchers were not interested in the philosophy of science or the meta-questions that interested me in philosophy. They would engage in conversation, but I doubt they concerned themselves with the big philosophy-of-science, epistemology, or metaphysics questions.
They were naive realists, and that's ok; it seemed helpful for getting data, interpreting results, and keeping the wheels of science moving forward.
I am a biologist with a love for philosophy of science and found that both Kuhn and Lakatos discussed science in ways that were at least intelligible. Lakatos in particular described a philosophy of science that seemed to match what I have observed in practice.
I agree pretty much, especially about Kuhn, who had the advantage that he was a superb historian of science. His Structure of Scientific Revolutions is most famous, but I think his really great work was The Copernican Revolution.
Limiting themselves to the philosophy of science may be the problem. Far better to start with a history of philosophy in general. Not only does this provide a better introduction to philosophy qua philosophy, it also provides the necessary historical framework, which is essential in understanding how the sciences can, and did, emerge from the philosophic tradition.
Aside from being interesting in its own right, this keeps you from making the mistake of thinking that philosophy is simply a precursor to science, and that it can be reliably judged by how much of it ended up in the realm of empiricism.
A priori knowledge is supposedly independent of experience. I agree with you that we have no evidence of knowledge outside human experience.
But from a scientific perspective, both knowledge and experience arise from the physical properties of the universe; properties that predate humans and biological systems entirely. From that perspective, one could say that all knowledge is a priori.
But whether we say it's all a priori or a posteriori doesn't really matter. The point is that making the distinction privileges the human perspective in a way that we have no scientific evidence to support. The entire exercise begs the question by assuming that human experience is determinative, or distinct in any way from the rest of the universe.
I don't understand. If I say
Define L to be a thing of type J.
Where j is a J, define T j L to be j.
Define T L j to be the same as j.
Where j is a J, define W j to be a J.
And where a and b are both J, define T a (W b) to be the same as T (W a) b.
Define a K to be a J that is either L, or W L, or, where k can be determined to be a K, W k.
I think these definitions are sufficient to show that if a and b are both a K then T a b can be shown to be the same as T b a.
(though this does not necessarily make this addition over the natural numbers, because I did not include the requirement that, if a and b are both a K and are not the same, W a must not be the same as W b)
And while I'm not totally sure about the definition of K (specifically the "whether something can be shown to be a K" part),
I think this would be based just on definitions, without really any axioms?
Although I suppose it is possible that I included an axiom and just wrote "Define" before it.
(tangent: I suppose one could say "define a system with the following axioms" and then make a proof about that system, and claim that it was a proof by definition?)
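(For concreteness, here is a minimal sketch of roughly this in Lean 4. It is my own translation, not a literal transcription of the clauses above: L is the base constructor, W plays the successor role, T is the operation, and I've used the usual recursive form T L b = b and T (W a) b = W (T a b), which the clauses above generate.)

  -- Sketch only: K is the type, L the base element, W the "W j" step,
  -- and T the operation, defined by recursion on its first argument.
  inductive K where
    | L : K
    | W : K → K

  def T : K → K → K
    | .L,   b => b
    | .W a, b => .W (T a b)

  -- "Define T j L to be j": L is also a right identity for T.
  theorem T_L_right (a : K) : T a .L = a := by
    induction a with
    | L => rfl
    | W n ih => simp [T, ih]

  -- T a (W b) = W (T a b); combined with the defining equation, this is
  -- the "T a (W b) is the same as T (W a) b" clause.
  theorem T_W_right (a b : K) : T a (.W b) = .W (T a b) := by
    induction a with
    | L => rfl
    | W n ih => simp [T, ih]

  -- The claim: for any a and b, T a b is the same as T b a.
  theorem T_comm (a b : K) : T a b = T b a := by
    induction a with
    | L => simp [T, T_L_right]
    | W n ih => simp [T, T_W_right, ih]

(One difference worth noting: making K an inductive type gives the injectivity of W for free - distinct a and b yield distinct W a and W b - which is exactly the requirement the parenthetical above says was left out.)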
Maybe I don't really understand what a priori really means (this seems fairly likely) but I don't think definitions are considered a priori?
Definitions can absolutely be a priori. Think of the definition of a triangle, something like "a closed geometrical shape with three sides." There is nothing about observation of the world that will tell you that this is or isn't true. It is true by definition. Then all closed three-sided shapes are triangles, by definition. Then you can start building all sorts of proofs from that, and a few other definitions. You get to the Pythagorean theorem, trigonometry, and off you go, without ever needing anything other than the ideas and definitions.
Care to elaborate? A priori is generally used to mean that it does not rely on experience or observation. So while the words and symbols we use to explain 2+2=4 (i.e. two, plus, four, equals) are learned concepts, the truth and the ability to understand that truth are independent of that experience.
He's getting that argument from good ole Kant and his Critique of Pure Reason from 1781 (http://en.wikipedia.org/wiki/Critique_of_Pure_Reason). The basic argument, so far as I understand it, is that there must exist certain "structures in the brain" that make all our understanding possible. Since these are brain structures, they must be a priori (everyone is born with them). On the side of the aesthetic - the senses - he boils these down to space and time. From our sense of space it is possible for us to reason about geometry, and therefore about much more abstract forms of mathematics, without having recourse to the "outside" world. So we can envision and prove new geometrical results, and our internal brain structures for understanding space ground these proofs in a sort of sense certainty. To go to your example of 2 + 2: nowhere in these symbols is contained the meaning 4, so no amount of analysis (dividing these symbols up, reducing them to more primitive symbols, etc.) will get us to the equality. We have to perform some operation internally, and that operation has to be grounded on something a priori - our sense of space, in which we can envision two things and two more things, perform the count, and arrive at 4, all done outside of experience.
Now there are plenty of philosophers who came later who have tried instead - following in the footsteps of Hume - to give a materialist grounding of these faculties, derived not from the brain as given but from sense experience itself: sense experience somehow constitutes the structures of the brain, and changing external sense experiences can reshape or reconstitute those brain structures. For example, Gilles Deleuze seems to me to do something like this.
If I understand you correctly, you are saying that math isn't invented, it is discovered. I am not sure this is true. I think mathematical systems are models that are not true or untrue, just consistent and more or less useful than other particular models. It is trivially true that if I have two things in my left hand and two things in my right hand and I put them together, there are four things, because this can be mapped onto the identity of things. But very quickly that doesn't scale, and you need non-real concepts to solve more complex problems.

Roman numerals mapped to things (I, II, III), but Arabic numerals were more symbolic and enabled more complex calculations (though still base-10, which is a choice, not reality; some bases are better for certain calculations than others). But again, they still represented things in some sense. Zero was invented - it had to be invented - because it doesn't represent a thing that can be counted; it's the lack of a thing, so it is, I think, fundamentally a concept and not "real." It's still fairly intuitive (despite the fact that it took humanity millennia to invent), but then there are concepts like infinity. And then it turned out that that only covered a countable infinity of "things"; there is also the uncountably infinite.

Then there are imaginary numbers. They can solve real problems, but I don't even know what they are. They seem to be an artifact of math that lets you solve quadratic equations, even though they map onto something that can't even be said to be a "lack" of something, the way zero is. It is a mapping into a completely virtual space. They were invented to solve problems, but they're not real in any material sense.

I'm not a math person, so there is a lot more that I don't know anything about. Maybe I missed the point of your statement, but I wanted to explain what I think: that math isn't reality, it is a human invention that describes reality. You can have things in math that aren't reality but are useful models.
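(To make the quadratic point concrete, here is the standard textbook example - just the ordinary quadratic formula, nothing else assumed. The equation x^2 + 2x + 5 = 0 has no real solutions because its discriminant is negative; defining i by i^2 = -1 is what lets you write the roots down anyway:

  x = (-2 ± sqrt(2^2 - 4*1*5)) / 2
    = (-2 ± sqrt(-16)) / 2
    = -1 ± 2i

and you can check that (-1 + 2i)^2 + 2(-1 + 2i) + 5 = (-3 - 4i) + (-2 + 4i) + 5 = 0.)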
The a priori/a posteriori distinction is essentially: is its truth subject to observation of the world? Most mathematicians will argue that even complex ideas like imaginary numbers are the result of the system of mathematics, and can be arrived at and proved without observing the world. Physics is a posteriori because the math involved is used to describe the world, and the correct formulas and the truth of those formulas cannot be arrived at without observation. Even if someone's work is just performing certain calculations on others' work, somewhere, working backwards, observation is necessary. If you start with imaginary numbers and work backwards to simpler and simpler math, it is purely self-referential. You don't need anything other than the base concepts to prove the truth, and the base concepts are true by definition, not by observation. You don't have to see a triangle to know it has three sides. It has three sides because that is the definition of a triangle.