It was impressive even to ask the questions they did. That doesn't mean they always came up with good answers. It's not considered insulting to say that ancient Greek mathematicians were naive in some respects, or at least lacked some concepts that would have made their lives easier. So I hope people will not be too offended if I propose that ancient philosophers were similarly naive.
In particular, they don't seem to have fully grasped what I earlier called the central fact of philosophy: Much to their surprise, they didn't arrive at answers they agreed upon. In fact, they rarely seemed to arrive at answers at all. They were in effect arguing about artifacts induced by sampling at too low a resolution. The proof of how useless some of their answers turned out to be is how little effect they have.
No one after reading Aristotle's Metaphysics does anything differently as a result. No, they may not have to. Hardy's boast that number theory had no use whatsoever wouldn't disqualify it. But he turned out to be mistaken. In fact, it's suspiciously hard to find a field of math that truly has no practical use. And Aristotle's explanation of the ultimate goal of philosophy in Book A of the Metaphysics implies that philosophy should be useful too.
Theoretical Knowledge Aristotle's goal was to find the most general of general principles. The examples he gives are convincing: The trend is clear: But then he makes a mistake—possibly the most important mistake in the history of philosophy. He has noticed that theoretical knowledge is often acquired for its own sake, out of curiosity, rather than for any practical need.
So he proposes there are two kinds of theoretical knowledge: Since people interested in the latter are interested in it for its own sake, it must be more noble. So he sets as his goal in the Metaphysics the exploration of knowledge that has no practical use. Which means no alarms go off when he takes on grand but vaguely understood questions and ends up getting lost in a sea of words. His mistake was to confuse motive and result.
Certainly, people who want a deep understanding of something are often driven by curiosity rather than any practical need. But that doesn't mean what they end up learning is useless. It's very valuable in practice to have a deep understanding of what you're doing; even if you're never called on to solve advanced problems, you can see shortcuts in the solution of simple ones, and your knowledge won't break down in edge cases, as it would if you were relying on formulas you didn't understand.
That's what makes theoretical knowledge prestigious. It's also what causes smart people to be curious about certain things and not others; our DNA is not so disinterested as we might think.
So while ideas don't have to have immediate practical applications to be interesting, the kinds of things we find interesting will surprisingly often turn out to have practical applications. The reason Aristotle didn't get anywhere in the Metaphysics was partly that he set off with contradictory aims: He was like an explorer looking for a territory to the north of him, starting with the assumption that it was located to the south.
And since his work became the map used by generations of future explorers, he sent them off in the wrong direction as well. The Metaphysics is mostly a failed experiment.
A few ideas from it turned out to be worth keeping; the bulk of it has had no effect at all. The Metaphysics is among the least read of all famous books. It's not hard to understand the way Newton's Principia is, but the way a garbled message is. Arguably it's an interesting failed experiment.
But unfortunately that was not the conclusion Aristotle's successors derived from works like the Metaphysics. Instead of version 1s to be superseded, the works of Plato and Aristotle became revered texts to be mastered and discussed.
And so things remained for a shockingly long time. It was not till around in Europe, where the center of gravity had shifted by then that one found people confident enough to treat Aristotle's work as a catalog of mistakes. And even then they rarely said so outright. If it seems surprising that the gap was so long, consider how little progress there was in math between Hellenistic times and the Renaissance.
In the intervening years an unfortunate idea took hold: No one thought to go back and debug Aristotle's motivating argument. And so instead of correcting the problem Aristotle discovered by falling into it—that you can easily get lost if you talk too loosely about very abstract ideas—they continued to fall into it. The Singularity Curiously, however, the works they produced continued to attract new readers.
Traditional philosophy occupies a kind of singularity in this respect. If you write in an unclear way about big ideas, you produce something that seems tantalizingly attractive to inexperienced but intellectually ambitious students.
Till one knows better, it's hard to distinguish something that's hard to understand because the writer was unclear in his own mind from something like a mathematical proof that's hard to understand because the ideas it represents are hard to understand. To someone who hasn't learned the difference, traditional philosophy seems extremely attractive: That was what lured me in as a high school student.
This singularity is even more singular in having its own defense built in. When things are hard to understand, people who suspect they're nonsense generally keep quiet.
There's no way to prove a text is meaningless. The closest you can get is to show that the official judges of some class of texts can't distinguish them from placebos. That alone is fairly damning evidence, considering philosophy's claims.
It's supposed to be about the ultimate truths. Surely all smart people would be interested in it, if it delivered on that promise. Because philosophy's flaws turned away the sort of people who might have corrected them, they tended to be self-perpetuating.
Bertrand Russell wrote in a letter in Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject.
I think Wittgenstein deserves to be famous not for the discovery that most previous philosophy was a waste of time, which judging from the circumstantial evidence must have been made by every smart person who studied a little philosophy and declined to pursue it further, but for how he acted in response.
The field of philosophy is still shaken from the fright Wittgenstein gave it. Since that seems to be allowed, that's what a lot of philosophers do now. Meanwhile, sensing a vacuum in the metaphysical speculation department, the people who used to do literary criticism have been edging Kantward, under new names like "literary theory," "critical theory," and when they're feeling ambitious, plain "theory.
Gender is not like some of the other grammatical modes which express precisely a mode of conception without any reality that corresponds to the conceptual mode, and consequently do not express precisely something in reality by which the intellect could be moved to conceive a thing the way it does, even where that motive is not something in the thing as such.
There's a market for writing that sounds impressive and can't be disproven. There will always be both supply and demand. So if one group abandons this territory, there will always be others ready to occupy it.
A Proposal We may be able to do better. Here's an intriguing possibility. Perhaps we should do what Aristotle meant to do, instead of what he did. The goal he announces in the Metaphysics seems one worth pursuing: But instead of trying to discover them because they're useless, let's try to discover them because they're useful. I propose we try again, but that we use that heretofore despised criterion, applicability, as a guide to keep us from wondering off into a swamp of abstractions.
Instead of trying to answer the question: What are the most general truths? The test of utility I propose is whether we cause people who read what we've written to do anything differently afterward.
Knowing we have to give definite if implicit advice will keep us from straying beyond the resolution of the words we're using. The goal is the same as Aristotle's; we just approach it from a different direction. As an example of a useful, general idea, consider that of the controlled experiment. There's an idea that has turned out to be widely applicable.
Some might say it's part of science, but it's not part of any specific science; it's literally meta-physics in our sense of "meta". The idea of evolution is another.
It turns out to have quite broad applications—for example, in genetic algorithms and even product design. Frankfurt's distinction between lying and bullshitting seems a promising recent example.
Such observations will necessarily be about things that are imprecisely defined. Once you start using words with precise meanings, you're doing math. So starting from utility won't entirely solve the problem I described above—it won't flush out the metaphysical singularity. But it should help.
It gives people with good intentions a new roadmap into abstraction. And they may thereby produce things that make the writing of the people with bad intentions look bad by comparison. One drawback of this approach is that it won't produce the sort of writing that gets you tenure. And not just because it's not currently the fashion. In order to get tenure in any field you must not arrive at conclusions that members of tenure committees can disagree with.
In practice there are two kinds of solutions to this problem. In math and the sciences, you can prove what you're saying, or at any rate adjust your conclusions so you're not claiming anything false "6 of 8 subjects had lower blood pressure after the treatment".
In the humanities you can either avoid drawing any definite conclusions e. The kind of philosophy I'm advocating won't be able to take either of these routes.
At best you'll be able to achieve the essayist's standard of proof, not the mathematician's or the experimentalist's. And yet you won't be able to meet the usefulness test without implying definite and fairly broadly applicable conclusions. Worse still, the usefulness test will tend to produce results that annoy people: Here's the exciting thing, though. Anyone can do this. Getting to general plus useful by starting with useful and cranking up the generality may be unsuitable for junior professors trying to get tenure, but it's better for everyone else, including professors who already have it.
This side of the mountain is a nice gradual slope. You can start by writing things that are useful but very specific, and then gradually make them more general. Joe's has good burritos. What makes a good burrito? What makes good food? What makes anything good? You can take as long as you want. You don't have to get all the way to the top of the mountain. You don't have to tell anyone you're doing philosophy.
If it seems like a daunting task to do philosophy, here's an encouraging thought. The field is a lot younger than it seems. Though the first philosophers in the western tradition lived about years ago, it would be misleading to say the field is years old, because for most of that time the leading practitioners weren't doing much more than writing commentaries on Plato or Aristotle while watching over their shoulders for the next invading army.
In the times when they weren't, philosophy was hopelessly intermingled with religion. Metaphysical naturalism holds that all properties related to consciousness and the mind are reducible to, or supervene upon, nature.
Broadly, the corresponding theological perspective is religious naturalism or spiritual naturalism. More specifically, metaphysical naturalism rejects the supernatural concepts and explanations that are part of many religions. Methodological naturalism concerns itself with methods of learning what nature is. These methods are useful in the evaluation of claims about existence and knowledge and in identifying causal mechanisms responsible for the emergence of physical phenomena.
It attempts to explain and test scientific endeavors, hypotheses, and events with reference to natural causes and events. This second sense of the term "naturalism" seeks to provide a framework within which to conduct the scientific study of the laws of nature. Methodological naturalism is a way of acquiring knowledge.
It is a distinct system of thought concerned with a cognitive approach to reality, and is thus a philosophy of knowledge. Studies by sociologist Elaine Ecklund suggest that religious scientists in practice apply methodological naturalism. They report that their religious beliefs affect the way they think about the implications - often moral - of their work, but not the way they practice science.
In a series of articles and books from onward, Robert T. Pennock wrote using the term "methodological naturalism" to clarify that the scientific method confines itself to natural explanations without assuming the existence or non-existence of the supernatural, and is not based on dogmatic metaphysical naturalism as claimed by creationists and proponents of intelligent design , in particular by Phillip E.
Pennock's testimony as an expert witness  at the Kitzmiller v. Dover Area School District trial was cited by the Judge in his Memorandum Opinion concluding that "Methodological naturalism is a 'ground rule' of science today": Expert testimony reveals that since the scientific revolution of the 16th and 17th centuries, science has been limited to the search for natural causes to explain natural phenomena While supernatural explanations may be important and have merit, they are not part of science.
According to David Kahan of the University of Glasgow , in order to understand how beliefs are warranted, a justification must be found in the context of supernatural theism, as in Plantinga's epistemology.
Plantinga argues that together, naturalism and evolution provide an insurmountable " defeater for the belief that our cognitive faculties are reliable", i.
Take philosophical naturalism to be the belief that there aren't any supernatural entities - no such person as God, for example, but also no other supernatural entities, and nothing at all like God. My claim was that naturalism and contemporary evolutionary theory are at serious odds with one another - and this despite the fact that the latter is ordinarily thought to be one of the main pillars supporting the edifice of the former.
Of course I am not attacking the theory of evolution, or anything in that neighborhood; I am instead attacking the conjunction of naturalism with the view that human beings have evolved in that way. I see no similar problems with the conjunction of theism and the idea that human beings have evolved in the way contemporary evolutionary science suggests.
More particularly, I argued that the conjunction of naturalism with the belief that we human beings have evolved in conformity with current evolutionary doctrine Pennock contends  that as supernatural agents and powers "are above and beyond the natural world and its agents and powers" and "are not constrained by natural laws", only logical impossibilities constrain what a supernatural agent could not do. As the supernatural is necessarily a mystery to us, it can provide no grounds on which to judge scientific models.
But by definition we have no control over supernatural entities or forces. Allowing science to appeal to untestable supernatural powers would make the scientist's task meaningless, undermine the discipline that allows science to make progress, and "would be as profoundly unsatisfying as the ancient Greek playwright's reliance upon the deus ex machina to extract his hero from a difficult predicament.
Naturalism of this sort says nothing about the existence or nonexistence of the supernatural, which by this definition is beyond natural testing. As a practical consideration, the rejection of supernatural explanations would merely be pragmatic, thus it would nonetheless be possible, for an ontological supernaturalist to espouse and practice methodological naturalism.
For example, scientists may believe in God while practicing methodological naturalism in their scientific work. This position does not preclude knowledge that is somehow connected to the supernatural.
Generally however, anything that can be scientifically examined and explained would not be supernatural, simply by definition. Quine describes naturalism as the position that there is no higher tribunal for truth than natural science itself.
In his view, there is no better method than the scientific method for judging the claims of science, and there is neither any need nor any place for a "first philosophy", such as abstract metaphysics or epistemology , that could stand behind and justify science or the scientific method.
Therefore, philosophy should feel free to make use of the findings of scientists in its own pursuit, while also feeling free to offer criticism when those claims are ungrounded, confused, or inconsistent.
In Quine's view, philosophy is "continuous with" science and both are empirical. Instead, it simply holds that science is the best way to explore the processes of the universe and that those processes are what modern science is striving to understand.
However, this Quinean Replacement Naturalism finds relatively few supporters among philosophers. Karl Popper equated naturalism with inductive theory of science. He rejected it based on his general critique of induction see problem of induction , yet acknowledged its utility as means for inventing conjectures. A naturalistic methodology sometimes called an "inductive theory of science" has its value, no doubt I reject the naturalistic view: Its upholders fail to notice that whenever they believe to have discovered a fact, they have only proposed a convention.
Hence the convention is liable to turn into a dogma. This criticism of the naturalistic view applies not only to its criterion of meaning, but also to its idea of science, and consequently to its idea of empirical method. Popper instead proposed that science should adopt a methodology based on falsifiability for demarcation , because no number of experiments can ever prove a theory, but a single experiment can contradict one. Popper holds that scientific theories are characterized by falsifiability.
From Wikipedia, the free encyclopedia. This article is about the term that is used in philosophy. For other uses, see Naturalism disambiguation. History of metaphysical naturalism. Alternatives to natural selection. Retrieved 6 March Naturalism is not so much a special system as a point of view or tendency common to a number of philosophical and religious systems; not so much a well-defined set of positive and negative doctrines as an attitude or spirit pervading and influencing many doctrines.
As the name implies, this tendency consists essentially in looking upon nature as the one original and fundamental source of all that exists, and in attempting to explain everything in terms of nature. Either the limits of nature are also the limits of existing reality, or at least the first cause, if its existence is found necessary , has nothing to do with the working of natural agencies.
All events, therefore, find their adequate explanation within nature itself. But, as the terms nature and natural are themselves used in more than one sense, the term naturalism is also far from having one fixed meaning. Stanford Encyclopedia of Philosophy. Methodological naturalism is the adoption or assumption of naturalism in scientific belief and practice without really believing in naturalism. On the Origins of Methodological Naturalism. The Encyclopedia of Philosophy. Editor Stone , p. And perhaps we need some natural piety concerning the ontological limit question as to why there is anything at all.
But the idea that naturalism is a polemical notion is important". Retrieved 22 December Objective reality exists beyond or outside our self. Any belief that it arises from a real world outside us is actually an assumption. It seems more beneficial to assume that an objective reality exists than to live with solipsism, and so people are quite happy to make this assumption.
In fact we made this assumption unconsciously when we began to learn about the world as infants. The world outside ourselves appears to respond in ways which are consistent with it being real.
The assumption of objectivism is essential if we are to attach the contemporary meanings to our sensations and feelings and make more sense of them. The most obvious components of this comprehensive presupposition are that the physical world exists and that our sense perceptions are generally reliable.
It works the other way around. First, nothing in our incomplete but extensive knowledge of history disagrees with it.
John Locke (—) John Locke was among the most famous philosophers and political theorists of the 17 th century. He is often regarded as the founder of a school of thought known as British Empiricism, and he made foundational contributions to modern theories of limited, liberal government.
September In high school I decided I was going to study philosophy in college. I had several motives, some more honorable than others. One of the less honorable was to shock people.
Diversifying the philosophical canon is more likely to lead to university budget cuts than to enriched minds. Introduction Knowledge. Traditionally, the term "philosophy" referred to any body of knowledge. In this sense, philosophy is closely related to religion, mathematics, natural science, education and politics.
Welcome. The Royal Institute of Philosophy is a charity dedicated to the advancement of philosophy through the organisation and promotion of teaching, discussion and research of all things philosophical. Ayn Rand has inspired individuals with a philosophy of reason, purpose, and self-esteem. See for yourself what Objectivism is all about.