There are various ways in which dualists have defended, and continue to defend, the idea that the human mind, or some aspect of it, resists wholesale explanation in material terms. They appeal to such things as: the phenomenal character of experience; the irreducibility of the first-person perspective; the simplicity of the self as against the mutable complexity of the physical body; the existence of free will; the nature of intentionality; the existence of morality and moral responsibility; as well as other phenomena that appear to defy any kind of materialist reduction.
At least some of these approaches are, in my view, correct. Others, however, are at least questionable. For instance, an appeal to the survival of the self despite total replacement of bodily parts might lead one to wonder whether we should be dualists about some non-human animals, perhaps the primates. But then we might wonder whether, if dualism about, say, chimpanzees were true, they also had free will and moral responsibility. It is, to say the least, highly doubtful whether this is so. Even more pointedly, the appeal to phenomenology raises the question of whether we should be dualists about all animals. Framing the issue in terms of conscious experience, there is good reason for thinking that even humble protozoans are conscious, unlike any plants (Oderberg 2007, chapter 8). That is to say, even the lower animals have sensory receptors that take in information, interpret it, amplify it where necessary, filter out noise, and communicate it to conspecifics. These are all actions of conscious beings; yet we should not be dualists about microbes. It might be thought that microbes are not conscious because there is nothing it is like—to use the standard way of putting things made popular by Thomas Nagel (1974)—for them to feel what they feel. Yet it is a mistake to think that for a being to feel something it is necessary that there be something it is like for it to feel that thing. A being may be merely conscious in the sense of having no phenomenology associated with its conscious states, at least in the Nagelian sense. A protozoan might sense a noxious environment, and take consequent action to avoid it, without that sensation’s feeling like anything. A higher animal such as a mammal might feel the pain of a pin-prick without that pain’s having any phenomenological character, such as feeling tingly or sharp. It might feel painful, of course: how else should a pain feel? To that extent there is something phenomenal in the animal’s experience. 
But it would be wrong to assume that we are obliged, on conceptual grounds, to say anything more about the character of its sensation.
The point of these examples is to suggest that arguments for dualism can sometimes be blunt instruments; or, to vary the metaphor, they can be a net with too coarse a mesh. I do not propose here to show definitively why we should not bite the bullet of certain dualist arguments and attribute something immaterial to the constitution of non-human animals. All that need be said is that to do so looks like a case of the dualist tail wagging the metaphysical dog. To be a dualist is, first and foremost, to recognize there is something about us human beings that precludes materialistic reduction. Is there something about human beings that requires a dualist explanation such that, whilst that feature may also on investigation turn out to be shared by, say, higher mammals, it is highly unlikely to be possessed by microbes or spiny anteaters? Perhaps appeal to a certain kind of phenomenology can ground an argument against materialism, but what kind it must be is more complicated than the common style of argumentation usually acknowledges. By contrast, an older way of arguing for dualism, based on the Aristotelian tradition, does not invoke anything subjective, first-personal, or phenomenological. Instead, it identifies a feature of human beings more amenable to third-personal investigation—the activity of reason. According to this kind of argument, human beings engage in a kind of activity that resists materialistic reduction, a position that can be established without appeal to anything necessarily subjective or perspectival in what each person knows about themselves. To be sure, investigation and analysis of our subjective states gives access to data about what we do and how we do it that impress the traditional dualist, but the data themselves are not essentially subjective.
I have called the kind of dualism representative of the Aristotelian (and Thomistic) tradition “hylemorphic dualism” (Oderberg 2005), but what follows is not tied to that kind of dualism alone. I will discuss it at various points, but the central argument to be presented should be congenial also to substance dualists of Cartesian or other stripes, and perhaps also to certain kinds of event or property dualists. It is meant as a general argument to the effect that there is something constitutive of the human being, amenable both to third- and first-personal investigation, that requires the postulation of an immaterial element.
The hylemorphic dualist’s argument for an immaterial element in the human being takes its cue from Aristotle’s remark that the intellect has no bodily organ (Loeb 1943, 171).1 The idea is that intellectual activity—the formation of concepts, the making of judgments, and logical reasoning—is an essentially immaterial process. By essentially immaterial is meant that intellectual processes, in the sense just mentioned, are intrinsically independent of matter, this being consistent with their being extrinsically dependent on matter for their normal operation in the human being. Extrinsic dependence, then, is a kind of non-essential dependence. For example, certain kinds of plant depend extrinsically, and so non-essentially, on the presence of soil for their nutrition, since they can also be grown hydroponically. But they depend intrinsically, hence essentially, on the presence of certain nutrients that they normally receive from soil but can receive via other routes. Something similar is true of the human intellect. To engage in concept formation, judgment, and reasoning is to engage in a process not essentially dependent on matter. Nevertheless, the normal operation of the process in the human being—that is, operation in an unattenuated way—extrinsically requires the presence of matter, whereby the intellect operates on sensory information delivered through material channels. I will not discuss extrinsic dependence on matter here, only the essential character of intellectual operation.
The dualist argument under consideration depends on a certain view of concepts, namely that they have an entitative character. The term “entitative” is here introduced with a specific sense distinct from that connoting mere being. In traditional ontology, anything thinkable is a being. Nothing distinguishes concepts in this broad sense from any other thing. On the more restricted reading I am proposing, concepts are entitative in the sense that they are not to be identified with capacities, powers, practices, activities, or any kind of behaviour whether actual or possible. A number of powerful arguments along these lines have been provided by Jerry Fodor (1995; 1998) and I do not intend to rehearse them all here.2 Several issues, however, should be discussed.
In response to the claim that concepts can be reduced to sorting capacities, Fodor rightly points out that all co-extensive concepts would then be identical, unless the capacity included the disposition or ability to sort possible as well as actual instances of the concept. But even in this case, necessarily coextensive concepts such as triangle and trilateral would be identical (Fodor 1995, 8). Hence a standard move is to supplement sorting capacities with dispositions to make certain inferences. Fodor takes the inferences to be ones the concept possessor is disposed to make “from the sorting he performs” (1995, 9), but this cannot be right. The person who sorts the triangles from the non-triangles and the trilaterals from the non-trilaterals will find that all and only the triangles fall into the same “mental basket,” as it were, as the trilaterals. He will not, as Fodor supposes, then be disposed to think of triangles as having three angles, and trilaterals as having three sides, without also being disposed to think of triangles as having three sides and trilaterals as having three angles. Indeed, one would expect that it is only the latter that he would be disposed to infer from the sorting: for the sorting would not reveal that triangles had angles and trilaterals had sides, rather the reverse. It would be of no avail to strengthen the causal connection between the sorting and the inference-making by holding that a triangle collector is disposed to make certain inferences about the objects in his collection precisely because he thinks of the triangles qua triangles. For anyone who thinks of triangles qua triangles is ipso facto disposed to think of them as having sides. Now Fodor supposes, on behalf of the concept reductionist, that the person who collects triangles must accept that the objects in his collection have angles “whether or not he has noticed that they have sides”; mutatis mutandis for the collector of trilaterals (1995, 9). 
Yet this is a wholly unrealistic scenario, at least as far as triangles and trilaterals are concerned: no one, not even a child, who collects triangles can fail to notice that they have sides, and no one who collects trilaterals can fail to notice that they have angles, whether the collector knows the word for angles (viz., “angle”) or not. The scenario would be more plausible for necessarily co-extensive properties of more complex geometrical structures, but we can see that mere sorting capacity, tied to the contingencies of what the sorter might or might not notice, cannot be sufficient for a general account of concept possession.
More important is that one should expect that the ability to think of triangles as having angles and trilaterals as having sides should exist prior to any sorting or capacity to sort. In other words, having the capacity to sort triangles must depend causally upon a prior ability to draw certain inferences about triangles, in particular that they have angles. To sort Fs is to be able to place Fs in a category of their own based on at least some of their characteristics. But then one must already be able to recognize those characteristics before one can place the Fs into the category. And to be able to recognize those characteristics is precisely to be able to think of the Fs as possessing them. Of course, one might be able to recognize angles without being able to think of a given kind of figure as possessing them; but then one would not be able to sort that very kind of figure into a category of its own, whether generically or specifically, on the assumption that having angles was essential to that kind of figure. So having the concept of a triangle cannot consist in an ability to sort and an ability to draw certain inferences based on the sorting, given that the sorting requires the ability to draw certain inferences in the first place. Nor is there any reason to think it consists in the ability to sort and the ability to draw certain inferences not based on the sorting, for what else could the inferences be based on? The obvious answer is propositions understood in virtue of prior possession of the concept of a triangle— yet this is precisely the answer not open to a concept reductionist.
Leaving aside the ability to sort, the reductionist might still insist that possession of the concept of an F consists in the ability to draw inferences that define F-ness. It is not that such a capacity presupposes possession of the concept, but that possession of the concept just is the capacity. Fodor’s objection to this position, which he labels “definitional pragmatism,” is that most concepts do not have definitions. “At a minimum,” he tells us, “to define a concept is to provide necessary and sufficient conditions for something to be in its extension” (1995, 12), but such a requirement cannot be met for most concepts. He goes on to contrast the capacity to define a bachelor with the capacity to define a dog, which latter is not possessed by anyone, at least not if definitions are to be non-circular.
Fodor’s main objection to the definitional approach is that, circularity aside, “being a necessary and sufficient condition for the application of a concept is not a sufficient condition for being a definition of the concept” (1995, 12). One could, for instance, list all and only the dogs and say that this provides necessary and sufficient conditions for being a dog. But we need a list that includes all actual and possible dogs in order to have the right kind of necessary and sufficient conditions, lest we fall into the old problem of co-extensiveness. Yet he fails to note that the equally old problem of necessary co-extensiveness will still be present: we can claim that triangles are defined by being on the list of actual and possible triangles, but they are also on the list of actual and possible trilaterals, so modalized necessary and sufficient conditions will not do. Fodor’s objection, then, reduces to the earlier one against sorting: our conceptual capacities are more fine-grained than capacities to list or to sort by modalized necessary and sufficient conditions.
At a more basic level, Fodor goes wrong in two ways when he discusses definitional pragmatism. His explicit claim is:
(1) Being able to define a concept is being able (at least) to provide necessary and sufficient conditions for its application.
His implicit claim is:
(2) Most concepts do not have definitions because they do not have necessary and sufficient conditions for their application.
At first glance, it is tempting to charge Fodor with confusing these distinct claims. Since (1), even if true, is a claim about our epistemic capacities, nothing follows concerning the metaphysical truth of whether concepts have definitions. Yet he could respond that the general lack of the second capacity in (1) is at least good evidence for the truth of the first half of (2). More significantly, it would be cold comfort for the definitional pragmatist to be told that we might be in a situation in which (2) is false but (1) is true, since if Fodor is right that we cannot provide necessary and sufficient conditions of application for most concepts, it follows that we cannot define most concepts, and for the definitional pragmatist this means that we fail to possess most of the concepts we think we have.
The proper response to Fodor’s claims is simply to deny both of them. To take (2) first, on what grounds can we say that most concepts do not have necessary and sufficient conditions for their application? To take the age-old Aristotelian definition of man, “man is a rational animal” does give necessary and sufficient conditions for being human. (For various issues and complications surrounding this definition, see my 2007, especially chapter 5.) The definition of gold as a metal composed of the element with atomic number 79 also provides necessary and sufficient conditions for being gold. The definition of fish as aquatic vertebrates possessing gills in the mature case, as far as ichthyology goes, also gives the requisite necessary and sufficient conditions, as does the definition of water as a kind of stuff capable of existing in solid, liquid, and gaseous states and whose molecular composition is given by the formula H2O. There is nothing circular about any of these definitions, so it is not clear why Fodor levels the objection he does, since these sorts of example make the case for very many concepts. What about most concepts? That depends on whether there are good arguments for essentialism as an overall metaphysical position, and I submit that there are, though I cannot rehearse them here (see Oderberg 2007). They are based on ontology, taxonomy, and scientific method, on the poverty of anti-essentialist arguments, and on responses to the problem of vagueness, among other considerations. The possibility of defining objects—of real definition—and hence of defining the concepts that apply to them cannot be dismissed by a simplistic appeal to the supposed lack of necessary and sufficient conditions.
More germane to the present discussion is Fodor’s first claim. If (1) is taken at face value, it might well turn out to be false. Most if not all concepts might have definitions in the sense of, at a minimum, necessary and sufficient conditions of application, but can we be so sure we are able to provide them? Without adjudicating the issue here, suppose we cannot. It does not follow that we are unable to provide partial definitions, e.g., by giving necessary but not sufficient conditions, or sufficient but not necessary conditions, or some but not all of the jointly necessary or jointly sufficient conditions. Or we might be able to give only probable necessary and sufficient conditions, or defeasible conditions. Or we might only have necessary and sufficient conditions containing a certain amount of vagueness or approximation. If any of this is true, as it seems to be for many categories of object, it does not imply we cannot give definitions. A probable, vague, incomplete, or approximate definition is not related to a definition proper as a rubber duck is to a duck: it requires supplementation by degrees, not a change of kind, in order to be a correct definition in the sense of being exhaustive, precise, and certain. Moreover, this account of our definitional capacities is just what we should expect—on the assumption that having concepts equates to being able to give definitions—in light of the fact that our possession of concepts, for most concepts, similarly looks to be a matter of degree. An ichthyologist has a fuller grasp of the concept fish than I do. A metallurgist has a more complete grasp of the concept aluminium than you likely do. But I do possess the concept fish and you almost certainly possess the concept aluminium. One can know what a thing is without being able to say everything there is that is true of that thing; and concept possession is compatible with, in some cases, quite meagre knowledge.
Of equal if not greater concern is Fodor’s insistence that “[d]efinitions turn out to contribute vanishingly little to explaining what subjects do in tasks that involve applying concepts to things that fall under them” (1995, 13). They do not, he thinks, play a predictive or explanatory role in cognitive science. Here he is talking not just about necessary and sufficient conditions, but about the very role of essentialism in our cognitive practices. He asserts: “natural kind concepts . . . [are] self-conscious and cultivated intellectual achievements.” We should not use natural kind concepts as “the paradigms on which we should model our accounts of concept acquisition and concept possession.” Moreover: “[I]n the history of science, and in ontogeny, and, for all I know, in phylogeny too, concepts of natural kinds as such only come late. Homer, and children, and animals, have few of them or none.” (All quotations from Fodor 1998, 154, 155.) Yet these assertions contradict copious empirical evidence, of the kind gathered by Susan Gelman, supporting the role of essentialist thinking from the early stages of human psychological development (Gelman 2003). Gelman asserts (at 297):
[T]he available data strongly suggest that by four years of age children treat certain categories as having rich inductive potential; privilege causes over other sorts of properties in determining category membership; invoke nonobvious, internal, or invisible qualities, and consider them more tightly linked to identity than outward properties; treat membership in a kind as stable over outward transformations; appeal to innate potential; and so forth.
That children do not essentialize is, according to Gelman, a misconception; that they do so cannot be explained by any appeal to historical accident or late developments in scientific method and our increasing familiarity with it. Notwithstanding that an essentialist might dispute some of the wording in the above quotation,3 it is clear from cognitive psychology that one cannot simply dismiss, as Fodor does, the explanatory role of essentialist thinking, and the definitional practices that go hand in hand with it.
The problem with definitional pragmatism—the reduction of concepts to capacities to give definitions—is that it makes a mystery of how it is that we even have such capacities in the first place. The worry is a particular case of the general concern over trying to account for concepts in terms of conditions of concept possession (as in Peacocke 1992). Being able to think about Fs in a certain way, to recognize them, classify them, make generalizations about them, and the like, might all be consequences of possessing the concept of an F; but they cannot, singly or jointly, constitute what it is to have that concept. They manifest possession of the concept, but possession of the concept must itself involve some kind of mental operation bearing a relation to the objects to which the concept applies. Does not being able to think of Fs in a certain way, and draw inferences about them, involve a kind of mental operation and not mere capacities? To acquire and possess a concept is actually to do or have done something using one’s intellect, not merely to be able to do something. For Fred to have the concept of gold, he must have performed some kind of operation with his mind, an operation that gives rise to various sorts of recognitional, discriminatory, and inferential—not to mention linguistic—capacities.
Yet doesn’t this approach to concepts involve an unacceptable mentalism? Concepts are supposed to be objective, public, shareable things, however they are ultimately construed. As Peacocke puts it, when speaking of concepts as mental representations, “[i]t is possible for one and the same concept to receive different mental representations in different individuals” (1992, 3). Three interpretations of this claim need to be considered, however, and although none of them undermine the approach to concepts I am defending, Peacocke’s remark does suggest a problem with concepts as understood by representationalists such as Fodor. The first interpretation conflates concepts and conceptions. Different individuals might have, say, different conceptions of space or time, while sharing the same concept of space or time. They might both know what space or time is, at least partially, but attach different connotations to the same thing, attribute different characteristics to it, and so on. It does not imply that concepts cannot be mental entities. (For more on the distinction between concepts and conceptions, see Ezcurdia 1998; Macià 1998.) On the second interpretation, one person might have a more precise or complete grasp of a concept than another, though they both share the concept. Fred might have the concept of gold, and know its atomic composition, whilst Frieda has the concept of gold yet lacks this particular bit of knowledge. Again, nothing here militates against concepts’ being mental entities.
On the third interpretation—not one to which Peacocke himself would subscribe—the claim amounts to no more than holding that different people can define the same thing in different ways. Bob might define an echidna as a mammal whereas Barbara does not. Only one of them will be correct if they really are thinking or talking about one and the same thing, yet it is at least plausible to say that the concept echidna is possessed by both Bob and Barbara, especially if both have been sufficiently exposed to echidnas to be able to do most of the recognitional and discriminatory work that manifests grasp of the concept. It would be even more plausible if, say, Barbara—who is wrong about echidnas not being mammals—based her misdefinition on a more general, and arguable, theory of biological taxonomy. The third interpretation is consistent with Bob’s and Barbara’s both having performed the same, or a sufficiently similar, mental operation in virtue of which they can be truly said to have the concept echidna.
Peacocke’s assertion does, nevertheless, highlight something of the utmost importance about concepts. It must be a non-negotiable feature of any theory of concepts that concepts be public and shareable, a fact long insisted upon by Fregeans. Frege himself, as is well known, identified concepts with the referents of predicates (Frege 1951). The view seems strange, but in one sense Frege is right: thinking purely etymologically, a concept is precisely what is conceived; and what is conceived are things in the world. If I have the concept of redness, I must conceive of redness. Yet what happens, on such a view, to concepts as mental entities? Neo-Fregeans such as Peacocke, who identify concepts with senses or modes of presentation, retain the idea that senses are objective and public, but on this view what we take hold of mentally when we have a concept are abstract objects, not concrete ones. A concrete object presents itself in a certain way, but the conceiver does not take hold of the object, only of the way in which the object presents itself. Representationalists, by contrast, hold that the publicity of concepts is wholly accounted for within their theory by the fact that mental tokens instantiate types. Thus if Charles and Carol both have the concept of redness, the mental particulars each has within their minds/brains instantiate the single representation-type redness. Margolis and Laurence go as far as to say that whether we call concepts particulars or universals is a matter of mere terminology; since the representation tokens and their types both exist, we can focus on the tokens in constructing our theory of concept acquisition and possession (Margolis and Laurence 1999, 7, 76–77).
If anything, though, the idea that the publicity of concepts as mental particulars is underwritten by their instantiating the same types fares less well than the neo-Fregean insistence that concepts are abstract entities that concept possessors grasp. In the latter case, the problem is that concept possession does not involve laying hold of the thing to which the concept applies; rather, it involves a mediated grasp of something abstract to which the thing is related in an appropriate way (e.g., via a mode of presentation). Even if we say, as plausibly we should, that the things to which the concepts are related are themselves abstract—kinds and properties (and their combinations), which in the terminology of Lowe (2006) are called substantial and non-substantial universals—grasp of these things is still mediated, on the neo-Fregean view, by the grasp of something else abstract but not identical to them. At least the neo-Fregean might have a story to tell concerning the indirect grasp of things in the world by means of senses. But to say that concepts as particulars are shareable merely in virtue of their instantiating types is to do away with the notion of grasping a concept altogether. It is wrong to say, for instance, that we grasp feelings or sensations. Yet particular feelings or sensations also instantiate types, and so we correctly say that two people can have the same pain or the same colour experience. Moreover, those particulars are caused by things or events in the world. So too the concept dog is a type whose tokens are particulars, in individuals possessing the concept, such that the particulars are caused by things in the world, namely dogs. However complicated the causal story, the basic structure is the same. Yet why then do we not grasp feelings or sensations in the way that we grasp concepts? 
It seems that the type-token distinction does not do the work required to show both what it is about concepts that makes them graspable, and what it is that we do when we grasp concepts.
We need, I submit, a different approach to concept acquisition and possession. The approach must respect the idea that when we grasp a concept we lay hold directly of the thing to which it applies. In other words, concept acquisition and possession must involve direct engagement with the world, and it must require some kind of action on our part with respect to the world with which we engage. But concepts must also be the matter of reason: they must be the things upon which our rational faculty operates in judgment and inference. The details of such an approach must await another occasion, but in outline it invokes the idea of abstraction from the extra-mental world. In order to acquire the concept of triangularity, say, a person must either lay hold directly of a universal present in particular instances or else, by means little understood at present, receive that universal by communication from a person who has already done so. In either case, the universal—a real thing in the world existing as multiplied in its instances—must literally enter the mind of a person for them to acquire and possess the concept. In more traditional parlance, what is so abstracted is a form, be it substantial or characterizing, or some combination of these. Forms are essentially common, hence there can be no concept of individuals as such. I have the concept tiger because I have, say, been exposed to sufficiently many instances of individual tigers (the picture is of course more complex than this). But I can have no concept of Tigger the tiger. I can have the concept of a human being, but no concept of Socrates. It is not that the concept of Socrates, or Socrateity if you will, is forever beyond my grasp; rather, there is no concept of Socrates.
What about individual essences? Suppose Socrates has a haecceity, a primitive thisness true of him. Would this not be the matter of an individual concept of Socrates? I do not see how it would. If there are haecceities (itself doubtful), then I can have the concept of haecceity as a generic property whose instances are unshareable, and hence conceive of it as applying generically to Socrates; but it does not follow that I have a concept of Socrates. In this sense, an individual concept would not be the concept of an individual. I could think of Socrates as being this individual, or as being identical to Socrates; but this does not mean I have a concept of him. Concepts are tied to understanding: to have a concept is to understand something. But I do not understand Socrates (except in certain figurative or otherwise irrelevant psychological contexts). Again, I understand what a human being is, and so understand that Socrates is a human being. Hence I have the concept of a human being, and apply it to Socrates. Part of what it is to have such a concept is to see how it does or might apply to certain individuals. But it does not follow that I thereby understand the individual except inasmuch as I understand what the individual is; but no individual concepts are involved in that. Similarly, I understand and can evaluate various propositions in which Socrates figures—that he is short, or snub-nosed. But again all I am doing is applying various concepts to an individual, and thinking of him as falling under those concepts; but thinking of an individual and having a concept of that individual are not the same, just as acquaintance with an individual and being able to reidentify an individual do not imply having a concept of the individual.
Again, suppose there were individual essences in a thicker sense, involving bundles of properties. Could I not then have a concept of an individual (as opposed to the concept individuality, which we all have and which does not depend on there being individual essences)? Well, if Socrates possessed a unique bundle of tropes (modes in traditional terminology or property instances in modern parlance) then the question would merely be pushed one stage back: could I have the concept of this particular trope, or of this unique trope bundle? Similar considerations to those just raised would reemerge. Or else, if Socrates were identical to a trope bundle then I could know that Socrates is this trope bundle; but then I would simply know that a certain singular identity statement was true, and this would no more involve my having the concept either of Socrates or of the particular bundle than my knowing that Hesperus is Phosphorus involves my having the concept of Hesperus or the concept of Phosphorus (as opposed to knowing that Hesperus/Phosphorus falls under the (schematic) concept being a planet with such-and-such characteristics). Suppose Socrates possessed a unique set of properties (rather than tropes); then I would have the concept of those properties and of Socrates’s falling under them, but would not thereby have the concept of Socrates. And if he were identical to a bundle of properties, I would again merely know the truth of a singular identity statement. If, on the other hand, some or all of the properties were unshareable, the case for an individual concept would look stronger: for Socrates would not be identical with, or possess, a bundle of individual properties. The properties would not be individual since they would not be tropes. But they would not be shareable either. 
The sorts of property one might appeal to are likely to be relational, in particular spatiotemporally relational: having such-and-such an origin, being born at such-and-such a time and place, and so on. Would properties answering in the right way to such schematic descriptions be unshareable? Yes, but not in the sense required for me to have an individual concept of Socrates. Properties concerning origin might not be shareable in this world: for example, if Socrates came into existence at a particular time t1 and place s1, then no other person in this world can have that property.4 But such a property is shareable across worlds: Socrates has it in the actual world, but it is logically possible for another person to have it. Since the property is shareable in this modal sense, what I conceive when I conceive that Socrates is or possesses the relevant property bundle is that Socrates falls under the concepts defining the bundle (in the case of possession) or I simply conceive of the bundle, and think of it as Socrates (in the case of identity). In neither case do I have an individual concept. Rather, I have a concept or concepts of wholly shareable entities.
The moral of the story concerning individual concepts is that since there are none, we have no reason to depart from the view that concepts concern entities shareable by the things that fall under them. They are, then, doubly shareable, since they can also be shared by the individuals who acquire and possess them. They are not senses or modes of presentation, since objects fall under concepts but they do not fall under senses or modes of presentation. Objects present themselves in various ways, and from these we derive conceptions of them; this is what modes of presentation most plausibly involve. If senses are just modes of presentation, the same consideration applies to them. If they are something distinct—a purely semantic entity of some sort—then a similar consideration applies, namely that objects do not literally fall under purely semantic entities. Objects fall under extra-mental entities that in some way correspond to semantic entities; but what could “correspond” mean? If it means only correlation, suitably fleshed out, then we are left with the position that if concepts are purely semantic entities that merely correspond to non-semantic entities outside the mind, then there is a conceptual veil between us and the world: we can have our concepts, but no objects literally fall under them. Concepts end up as Lockean ideas in a contemporary guise, forever mediating between us and extra-mental reality.5 We can never lay hold of an object, only of a sense to which the object in some way corresponds. But doesn’t this simply mean that Frege was right to regard concepts as universals? Not quite, since concepts as universals cannot be in the objects that are characterized by them, whereas for Frege they are: they are the denotata of predicate terms, and those denotata are the properties that characterize individuals. Yet universals, to be conceivable, must in some way be in the mind. 
This is no mere metaphor: concepts are the matter of judgment and reasoning, and though we can make judgments and reason about universals in rebus, we cannot perform acts of judgment and reasoning upon universals in rebus. We make judgments and inferences upon the things within our mind. So if we are to be able to lay hold of extra-mental reality directly, and at the same time be capable of acting mentally upon extra-mental reality, that reality has in some way to get into our minds. What gets into our minds, I submit, are precisely the universals—the forms—that obtain in reality, but only those universals as abstracted from the reality in which they obtain. These, then, are the concepts. They are best thought of not as abstract entities, but as abstracted entities (“extracted” has a nice connotation here as well)—entities the mind lays hold of directly and immediately.6
A problem arises for the view that the mind, considered just as the brain (or some other physical entity or system), acts in a purely material way. For if the abstracted forms—the concepts—are literally in the mind so conceived, we should be able to find them, just as we are able to find universals in rebus by finding the things that instantiate them or the things of which they are true. We can find triangularity by finding the triangles, and redness by finding the red things. But we cannot find the concepts of either of these by looking inside the brain. Nothing in the brain instantiates redness or triangularity; when Fred acquires the concept of either, nothing in his brain becomes red or triangular. Yet these concepts must be in his mind, and if the mind just is the brain they should be in his brain; yet they are not. It seems that concepts are not located anywhere in the brain. But they must be in the mind since concepts are precisely what the mind acts upon in order to make judgments and inferences. Concepts are the matter of intellectual operation, but they do not seem to be materially located, whether in the brain or any other part of the person. This is what I call the “storage problem.”
Moreover, it does not appear that all we need is a more sophisticated neuroscience or neuropsychology in order to find the concepts in our brains. For there is what might be called an ontological mismatch between concepts and any putative material locus for them, in other words between the proper objects of intellectual activity and any kind of potential physical embodiment of them. For a start, concepts and what they constitute—propositions and arguments—are abstract, whereas potential material loci for them are concrete. This is not a point about instantiation. The instantiation of the abstract by the concrete is a commonplace (however difficult it is to understand) and reveals nothing special about the human mind. Human beings, as said earlier, do not instantiate the concepts that get into their minds; they acquire, possess, and store them. It is no mere façon de parler to speak of a person’s stock of concepts, for such entities are indeed stored, in some way, in the mind. The problem of ontological mismatch, by contrast, is the problem of how an abstract thing such as a concept, with all its sui generis properties, could ever be stored or located in a concrete object such as a brain. Again, concepts are unextended; brains are extended. Here the idea is that concepts are not even categorially capable of location in a brain due to lack of extension. The lack on the part of concepts is not a mere privation, such as when a concept happens not to have a possessor, but an intrinsic incapability of possessing something—material location—by analogy with, say, the case of a number’s not being red. Looked at this way, it might be thought straight nonsense to claim that a concept is either extended or unextended. But this supports my point equally well, since it does make sense—and is true—to say that a brain is extended, thus preserving the idea of ontological mismatch. Further, concepts are universal, whereas material loci are particular. 
So the problem is how anything that is abstract, unextended, and universal could be embodied, located, or stored in anything concrete, extended, and particular. Concepts have an ontological profile that reveals them not to be the kind of thing that could be stored in anything physical. But they are stored in the mind. Therefore, the mind cannot be anything physical, whatever dependence it may have on physical things such as parts of bodies for certain sorts of non-intellectual operation such as sensation. (I will return to sensation later.)
To complicate matters for the materialist a little more, consider just those concepts that are not only abstract, unextended, and universal, but also semantically simple, that is to say, incapable of further analysis into conceptual constituents. Suppose, per impossibile as I have argued, that the materialist could overcome the problem concerning ontological mismatch between the features of concepts just mentioned and those of potential material loci. Suppose she held that a semantically complex concept, like black dog (to use a typical Fodorean example), had its locus in the brain spatially distributed in a way that was isomorphic to its complexity. In this case, the concept black had location A, the concept dog had location B, and some kind of structural relation between A and B constituted the relation between the concepts as elements of the unified concept black dog. Now what could she say about simple concepts? We need not enter into debate about which ones they are, but that there are some is highly plausible: take the concept being, or the concept unity, or identity. Many non-naturalists in ethics would say good was unanalyzable. Let us assume these concepts cannot be broken down into constituents, though it may be possible to give a contextual explication of them, illustrate them, and so on.
Assuming all the other difficulties could be overcome, there would still not be any prospect of finding a material locus for these simple concepts unless the putative locus was itself materially simple, in the sense of being material but metaphysically indivisible.7 Speaking generally, the very idea of a material simple seems not to make any sense. If a material object were simple it would be unextended; but then in what sense would it be material? To countenance extended metaphysical indivisibles would be only to countenance objects that could not—perhaps according to metaphysically necessary laws of nature—be physically separated into parts. Such objects would still have geometrical parts, that is to say sub-regions defined by the spatial (or spatiotemporal) boundaries enclosing them. Yet simple concepts are in no sense complex: there is no sense of “part” according to which a simple concept has parts that are either separable or else inseparable according to some law but nevertheless definable in a way akin to the definition of geometrical parts of putative extended but metaphysically indivisible simples. Another approach might be to understand materially simple loci as idealizations or limits, like spacetime points. But there is no way of similarly understanding simple concepts such that the latter could be mapped onto the former: the simplicity of simple concepts just is not like any kind of simplicity that could be attributed to material loci.
Going back to the idea of treating materially simple loci for simple concepts as extensionless points, it must be emphasized that an extensionless point is not a something but a nothing, and so could not be a locus for simple concepts, which are something. Further, extensionless points cannot have any constitutive relation to the extended, which is why Aristotle was adamant that the infinite divisibility of space and matter is only potential, not actual (Ross 1930, III.5–8, 204a7–208a25). But simple concepts do have a constitutive relation to complex ones, as good, for instance, does to good person. Yet suppose, despite all the difficulties, we could make sense of the idea of a material simple: could it be a candidate locus for simple concepts? Well, are we to postulate a simple located in the brain? If so, is it the same simple in which all simple concepts are stored? If the answer is yes, it is hard to make sense of the idea of multiple simple concepts in one materially simple location—about as hard as making sense of many dimensionless points located at one dimensionless point. If the answer is no, and the materialist proposes multiple material loci, he has to account for the mental unity by which one mind has many such concepts. Perhaps he could give an account in terms of geometrically defined structures linking any simple concepts to all the other ones, but no such account is currently available. And there are still the complex concepts, like black dog, to deal with. They could not be located in material simples; at least, it is hard to see how any kind of complexity could be embodied in something simple. Even on the interpretation of material simples as extended but metaphysically indivisible, the problem of separability rears its head: for the simple so understood is not divisible, but the complex concept is. Fred can have the concept black without the concept dog, and vice versa. 
He can have the concept black and then acquire the concept dog, or have both concepts and then lose one of them. But the extended yet indivisible material locus is by definition never separated, let alone put back together or increased. The other option is for the materialist to hold that complex concepts have materially complex locations, such as extended brain regions. Then he has to account for mental unity given that the simple concepts have simple locations. How are the simple locations related to the complex ones—in the way that lines are related to points, or areas to lines, or volumes to edges? I do not claim to have proved that no such structural account could be given, only to have shown that the existence of simple concepts merely aggravates the already immense difficulty of smoothing over the fundamental mismatch between concepts and their putative material storage.
One of the fundamental problems of cognitive science, in its ubiquitously materialistic guise, has been to explain the storage of concepts. Most of the research, however, is either beside the point insofar as it attempts to solve the storage problem, or else yields precious little knowledge. In a recent paper, Martin and Chao (2001, 195) note: “A common feature of all concrete objects is their physical form.8 Evidence is accumulating that suggests that all object categories elicit distinct patterns of neural activity in regions that mediate perception of object form (the ventral occipitotemporal cortex).” The authors go on to describe how functional brain-imaging techniques show that representations of different object categories are located in discrete cortical regions that are “distributed and overlapping,” embedded in a “lumpy feature-space” (2001, 196). To be sure, functional imaging may well reveal correlations between certain intellectual activities and certain cortical activities: if, as the dualist should hold, persons are essentially embodied beings,9 such correlations are only to be expected since persons require corporeal activity in order to interact with the world both physically and mentally. But correlation and co-location are not the same thing.
Martin and Chao go on, prudently, to say (2001, 196): “Clearly, it would be difficult, as well as unwise, to argue that there is a ‘chair area’ in the brain. There are simply too many categories, and too little neural space to accommodate discrete, category-specific modules for every category. In fact, there is no limit on the number of object categories.” In a later paper, Martin reviews the evidence for regions of neural activity associated with the learning and recall of “object concepts” (Martin 2007). What object concepts are supposed to be is not clearly explained, though he talks about the “representation of the meaning of concrete objects and their properties” (26), itself a somewhat vague and loose formulation. What he concludes is that information about the sensory and motor properties of objects is “stored in [the] corresponding sensory and motor systems” of the brain (38), and that objects in different categories, such as faces and animals on the one hand and tools on the other, are “represented in” distinct but overlapping regions. Yet he holds that “object concepts are not explicitly represented, but rather emerge from weighted activity within property-based brain regions” (25). Again the formulation is rather vague, but the idea seems to be that knowledge of what kinds of object are and what they do is spread throughout the brain, engaging a number of predictable regions associated with sensory and motor knowledge, as well as category-specific regions that themselves overlap the former.
All of this is consistent with Martin and Chao’s considered view that there cannot be a region for each category. That distinct neural regions are associated with broad categories of object (e.g., animals versus tools), and that every representation of an object concept engages numerous distinct but overlapping areas, does nothing to undermine the idea of a fundamental ontological mismatch between concepts and their putative material embodiment. The intellect is capable of grasping a potential infinity of concepts, but no corporeal organ can harbour a potential infinity of anything. Looked at purely quantitatively, the problem can be put in terms of a dilemma. Either the more complex the concept, the larger the brain region in which it is stored; or the more complex the concept, the smaller the region. If we take the former, then the materialist is faced with the evident finite size of the brain: at some point there simply will not be enough brain space for new and more complex concepts than any that have so far been stored. If we take the latter, then we have to believe that more and more complex concepts can somehow be stored in smaller and smaller regions: yet how can this be? Are we to believe that, say, the concept black, furry, friendly, small, loyal, hungry dog is stored in a smaller region than the concept dog? How can less neural matter perform a greater storage function?
Looked at qualitatively, the intellect is distinguished by the feature that it can grasp a potentially infinite number of categories of concept, and within each category a potentially infinite number of exemplars. In other words, there is no limit to the number of kinds of thing the intellect can grasp, and no limit to the number of examples of each kind that it can recognize. By contrast, organs such as the eyes or ears can only receive categorially limited kinds of stimulus, such as colours and sounds respectively (inter alia); and within each kind of sensory stimulus they can only receive a limited number of examples; hence we cannot naturally see certain colours or hear certain sounds. The very physical finiteness of the organs of sight and hearing means they are bounded with respect to what kinds of information they can take in.
This is patently not so for the intellect; but it does not exclude the fact that the intellect, being finite in its own way, cannot discover certain things. There is a difference between the intellect’s not being able to reach certain truths by its own operation, and its suffering an intrinsic material limitation on the kind of information it can take in. The absence of such a material limitation, again, is consistent with the intellect’s being extrinsically limited in respect of the physical information it can take in—for example, not having the concept of a colour that is beyond the visible spectrum available to the eye.10 But if the sort of limitation just mentioned applies to the eyes and ears, it must apply to any proposed organ for storing concepts. The features of the eye and ear that make them singularly unsuitable for intellectual operation apply equally to the brain, the nervous system, or any other proposed material locus. It is the very materiality of such a locus that prevents it from storing the proper objects of intellectual activity.
It might be wondered why the analysis I have given of the ontological mismatch between concepts and potential material loci does not apply equally to lower mental functions such as sensations and feelings. Should not the dualist equally argue that sensations, for instance, are unextended, abstract, and universal? And if so, since many animals have them, shouldn’t we be dualists about those animals? A dualist could, of course, bite the bullet and draw the same conclusion as she draws concerning human beings. But there is no obvious reason to do so.
The only sense in which a sensation such as pain is an abstract entity is that it is a type of sensation that has particular tokens. It is not an abstracted entity in the sense we should give to concepts as explained earlier. In order to have a pain, a creature does not need to abstract anything from anything, whether in the extra-mental or the mental world. In order to have a pain, nothing universal—no form, to use the traditional terminology—needs to enter into the mind of the creature that possesses a pain. Pain is itself a universal—a kind of modification or characteristic of a thing. Its instances are the particular pains that individual pain sufferers have. Being in pain is also a universal, and its instances are the particular states of being in pain of individual sufferers. Nevertheless, for an individual to be in pain, no universal needs to enter into the mind of the sufferer; what happens when a creature is in pain is, as far as universals go, perfectly well accounted for, in contemporary terms, by the type/token distinction. A token of a certain sensation type gets into the mind of the sufferer. How does it do so? By the usual causal processes by which pains are brought about. Now there might be a further question about the qualitative aspect of pain, whether it is reducible to anything physical, and so on. To discuss this would take us into the classic issues surrounding reductionism and the identity theory that I have sought to avoid. There is no need to canvass them here because they both play no role in my argument and raise questions about the very terms in which the current debate is framed, some of which I have already discussed elsewhere (Oderberg 2005). As far as the argument presented here is concerned, all that needs to be said is that the storage problem has no evident analogue in the case of a lower mental function such as sensation in general or pain in particular.
Since, in the case of sensation, nothing abstract needs to enter into the mind of the creature that has it, nothing unextended or universal needs to enter into its mind either. What enters its mind will be wholly particular, and as such there should be no metaphysical barrier to investigating where such a particular might be located in a physical object such as the brain or a physical system such as the nervous system. By contrast, in order to have the concept of pain something abstract, that is to say abstracted, would need to enter into the mind of the possessor of that concept. The possessor would need to abstract from particular pains the universal or form pain, a real universal existing as multiplied in its various instances. The ontological mismatch problem, hence the storage problem, would again arise. But we have no reason to think that non-human animals that are capable of having pain are also capable of having the concept of pain. So again the dualist is in no way forced to countenance the existence of anything immaterial in the animals.
The idea that certain mental functions require an immaterial element in the human mind raises, of course, all the usual questions about what that element could be, and whether countenancing it is simply a case of explaining the obscure by means of the more obscure. I have discussed some of the issues related to these worries in another place (Oderberg 2005). Speaking generally here, I want to end with two observations.
First, there is much that can and should be said about the immaterial element in the human intellect, especially how it is related to the body. Hylemorphic dualism has a metaphysical story to tell about that relation and about the operations of the mind in both its material and immaterial aspects. In many ways it is a superior account to other kinds of dualism. It sees the human person as a compound of immaterial soul and material body, united in a complex way that is in some ways like, and in others unlike, the hylemorphic composition of material substances. So hylemorphic dualism is embedded within a broader metaphysic and is only comprehensible within the terms of that metaphysic. The broader account is, however, evaluable on its own merits, and itself has much to commend it (see further my 2007). Perhaps what distances hylemorphic dualism most from Cartesian dualism is precisely that it does not treat the mind as an add-on to an otherwise materialistic and mechanistic universe, but embeds it in a general ontology. True, for the hylemorphic as for other kinds of dualist, there is a radical discontinuity between mind and matter. But for the hylemorphist there is also a continuity according to which higher mental functions such as concept acquisition and possession are placed in an overall hierarchy of functions possessed by the various kinds of thing that exist. The human being himself is placed within such a hierarchy. He is not a Cartesian ego or centre of consciousness placed within a sea of materiality. The human being is himself essentially material in part, and so not wholly explicable in immaterial terms. There is a tension here, but it is a healthy tension the understanding of which enables us to comprehend the place of the human being in the cosmos.
Secondly, and on a more modest note, there is nothing wrong in itself with negative ontology any more than negative theology. To establish the basic claim of dualism, at least along the lines presented here, nothing needs to be said about what the immaterial element of the mind is, whether the mind is wholly immaterial, whether it has location, how it interacts with the body, how it could support the existence of concepts within it, and so on. All that needs to be established is that whatever the acquisition and possession of concepts really involves, it cannot be purely material. Matter simply is not sufficient to support or explain the phenomenon of human conceptual thought. To say that conceptual thought cannot be like that does not imply an obligation to explain just what it is like. To some extent, though, I have said what it must be like. There is still a lot more to be said, drawing on a story that in all its essentials goes back to Aristotle.11
On the Generation of Animals II.3, 736b28: “for bodily activity [somatiké energeia] has nothing in common with the activity of reason [nous]” (my translation); see also De Anima II.1, Ross (1931), 413a6 and De Anima III.4, Ross (1931), 429a25.
I also do not intend to discuss the prototype theory of concepts, according to which concepts are representations encoding information about statistical relations between things that fall under the concept and features the things possess. The theory, at least on some interpretations, takes concepts to have an entitative character in the sense I propose, but prototype theory is also wildly implausible and has been refuted by Fodor; see (1998), chapter 5.
Talk of “internal” and “invisible” qualities invites the thought that children are “hidden structure” essentialists of a Putnamian or even Lockean kind, but such essentialism is in itself questionable as a general theory of essence (Oderberg 2007, chapters 1 and 2) and does not seem necessary in accounting for children’s essentialist practices.
In fact it is more complicated than that. Even Socrates’s property of having come into the world at a particular spatiotemporal location is shareable in this world, being actually shared by his head; or if this can be evaded by precisifying the respective locations, then it is shared by his body; or for those who believe Socrates just is his body, then it is shared by the mereologically essential lump of matter constituting him at the very time and place at which he and it came into existence.
A word about “extra-mental.” It might be thought that concepts of mental states, for instance, are not concepts of anything extra-mental. But “extra-mental” in the present context means “outside the mind of the person having the concept.” To have a concept of pain, for instance, involves having the concept of something that exists outside the mind of the possessor, even if the possessor happens to be in pain. Their particular pain instance is in their mind; but pain itself is not.
I have argued that there are no concepts of individuals. What about the concept of God? (I am grateful to Philip Stratton-Lake for posing this question.) On the theory of what concepts are that I propose, there is no concept of God either, appearances notwithstanding. Although there is no room to go into all the details here, the basic idea is that it is no more the case that God has instances than that Socrates or Fido have instances. God is no more identical to any universal or combination of universals than are other individuals. Hence God is no more to be identified with anything the mind can abstract than are other individuals. Indeed, it is plausible that there is even more reason for denying that there is a concept of God than that there is a concept of Socrates. For Socrates has a definition inasmuch as he, like all other human beings, is a rational animal. So we at least have the concept of Socrates qua rational animal. This is not the concept of Socrates qua Socrates, i.e. qua individual distinct from other individuals of the same (or another) kind. Such concepts are what I deny in the present section: it is Socrates’s very individuality that prevents there being a concept of him in this sense. If we want to say that there is a concept of Socrates in the sense that we have the concept of a human being and can define it, well and good. But it is arguable that God does not even have this. God does not have a definition in the sense in which Socrates has one, since He does not fit into the genus/species taxonomy whereas Socrates does. (For more on taxonomy, see my (2007).) He is not an instance any more than He has instances. We sometimes think of God as a kind of deity or divinity, but on reflection we can see that such a way of thinking is at best merely analogous to the way we think of Socrates as a kind of human, at worst incoherent. For what might the kind deity even be? It is not the mere necessary uniqueness of God that makes such a question seem absurd (though I suspect this to be relevant) but the fact that there are no available criteria for what even counts as being in the putative kind deity.
It follows that the typical textbook examples of the definition of God (a being that is omnipotent, omniscient, etc.) are not strictly definitions at all but ways of thinking of God: to use the Kripkean terminology, such property designations help to fix the reference of “God,” just as “was the teacher of Plato” helps to fix the reference of “Socrates.” That I can think of Socrates as the teacher of Plato does not imply that I have a concept of Socrates. The Anselmian account of God as a being greater than which none can be conceived should be treated similarly.
Why couldn’t the locus be materially complex? But then it would have parts, whereas simple concepts have none. Suppose, however, that there was a law of nature such that the proposed materially complex locus for a simple concept was a physical minimum; any locus smaller than that could not be the locus of a concept. If this were the case, then although the complex locus would be divisible, its parts could not themselves be loci for concepts, thus matching the property of simple concepts that they have no parts that are concepts. The problem is that the complex locus would still be divisible, even though not into parts that could themselves house concepts. Simple concepts, by contrast, have no parts whatsoever (ex hypothesi), whether semantic, syntactic, or of any other kind. Hence they would have nothing to correspond to the putative materially complex locus, and the mismatch between them would stand. (I am grateful to Andy Taylor for raising the point that prompted this note.)
Note the use of the term “form,” which in the context of the paper means something more than shape. The Aristotelian air is suggestive.
To say that persons (by which I mean human persons) are essentially embodied is not to say that they must have a body at every moment of their existence; see further Oderberg (2005).
Similarly, what I argue is consistent with our being limited by memory and other physical constraints on what concepts we can retain. That we can and do forget what we have learned does not undermine the storage problem. The process of forgetting is little understood, but whatever it involves, it is not an excess of concepts physically stored in the brain, i.e. a simple lack of brain space. The physical mechanisms of memory do not help us to resolve what is a metaphysical problem.
I am grateful to colleagues and students at the University of Reading Work in Progress seminar for comments on a draft of this paper.
Ezcurdia, M. 1998. “The Concept-Conception Distinction.” Philosophical Issues 9: 187–92.
Fodor, J. 1995. “Concepts: A Potboiler.” Philosophical Issues 6: 1–24.
———. 1998. Concepts: Where Cognitive Science Went Wrong. Oxford: Clarendon Press.
Frege, G. 1951. “On Concept and Object.” Mind 60: 168–80. (Orig. pub. 1892; translated by P. Geach and M. Black. Reprinted in Translations from the Philosophical Writings of Gottlob Frege, edited by P. Geach and M. Black. Oxford: Blackwell, 1952, and in The Frege Reader, edited by M. Beaney. Oxford: Blackwell, 1997.)
Gelman, S.A. 2003. The Essential Child: Origins of Essentialism in Everyday Thought. New York: Oxford University Press.
Lowe, E.J. 2006. The Four-Category Ontology: A Metaphysical Foundation for Natural Science. Oxford: Clarendon Press.
Macià, J. 1998. “On Concepts and Conceptions.” Philosophical Issues 9: 175–85.
Margolis, E., and Laurence, S., eds. 1999. Concepts: Core Readings. Cambridge, MA: Bradford Books/MIT Press.
Martin, A. 2007. “The Representation of Object Concepts in the Brain.” Annual Review of Psychology 58: 25–45.
Martin, A., and Chao, L.L. 2001. “Semantic Memory and the Brain: Structure and Processes.” Current Opinion in Neurobiology 11: 194–201.
Nagel, T. 1974. “What Is It Like to Be a Bat?” The Philosophical Review 83: 435–50.
Oderberg, D.S. 2005. “Hylemorphic Dualism.” In Personal Identity, edited by E.F. Paul, F.D. Miller, and J. Paul, 70–99. Cambridge: Cambridge University Press. (Orig. pub. in Social Philosophy and Policy 22 (2005): 70–99.)
———. 2007. Real Essentialism. London: Routledge.
Peacocke, C. 1992. A Study of Concepts. Cambridge, MA: MIT Press.
Peck, A.L., trans. 1943. Aristotle: Generation of Animals. Loeb Classical Library. Cambridge, MA: Harvard University Press.
Ross, W.D., ed. 1930. Aristotle: Physics. Oxford: Clarendon Press. (Vol. II of The Works of Aristotle.)
———. 1931. Aristotle: De Anima. Oxford: Clarendon Press. (Vol. III of The Works of Aristotle.)