
Essays on Values and Practical Rationality

Ethical and Aesthetical Dimensions


Edited By António Marques and João Sàágua

The essays presented here are the outcome of research carried out by members of IFILNOVA (Institute for Philosophy of New University of Lisbon) in 2016.

The IFILNOVA Permanent Seminar seeks to show how values are relevant to humans (both socially and individually). This seminar is the ‘place’ where different research will converge towards a unified viewpoint. This includes the discussion of the following questions: What is the philosophical contribution to current affairs and decisions that depend crucially on values? Can philosophy make a difference, namely by bringing practical reason to bear on these affairs and decisions? And how is it to do so? Who are our scientific ‘allies’ in this enterprise: psychology, communication sciences, even sociology and history?

This volume shows the connection between practical rationality and values, covering its ethical, aesthetic and political dimensions.

John McDowell on practical rationality – is he (really) talking about us? (Susana Cadilha)




In what follows I shall try to give an account of John McDowell’s conception of practical rationality, drawing for the most part on his collection of articles Mind, Value and Reality (1998). My overall aim is to argue that McDowell’s conception of human practical rationality is not in line with what we know about the way we think and act. Since it is not representative of people like us, real agents, it is not, I think, a realistic conception.

I will give particular attention to two papers McDowell wrote arguing against two other famous philosophers, Philippa Foot and Bernard Williams: ‘Are moral requirements hypothetical imperatives?’ (1978) and ‘Might there be external reasons?’ (1995). There he tries to answer the following question: what does it mean to say that someone has a reason to act in a specified way? Williams (1981) famously argued that there are only internal reasons, meaning that one only has reason to do whatever practical reasoning,1 starting from one’s existing motivations, may reveal that one has reason to do.2 The point is not merely that one only has reason to do what in some way satisfies an element in one’s subjective motivational set, but that those elements govern the practical reasoning leading up to the conclusion that one has reason to do something.

McDowell, on the other hand, holds that there are external reasons – reasons to act that are unconnected with our existing motivations. How do people acquire such reasons? How do they come to believe that there is a reason to act in a certain way, if there is no connection whatsoever with their motivational set? For a reason to be external, it must have been there all along, so that in coming to see it the agent arrives at a proper consideration of the matter. How, then, do we manage to get things right?3

Let us focus on a typical domain of practical rationality – the ethical domain. According to Williams, ethical reasons are internal reasons; according to McDowell, they are external reasons. This means there are ethical reasons for us to do something even if we are not able to see them and no practical reasoning or deliberative path can take us there. The question is: how can we get things right, as the virtuous person would?4 How do we come to believe there is a reason for acting in a specified way, and how can we acquire a new motivation by getting things right?

McDowell is not purely Kantian – he does not say that the agent is able to get things right because he is able to deliberate correctly, i.e. through a purely rational procedure. He clearly states that ‘the transition to being so motivated is a transition to deliberating correctly, not one effected by deliberating correctly’ (McDowell 1998: 107). No purely rational procedure would make us consider the matter aright – for instance, seeing that we should give back the wallet some passer-by has dropped. But that does not mean there is no reason to do so, and I would be able to see it if I were the right kind of person. If I had had a proper ethical upbringing, I would have my eyes opened to reasons I otherwise cannot see. Just as someone who has not had the benefit of an artistic education cannot properly enjoy the experience of a work of art, someone who has not been properly brought up cannot see the reason why he should give back the wallet. But that reason exists (it is an external reason) – and ‘it might take something like a conversion to bring the reasons within the person’s notice’ (McDowell 1998: 107).5

Getting things right – figuring out which ethical reasons there are – is then a matter of ‘tuning up’ our moral perception. What distinguishes a virtuous person (who can clearly see what should be done) from a non-virtuous one is not that the former has different motivations – she simply sees things differently.

This is what leads us to McDowell’s most controversial theses. If acting correctly is just a matter of seeing/perceiving correctly, any moral fault will be a cognitive fault. This means that if I am not able to do the right thing (for instance, to give back the wallet I just found), it is not because I lack motivation to do it, but only because I lack the knowledge that it is the thing to be done. The difference between an honest and a dishonest person is not that they have different motivations or interests; rather, that difference lies exclusively in their ways of perceiving their circumstances. Thus, it would not be possible for two people to have exactly the same understanding of the circumstances and yet see different reasons to act.

Briefly, then, according to McDowell, it is a person’s understanding of how things are that gives her a reason for action. And if I were the right person, I would see the right reasons to act.6 Moral reasons, in particular, have no direct link with the agent’s existing motivations or interests; beyond her understanding of the relevant facts, there is no need for the agent to care about the situation, in the sense of some desire functioning as an independent, additional source of motivation. Her belief does that on its own.7

The problem is, of course, a Humean one – is it possible for a purely cognitive state (a view of how things are) to entail some disposition to act, or to make the action attractive to its possessor? Hume would put it like this: does reason motivate?

McDowell would simply say that to assume that cognitive and conative/affective states have distinct existences is just a Humean dogma. Similarly, we do not have to take for granted that the world is, in itself, ‘motivationally inert’.

My worries about this view, I must say, have less to do with the worldview it presupposes than with the picture of mankind this view leaves us with. Are we really like that?

According to this view, there is no possible situation in which someone has the relevant understanding of the situation (say, that the thing to do is to give back the wallet) and is not motivated to act accordingly. Thus, believing that giving back the wallet is the thing to do necessarily entails wanting to do it (if an agent A thinks there is a reason to φ in a particular case Y, then he must be willing to φ in Y).8
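The internalist schema just stated can be set out in a compact form. The notation below is my own gloss on the text, not McDowell’s: Bel stands for the agent’s belief, Mot for the agent’s motivation.

```latex
% Motivational internalism, as the paper attributes it to McDowell:
% believing one has a reason to act suffices for being motivated to act.
\[
  \mathrm{Bel}_A\bigl(\text{there is reason to } \varphi \text{ in } Y\bigr)
  \;\Rightarrow\;
  \mathrm{Mot}_A\bigl(\varphi \text{ in } Y\bigr)
\]
% The objection pressed later in the paper is that this conditional
% can fail: the belief can be present while the motivation is absent,
% as in cases of akrasia.
\[
  \exists A \,\exists Y \;
  \Bigl(\mathrm{Bel}_A\bigl(\text{there is reason to } \varphi \text{ in } Y\bigr)
  \;\wedge\; \neg\,\mathrm{Mot}_A\bigl(\varphi \text{ in } Y\bigr)\Bigr)
\]
```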

My doubts about this intellectualist account are the following: does that description really match the way people are and act? Is it really the case that I don’t give back the wallet just because I don’t know that it is the thing to do (I merely have the illusion that I know)? Closely related to this is McDowell’s description of the virtuous agent’s moral psychology: the virtuous person simply does not need to weigh reasons, because once he sees what the thing to do is, every contrary reason he might have simply vanishes – ‘the dictates of virtue, if properly appreciated, are not weighed with other reasons at all, not even on a scale that always tips on their side. If a situation in which virtue imposes a requirement is genuinely conceived as such, according to this view, then considerations that, in the absence of the requirement, would have constituted reasons for acting otherwise are silenced altogether – not overridden – by the requirement’ (McDowell 1998: 90).

It seems that McDowell has in mind some kind of ideal agent, not a real one. But it is an ideal with no particular function attached, because there is no way to teach a non-virtuous man to become virtuous, and hence no definite way to get closer to that ideal. And what about akrasia? On this view it seems impossible that someone may act contrary to his best judgment. If the akratic person knows he is not acting as virtue demands, then presumably he conceives the circumstances of his action as the virtuous person would conceive them. But then, if acting correctly is just a matter of perceiving the matter correctly, there is no room left for akrasia – if someone conceives the situation as the virtuous person does, then he would know what to do, and any other considerations that might constitute reasons for acting otherwise would simply be silenced.

The only solution available to McDowell is simply to posit that the incontinent person’s understanding of a situation does not match that of a virtuous person.9 But in that case, the very conceptual possibility of akrasia vanishes. If there cannot be a perfect match with the way a fully virtuous person conceives the circumstances of his action, then an akratic action is conceptually impossible, since it is never the case that someone acts contrary to his best judgment; people behave differently just because they have different understandings of what is to be done.

In a nutshell, what I am arguing is that, while not a pure Kantian, McDowell still inflates the agent’s rationality by holding that if the agent thinks he has a (moral) reason to do X, then he wants, or is motivated, to do X. My claim is that we are not like that: sometimes we really think that we must do X, or that we have reason to do it, and still we want to do something else.

If Hume has a minimalist conception of practical rationality (it plays only an instrumental role, that of finding the right means to attain a given end), McDowell is guilty of the opposite excess, assuming the intellectualist position that practical knowledge necessarily entails motivation to act – that the agent must want to do what he has a reason to do. Neither of these seems to give an accurate account of how rationality and desire combine to produce action. If it is true, on the one hand, that we can rationally deliberate about ends and not only about means (that desires are subject to rational criticism), it is also true, on the other, that there is no guarantee that the agent’s motivation will always align with the agent’s reasons, or that the agent necessarily wants to do what he thinks it best to do.


So far I have been arguing that McDowell gives an inflated account of practical rationality, and thus that he is not actually speaking about real agents, people like us.

Connected with that thesis, there is another way in which I think McDowell shows his alignment with the classical philosophical conception of mankind according to which human beings are the exemplars of rationality and autonomy. In fact, McDowell thinks that man is a creature who stands apart from animals by virtue of his powers of self-control, reasoning, and reflection – that there is a clear line separating the human from the animal way of living.10 This is because McDowell draws a very sharp distinction between conceptual and non-conceptual creatures – between cognitive agents and thinkers, on the one hand, and the rest of animal life on the other.

Continuing with the ethical domain, it is easy to see how McDowell is prone to recognize the autonomy of any normative domain such as the ethical one. There is an is–ought gap, and no communication across it is allowed. That means that moral matters are purely conceptual and rational matters – thinking about what we should do is a rational ability only humans have, and every descriptive or psychological aspect of man is pulled apart from that ability. I mean: the instinctive tendencies we share with other animals do not determine that conceptual and rational ability, and that rational ability cannot be explained in terms that are not themselves rational.11 My doubts are the following: is it really the case that when it comes to moral matters our ‘first nature’ traits are simply overridden? That we rid ourselves of all our natural determinations? That with the ‘onset of reason’, as McDowell puts it, the practical tendencies that are part of our first nature simply vanish? In my view it is not very plausible to think, with McDowell, that there is an abrupt chasm between biologically determined creatures, on the one hand, and creatures moved only by reasons, on the other. Our rational and conceptual abilities do not override our animal nature.

It seems clear to me that only rational beings are capable of elaborate moral systems and sophisticated forms of moral thinking. Sophisticated forms of moral thinking imply conceptualization and abstract reasoning. After all, besides being capable of feelings of outrage in the face of asymmetry and unfairness (an inequity aversion that we share with non-human primates),12 we are also able to design sophisticated constructs such as theories of justice. The ability to evaluate morally that characterizes us at this point in our development involves the ability to pose what philosophers usually call the ‘normative question’: to think about what should be the case, to question the assumptions and the consequences of action. Now, this is not an automatic behavior or an instinct. This fully developed ability to think morally is what characterizes us as moral beings. What seems questionable is to conceive of no continuity whatsoever between the one and the other, and to hold that our ability to think morally is of a fundamentally different nature, one that keeps us irremediably apart from the ‘mere’ dispositions and feelings of non-linguistic animals. What seems questionable is the idea that being a moral agent has to do with the ability for conceptual thinking, but not also with the ability to repudiate certain asymmetries in situations. It seems plausible to say that there is a link between this fully developed capacity we exhibit today and the intuitions and dispositions probably exhibited by our ancestors. My point is this: because we are linguistic beings, capable of conceptual and abstract thinking, we reach a level of sophistication in moral thinking that allows us to think in terms of reasons, and to develop theories that justify moral positions before the members of a community who also have the ability to discuss them.
But the fact that we have reached this level does not mean that the ability to assign value to items in the world, and perhaps the content of some evaluative positions, may not have been influenced and shaped by factors other than rational reflection. It seems to me legitimate to think that there was evaluation and value assignment before there was a rational capacity for justification. This basic capacity to experience items in the world as requiring, or counting in favour of, certain reactions precedes a linguistically mediated reflective ability to pose the normative question. So, because we are sophisticated creatures, we can take a step back from these primitive evaluative dispositions or intuitions and need not follow them compulsorily; but the fact that we are reflective creatures who can take that step back does not entail that such dispositions no longer influence our moral judgments.

Another thesis that is difficult to believe is that our rational and conceptual capacities are completely untainted by other aspects of our psychology. If we take a careful look, for instance, at some experiments in moral psychology,13 we can see that it is not the case that our moral judgments always arise out of data manipulation and further rational deliberation. Rather, what we usually call a moral judgment may after all have its basis in a ‘gut reaction’ and may not be an expression of propositional knowledge. When faced with certain types of morally innocuous transgressions (like using a national flag to wipe the floor, or drinking a glass of water after having spat in it), people show the same kind of reactions that moral transgressions elicit (the acts are thought of as universally wrong, of a non-contingent and mandatory nature, their wrongness independent of authority), even though they cannot find a reason for judging so. This appears to bring moral judgments close to a certain kind of affective response in which reflection on propositional contents plays little or no role at all.14

These experiments are in line with numerous experimental studies at the core of cognitive psychology, which rest on the hypothesis that most of our judgments result from the triggering of fast and frugal heuristics, not from deliberative processes. It is not absurd to think that the same happens with moral judgments: they result from heuristics, and many of them are automatic.15 (This does not mean moral reasoning has no place, but its main function looks like that of post hoc rationalization – it serves to justify previous intuitions, or comes into play whenever a conflict between moral intuitions arises.) In fact, in many different areas of research it has been found that people make evaluations (as to whether an event or object is good or bad, for instance) immediately, unintentionally and without awareness that they are doing it, so it may be the case that ‘what we think we are doing while consciously deliberating in actuality has no effect on the outcome of the judgment, as it has already been made through relatively immediate, automatic means’ (Bargh & Chartrand 1999: 475). It is not absurd to think that the influences of heuristics and biases uncovered in recent cognitive psychology are widespread in everyday ethical reflection.

So, it might be the case that human beings are not paragons of rationality and autonomy. But if we stick to McDowell’s theory of practical rationality, it is clear that the capacity that determines, in a given situation, what matters about that situation and that enables us to evaluate it is a conceptual, conscious ability that only rational animals possess (it is the result of being initiated into a ‘conceptual space’, as McDowell puts it). In a practical syllogism – which can be used to deliberate or to organize an agent’s reasons for action – a judgment determining which feature of the situation matters constitutes one of the premises. And it is also clear that the actions through which we manifest our moral character must be chosen; even if McDowell grants, following Aristotle, that virtuous action is the result of habit, we must not understand that as happening out of instinct or inertia – on the contrary, virtue requires that ‘specially human capacity for discursive thought’ (McDowell 1998: 39). But if we consider virtue-ethical ideals of practical rationality in light of the model of human cognition now emerging, we realize that moral behavior is not immune to cognitive biases and does not always flow from reflectively endorsed moral norms or from robust traits of character like virtues. Rather, we see that minor situational influences (such as ambient noise, or the fact that someone is in a hurry) shape moral behavior.16 In fact, various experiments in social psychology have revealed that subjects were much more likely to help someone in need if they had just found a dime, were not in a hurry, or if the ambient noise was at normal levels. Circumstantial and morally irrelevant factors influence moral behavior in a decisive way, and can also influence whether we perceive a situation as an occasion for ethical decision at all. And it is extremely relevant that those cognitive biases or response tendencies are beyond the reach of individual practical rationality.

It thus seems that neither McDowell’s conception of our moral abilities nor his idea that believing that X is the thing to do suffices for being motivated to do it is in line with what we know about the way we are and think. One could simply reply that real agents are defective practical reasoners, but in that case we would have to admit a distance between the picture of human cognition that applies to virtuous people and the model of human cognition now emerging in the cognitive sciences, which applies to everyone else. And how useful and illuminating can that be?

My point in this paper was just to argue that from a philosophical perspective, no less than from an empirical one, McDowell’s account of practical rationality is not realistic, since it seems to ignore features that are determinative of us as human agents.


Aristotle (2009). The Nicomachean Ethics. Oxford: Oxford University Press.

Bargh, J., & T. Chartrand (1999). ‘The unbearable automaticity of being’. American Psychologist 54 (7): 462–479.

Brosnan, S., & F. De Waal (2003). ‘Monkeys reject unequal pay’. Nature 425: 297–299.

Darley, J., & D. Batson (1973). ‘From Jerusalem to Jericho: A Study of Situational and Dispositional Variables in Helping Behavior’. Journal of Personality and Social Psychology 27: 100–108.

De Waal, F., & M. Berger (2000). ‘Payment for labour in monkeys’. Nature 404: 563.

Greene, J., Sommerville, R., Nystrom, L., Darley, J., & J. Cohen (2001). ‘An fMRI investigation of emotional engagement in moral judgment’. Science 293 (21): 2105–8.

Haidt, J., Koller, S., & M. Dias (1993). ‘Affect, culture, and morality, or is it wrong to eat your dog?’. Journal of Personality and Social Psychology 65 (4): 613–628.

Haidt, J. (2001). ‘The emotional dog and its rational tail: A social intuitionist approach to moral judgment’. Psychological Review 108: 814–834.

Haidt, J., & F. Bjorklund (2008). ‘Social intuitionists answer six questions about moral psychology’, in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol.2: The cognitive science of morality: intuition and diversity. Cambridge (MA): MIT Press: 181–219.

Isen, A., & P. Levin (1972). ‘Effect of Feeling Good on Helping: Cookies and Kindness’. Journal of Personality and Social Psychology 21: 384–388.

Mathews, K., & L. Cannon (1975). ‘Environmental noise level as a determinant of helping behavior’. Journal of Personality and Social Psychology 32: 571–577.

McDowell, J. (1994). Mind and World. Cambridge (MA): Harvard University Press.

___ (1998). Mind, Value and Reality. Cambridge (MA): Harvard University Press.

Nagel, T. (1979). The Possibility of Altruism. Princeton: Princeton University Press.

Nichols, S., & T. Folds-Bennett (2003). ‘Are children moral objectivists? Children’s judgments about moral and response-dependent properties’. Cognition 90 (2): B23–B32.

Pettit, P., & M. Smith (2006). ‘External Reasons’, in Cynthia Macdonald and Graham Macdonald (eds.), McDowell and his critics. Oxford: Blackwell Publishing, 2006: 142–169.

Williams, B. (1981). ‘Internal and External Reasons’, in S. Darwall, A. Gibbard and P. Railton (eds.), Moral Discourse and Practice. Oxford: Oxford University Press: 363–372.

1 Williams does not present a restricted account of practical reasoning, though – practical reasoning is more than the mere discovery that some course of action is the means to an end; it is ‘a heuristic process, and an imaginative one’ (Williams 1981: 110). ‘A clear example of practical reasoning is that leading to the conclusion that one has reason to φ because φ-ing would be the most convenient, economical, pleasant etc. way of satisfying some element in S [the agent’s subjective motivational set] …. But there are much wider possibilities for deliberation, such as: thinking how the satisfaction of elements in S can be combined, e.g. by time-ordering; where there is some irresoluble conflict among the elements of S, considering which one attaches most weight to … ; or, again, finding constitutive solutions, such as deciding what would make for an entertaining evening, granted that one wants entertainment’ (Williams 1981: 110). ‘Imagination can create new possibilities and new desires’ (Williams 1981: 105).

2 It is important to notice that, on Williams’s view, it is not required that the agent actually be motivated to do what he has reason to do.

3 By deliberating correctly, Williams would say. If there were external reasons, there would be a procedure of correct deliberation that gives rise to a motivation, but is not controlled by nor connected to the agent’s existing motivations.

4 McDowell follows Aristotle and his virtue ethics – he thinks the most important thing in ethics is to be the right person (the well-educated one). The virtuous person is the measure of the right action, and not the other way around.

5 ‘In moral upbringing what one learns is not to behave in conformity with rules of conduct, but to see situations in a special light, as constituting reasons for acting; this perceptual capacity, once acquired, can be exercised in complex novel circumstances’ (McDowell 1998: 85).

6 McDowell is a moral particularist: there is no rule or criterion to define what the right action is; it will always depend on the particular context. The virtuous person is the one who knows how to act on each occasion, who is sensitive enough to distinguish the particular features of each situation. As I said before, the virtuous person is the measure of the right action.

7 This is how T. Nagel puts it: ‘That I have the appropriate desire simply follows from the fact that these considerations motivate me; if the likelihood that an act will promote my future happiness motivates me to perform it now, then it is appropriate to ascribe to me a desire for my own future happiness. But nothing follows about the role of the desire as a condition contributing to the motivational efficacy of those considerations’ (Nagel 1979: 29–30).

8 This is usually referred to as motivational internalism: ‘The names ‘internalism’ and ‘externalism’ have been used to designate two views of the relation between ethics and motivation. Internalism is the view that the presence of a motivation for acting morally is guaranteed by the truth of ethical propositions themselves. On this view the motivation must be so tied to the truth, or meaning, of ethical statements that when in a particular case someone is (or perhaps merely believes that he is) morally required to do something, it follows that he has a motivation for doing it. Externalism holds, on the other hand, that the necessary motivation is not supplied by ethical principles and judgments themselves, and that an additional psychological sanction is required to motivate our compliance’ (Nagel 1979: 7).

9 ‘The way out is to attenuate the degree to which the continent or incontinent person’s conception of a situation matches that of a virtuous person’ (McDowell 1998: 92).

10 ‘…we do not fall into rampant Platonism if we say the shape of our lives is no longer determined by immediate biological forces. To acquire the spontaneity of the understanding is to become able, as Gadamer puts it, to “rise above the pressure of what impinges on us from the world” (Truth and Method, p. 444) – that succession of problems and opportunities constituted as such by biological imperatives – into a “free, distanced orientation” (p. 445)’ (McDowell 1994: 115–116).

11 ‘Moral education enables one to step back from any motivational impulse one finds oneself subject to and question its rational credentials. Thus it effects a kind of distancing of the agent from the practical tendencies that are part of what we might call his first nature. … If the second nature one has acquired is virtue … [its] dictates acquired an authority that replaces the authority abdicated by first nature with the onset of reason’ (McDowell 1998: 188).

12 See De Waal & Berger (2000) and Brosnan & De Waal (2003).

13 Cf. Haidt et al. (1993); Haidt & Bjorklund (2008); Greene et al. (2001).

14 See also Nichols and Folds-Bennett (2003).

15 One of the simplest heuristics studied in this field is that which makes us immediately agree with and positively value what is said by people we like.

16 Cf. Isen & Levin (1972); Darley & Batson (1973); Mathews & Cannon (1975).