="http://www.w3.org/2000/svg" viewBox="0 0 512 512">

Why Philosophy Fails: A View From Social Psychology

Ben Gibran

Introduction

The central claim of this work is that philosophy is semantically indeterminate, in the sense that there is a high degree of irremediable uncertainty as to what, if anything, it means. This uncertainty is not the subjective kind, of feeling uncertain, but the objective sort, of there being no way to confirm which possible meaning (if any) is the right one, even though a number of possible meanings can be ruled out simply from the words used in the text.

If someone says, “The mind is not a material substance,” she probably does not mean that the mind is made of cheese (since cheese is a material substance); but merely ruling out what she does not mean fails to lend a determinate sense to either ‘mind’ or ‘material’ in her utterance. The temptation is to add more words to ‘clarify’ what she said, and that is where philosophy goes fundamentally wrong. This monograph attempts to explain why.

For the purpose of this argument, ‘philosophy’ is defined as a purely discursive extended inquiry. A discursive inquiry proceeds from theoretical reflection upon general ‘armchair’ beliefs: ones that the average adult person has acquired in the course of a normal life, rather than through empirical observations specifically carried out, or consulted, for the purpose of the inquiry. Not all that is generally called ‘philosophy’ falls under that definition, but enough does to make this critique worthwhile. On the other hand, not all purely discursive disciplines are always called ‘philosophy’ (for example, parts of social, critical or literary theory).

 

A Collective Private Language

The genesis of this work owes much to a paper entitled ‘Sociology as a Private Language’ by Arthur Brittan (1983). In it, he observes that the specialized technical vocabulary of sociology functions in some respects as a ‘private language’ whereby, as he put it, “Academics become locked into their own language games, which eventually are externalized and given shape in the texts [which] themselves are then defined as the arbitrators of . . . experience” (1983, 583).

Brittan’s argument was an informal one. In his own words, “I am not concerned with the felicities and ambiguities of linguistic philosophy or conceptual analysis” (1983, 581). Rather, he sought to warn in general terms that by focusing on the exegesis of its own increasingly technical and self-referential literature, sociology risked losing perspective on the social issues it sought to illuminate. This monograph attempts to frame Brittan’s thesis as a more formal, technical argument aimed at philosophy.

This formal argument proceeds by contrasting philosophy’s methodology with that of the sciences (not to idealize science, but to use it as a foil to highlight certain structural flaws in philosophy). Scientific and technical disciplines have a ‘building block’ configuration, in which knowledge is assembled out of predictions that have been rendered authoritative by replication, often across disciplines, and in many cases by non-experts, such as amateurs and users of everyday technology.

In general, expressions in a ‘building block’ discourse have highly determinate meanings, in so far as the results they describe are widely replicable under critical scientific scrutiny (to preempt any charge of ‘scientism’, it is worth noting that this claim concerns only the semantic determinacy of building-block discourses, not their veracity). In contrast, philosophy has an ‘all or nothing’ configuration, in which virtually all relevant arguments (with the exception of those already proven scientifically, logically or mathematically) are critically examined from first premises in any new inquiry. The reason for this practice is beyond the scope of this thesis, which is chiefly concerned with what the ‘all or nothing’ approach entails within a purely discursive discipline.

It follows that any rigorous philosophical inquiry calls for in-depth knowledge of the relevant literature (knowledge possessed almost exclusively by philosophy postgraduates, preferably PhDs). Some may argue that this is an unremarkable feature of most academic disciplines. However, practitioners in most other disciplines are not expected to have a detailed grasp of every aspect of their inquiry. For example, a biologist does not usually need to replicate all the secondary research that underpins her own work. She can ordinarily trust in cross-replication by others to underwrite the reliability of many findings in her literature survey, some of which may lie outside her area of expertise.

On the other hand, a philosopher would be expected to critically review every work in his bibliography ‘from the ground up’. Because such an effort requires expertise in philosophy and little else beyond common knowledge, its practitioners rely for critical feedback almost exclusively on other philosophers who have been similarly trained, and correspondingly discount the views of anyone without such training (with the rare exception of opinion polls of what ‘ordinary people’ believe). This intellectual insularity turns philosophy into a ‘black box’, in that the philosophically unqualified are unable (in the eyes of philosophers) to make an informed judgement as to the quality of its contents, beyond the most peripheral observations.

Thanks to the ‘all or nothing’ approach, the conclusions of philosophy are presumably not testable without in-depth knowledge of the methods by which they are arrived at: knowledge which can (apparently) only be gained through an extensive program of study in philosophy. As a result, the internal influence of cognitive biases and conformity effects cannot be independently assessed (more on that below). Given that philosophy is a ‘black box’ in the above sense, it is impossible to make a judgement that is both informed and impartial as to whether its discourse makes sense. The rational stance towards philosophy is therefore one of agnosticism as to its semantic determinacy, with the exception of sub-disciplines under ‘philosophy’ that have a building-block configuration (for example, some parts of logic, experimental philosophy, metaphysics or cognitive science).

Some may object that the simpler arguments in philosophy offer windows of scrutiny into the discipline for the educated non-expert. This claim is misleading, because most major arguments in philosophy are accompanied by counter-arguments which are prima facie equally plausible. It is impossible for even diligent non-experts to make an informed judgement as to which philosophical arguments ‘work’, without detailed knowledge of the overarching debate. Even within professional philosophy, there is considerable disagreement on the soundness of most of its substantive arguments.

In the case of ‘building block’ disciplines, no such in-depth knowledge is required for a non-expert to know, for example, that radios generally work as they should, or that appendectomies are usually successful. Collectively, such publicly intelligible displays of advanced technical know-how demonstrate that the relevant theories are not semantically indeterminate, or in less technical parlance, ‘full of hot air’. Scientists rarely acknowledge the fact, but the non-expert public is heavily involved in keeping nonsense out of the sciences, effectively acting as a second, informal layer of quality control after peer review. Not so in philosophy, unfortunately, where one has to demonstrate expert knowledge of the subject for one’s opinion to count beyond the most basic level.

 

Cognitive Bias and Conformity

As mentioned above, philosophy’s intellectual insularity renders it vulnerable to cognitive biases and conformity effects that could distort the discipline’s ability to judge the quality of its output. The idea that philosophers are likely to retain their non-philosophical sensibilities when doing philosophy (and are therefore able to detect deviations from ‘ordinary’ meanings in philosophical discourse) fails to account for the psychological effects of conformity to philosophical modes of communication.

Such effects are exacerbated by the aforementioned non-instrumentality of philosophical discourse, which offers no opportunity for feedback from beyond the ‘ivory tower’ on the quality of philosophical research, through the equivalent of public appraisals of scientific predictive power and technical efficacy. Because the criteria for success in philosophy are (for all practical purposes) internal to its discourse, students have to master the relevant vocabulary and literature in order to be judged competent to critique the discipline’s output, rendering external feedback from non-experts largely irrelevant.

The resulting epistemic circularity strains the ability of philosophers to maintain semantic continuity between ordinary and philosophical discourse. In the course of studying philosophy in an insular and hierarchical academic setting, a scholar’s judgement is subjected to conformity effects. The intensive process of learning new philosophical terminology and usages (as well as ‘unlearning’ the old) induces a loss of perspective, which erodes the value of prior assumptions as a guide in doing philosophy (especially when the discipline aims at questioning those very assumptions).

Even when engaging alone with philosophy through self-study, a person’s judgement is always subject to cognitive biases, distortions in judgement arising from innate human dispositions that may have once conferred an evolutionary advantage (for example, the ‘optimism bias’, a tendency to rate one’s own abilities and the likelihood of future success more highly than warranted; a trait which possibly aided group survival).

Psychologists have identified over fifty types of cognitive bias (Benson, 2 September 2016), which can often distort an individual’s perception of whether he or she is making sense (Fig. 1). These biases are reinforced by the ‘bias blind spot’, the overarching bias of believing oneself to be less vulnerable than others to cognitive bias.

Figure 1: Wikipedia’s complete (as of 2016) list of cognitive biases, arranged and designed by John Manoogian III (jm3). Categories and descriptions originally by Buster Benson. License: CC BY-SA 4.0. Source: Wikimedia, https://commons.wikimedia.org/wiki/File:The_Cognitive_Bias_Codex_-_180%2B_biases,_designed_by_John_Manoogian_III_(jm3).png

A fundamental role of institutionalized academic disciplines is the mitigation of cognitive bias, through collective peer-criticism further grounded in feedback from disinterested outsiders, who are less handicapped by intellectually compromising associations with the discipline in question. Intellectual accountability to external observers is key to mitigating conformity effects within a discipline, such as susceptibility to majority peer influence and deference to authority figures. Peer review is an empty gesture if the process is effectively opaque and unaccountable to those outside the discipline.

Without meaningful external scrutiny, there is little to prevent the transmission of biases across a discipline through its authority figures. Such scrutiny cannot rely solely on expert peer review, because one’s peers may approve of one’s output for a variety of reasons, personal, political or professional, that have nothing to do with its semantic determinacy. Rather, effective intellectual scrutiny encompasses the collective and cumulative judgements of various disciplines and numerous ordinary people, including the readers of this work. Every time we take a paracetamol, drive a car or draw up a will, we are participating in a mass quality-control exercise that underwrites the semantic consistency of the relevant technical discourses.

To demonstrate the seriousness of the threat posed by conformity effects within a discipline, a few findings from experimental psychology are outlined here. The widely replicated Asch experiments on social conformity have an immediate relevance to how academic philosophy is conducted. In the classic experiment by Solomon Asch, a group of volunteers was asked which one of three lines on a card was the same length as a fourth reference line on another card (Fig. 2). The subjects were to take turns giving their answers. Under normal circumstances, an average of only one person out of thirty-five gave a wrong answer.

Figure 2: The cards used in the Asch conformity experiments. The reference card is on the left; the card on the right shows the three comparison lines. Originally by Fred the Oyster. License: CC BY-SA 4.0. Source: Wikimedia, https://commons.wikimedia.org/wiki/File:Asch_experiment.svg

However, in some experiments one participant was unaware that before his turn, the others (who were conspiring with the experimenter) would each deliberately call out the same wrong answer. In those ‘rigged’ experiments, about one-third of genuine subjects gave the same wrong answer as the conspirators. In this one-third, the subjects chose to second-guess the evidence of their own eyes and draw a different conclusion, based on what the majority appeared to believe (Asch, 1951). The Asch experiment demonstrates the ‘bandwagon effect’, the tendency to do or believe something because others are apparently doing or believing the same thing.

The setting of the Asch experiment is not altogether unlike a philosophy seminar, where the lines on the cards are replaced by (invisible, and therefore more elusive) abstract concepts, and the role of the experimenter is filled by an authority figure in the form of a senior academic who ‘steers’ the discussion. In a famous experiment (Fig. 3) on obedience to authority, Stanley Milgram (1963) found that, when asked by an experimenter, most volunteers were willing to give very severe electric shocks to a stranger (really an actor pretending to be shocked).

Figure 3: Illustration of the setup of a Milgram experiment. The experimenter (V) convinces the subject (L) to give what the subject believes are painful electric shocks to another subject, who is actually an actor (S). Many subjects continued to give shocks despite pleas for mercy from the actor. Originally by Wapcaplet. License: CC BY-SA 3.0. Source: Wikimedia, https://commons.wikimedia.org/wiki/File:Milgram_Experiment.png

Milgram noted that when the volunteers were allowed to choose the voltage themselves, most stopped at the lowest levels. He concluded that the experimenter, an authority figure represented by an academic, was able to override the conscience of most volunteers. In academic philosophy, lackluster participation usually exacts a price, in terms of advancement in a degree program or progress towards tenure. The incentive to avoid questioning the basic assumptions of the discipline, for the sake of ‘joining in’, is therefore great. Those with fundamental doubts are likely to leave the discipline, depriving it of internal critics, while the views of external critics are generally discounted as lacking in expertise.

The more time and effort we sacrifice for an activity, the greater our desire to defend it. Psychologists call this tendency ‘effort-justification’. In one study of effort-justification by Aronson and Mills (1959), volunteers underwent either a mild or a severe initiation ceremony to join the same group activity. Volunteers who got in ‘the hard way’ rated the activity more highly. The considerable time and effort required to learn philosophy is an incentive to effort-justification, which fosters a reluctance to question the fundamental assumptions ‘justifying’ the effort.

The conformity effects in the Asch and Milgram studies were greatly reduced in the presence of visible dissent. Some may argue that since philosophy thrives on dissent, conformity within the discipline is likely to be negligible. However, there is very little disagreement in philosophy on the general semantic determinacy or relevance of the subject itself. Anyone who doubts the utility of philosophical discussion is likely to leave of her own accord, ensuring that basic methodological assumptions remain largely unchallenged within the discipline.

In technical subjects, there are mechanisms in place to mitigate the effects of both cognitive bias and institutionalized conformity. Enterprises such as the natural sciences, law, history, and even various arts and crafts are intellectually accountable to the lay non-specialist in a thousand small ways for what they say and do. The practitioners in these disciplines have (by virtue of their subject-matter and/or methodology) an institutionally self-imposed obligation to operate according to standards shared across a diverse range of epistemic communities outside the discipline, including the non-expert public.

Certain conditions have to apply for non-specialist feedback to contribute effectively to countering cognitive bias and institutionalized conformity. These conditions include (but are not limited to) the discipline’s self-imposed intellectual accountability to outsiders and non-experts (through their evaluation of the discipline’s predictive efficacy), freedoms of expression and information, a ‘building block’ configuration to research findings (allowing each to be independently tested through a ‘division of labor’), a rich network of causal or inferential connections between the discipline’s discourse and predicted outcomes, and pressure within disciplines towards greater predictive power and technical effectiveness.

A full analysis of these conditions is beyond the scope of this work, but what is relevant here is that even in sub-optimal conditions (as is usually the case), feedback from non-specialists on predicted outcomes sets limits to what technical discourses can get away with in terms of semantic inconsistency, incoherence or vacuity. The absence of this feedback mechanism in philosophy (essentially due to the discipline’s purely discursive methodology, with the consequences described above) leaves it with little defence against cognitive bias and institutionalized conformity.

 

Conclusion

One significant practical implication of philosophy’s semantic indeterminacy is the restricted scope of its contribution to the public sphere. A formal philosophical education can impart valuable general skills of reasoning, argument and textual research, as well as some incidental knowledge (for example, of various logical fallacies), but it does not grant the philosopher a privileged general epistemic status (for instance, as a ‘moral expert’) over non-philosophers. Stanley Fish has written eloquently on this issue, in a statement which also sums up the main point of this thesis:

Now it could be said (and some philosophers will say it) that the person who deliberates without self-conscious recourse to deep philosophical views is nevertheless relying on or resting in such views even though he is not aware of doing so. To say this is to assert that doing philosophy is an activity that underlies our thinking at every point, and to imply that if we want to think clearly about anything we should either become philosophers or sit at the feet of philosophers. But philosophy is not the name of, or the site of, thought generally; it is a special, insular form of thought and its propositions have weight and value only in the precincts of its game.

Points are awarded in that game to the player who has the best argument going (‘best’ is a disciplinary judgment) for moral relativism or its opposite or some other position considered ‘major’. When it’s not the game of philosophy that is being played, but some other: energy policy, trade policy, debt reduction, military strategy, domestic life, grand philosophical theses like ‘there are no moral absolutes’ or ‘yes there are’ will at best be rhetorical flourishes; they will not be genuine currency or do any decisive work. (Fish 2011)

Philosophers often have insightful things to say, and have made significant intellectual contributions to a wide range of disciplines. But the question before us is whether such insights result from doing philosophy, or simply from the exercise of general intellectual aptitudes. After all, philosophers are people; and some people are wise and insightful. But they are not so by virtue of doing philosophy.

 

Bibliography

Aronson, E., and J. Mills. ‘The Effect of Severity of Initiation on Liking for a Group.’ Journal of Abnormal and Social Psychology 59 (1959): 177-181.

Asch, S. E. ‘Effects of Group Pressure Upon the Modification and Distortion of Judgment.’ In H. Guetzkow (ed.) Groups, Leadership and Men. (Pittsburgh, PA: Carnegie Press, 1951).

Ayer, A. J. Interview with Bryan Magee for the television programme Logical Positivism and its Legacy. (London: British Broadcasting Corporation, 1976).

Ayer, A. J. Language, Truth and Logic. (London: Penguin, 2001).

Ayer, Alfred, and Rush Rhees. ‘Symposium: Can There be a Private Language?’ Proceedings of the Aristotelian Society, Supplementary Volumes 28 (1954): 63-94.

Benson, Buster. ‘Cognitive Bias Cheat Sheet.’ Better Humans, 2 September 2016.

Brittan, Arthur. ‘Sociology as a Private Language.’ Poetics Today 4 (1983): 581-594.

Cavell, Stanley. The Claim of Reason: Wittgenstein, Skepticism, Morality, and Tragedy. (New York: Oxford University Press, 1999).

Fish, Stanley. ‘Does Philosophy Matter?’ Opinionator-NYTimes.com, 1 August 2011. http://opinionator.blogs.nytimes.com/2011/08/01/does-philosophy-matter/ (accessed November 26, 2011).

Graham, Paul. ‘How to do Philosophy.’ 2007. http://paulgraham.com/philosophy.html (accessed April 26, 2014).

Jackson, Frank. From Metaphysics to Ethics: A Defence of Conceptual Analysis. Oxford: Clarendon Press, 1998.

MacKinnon, Edward. ‘Scientific Realism: The New Debates.’ Philosophy of Science 46 (1979): 501-532.

Milgram, S. ‘Behavioral Study of Obedience.’ Journal of Abnormal and Social Psychology 67 (1963): 371-378.

Papineau, D. ‘The Poverty of Analysis.’ Aristotelian Society Supplementary Volume 83 (2009): 1-30.

Passmore, J. ‘Logical Positivism.’ In P. Edwards (Ed.). The Encyclopedia of Philosophy, Vol. 5, 52-57. (New York: Macmillan, 1967).

Rosen, Stanley. ‘The Metaphysics of Ordinary Experience.’ Harvard Review of Philosophy 5 (1995): 41-57.

Russell, Bertrand. My Philosophical Development. (London: Allen & Unwin, 1959).

Ryle, Gilbert. The Concept of Mind. (London: Hutchinson, 1949).

Searle, John. ‘Minds, Brains, and Programs.’ Behavioral and Brain Sciences 3 (1980): 417-424.

Williamson, Timothy. ‘Philosophical “Intuitions” and Scepticism about Judgement.’ Dialectica 58 (2004): 109-153.

Wittgenstein, Ludwig. Philosophical Investigations. (New York: Macmillan, 1953).

Wittgenstein, Ludwig. Tractatus Logico-Philosophicus. (London: Routledge & Kegan Paul, 1961).

License

Why Philosophy Fails: A View From Social Psychology Copyright © Ben Gibran. All Rights Reserved.
