Thursday, 4 February 2016

Workshop on Belief and Emotion



On Friday 27th November, project PERFECT (Department of Philosophy), together with the Aberrant Experience and Belief research theme (School of Psychology), held a mini-workshop on the topic of Belief and Emotion. In this post I summarise the three talks given by Allan Hazlett, Neil Levy, and Carolyn Price.

Allan Hazlett opened the workshop with his paper ‘On the Special Insult of Refusing Testimony’. He argued that refusing someone’s testimony (i.e. not believing what someone tells you) is insulting, and that to express such refusal amounts to a special kind of insult. Understanding telling as an attempt to engage in information sharing, Hazlett suggested that in telling someone that p, I am asking that person to believe that p because I believe it. Refusing my testimony insults me because it constitutes the person's not trusting me. Hazlett concluded by asking why it is that not trusting would be insulting. He canvassed four ideas to answer this question: lack of trust (1) undermines intimacy, (2) undermines solidarity, (3) implies non-competence, and (4) constitutes non-acceptance. Elaborating on (3), Hazlett argued that trusting requires an attitude about someone’s competence, specifically about their reliability (tendency to believe truths) and their sincerity (tendency to be honest).

Neil Levy continued the workshop with his paper ‘Have I Turned the Stove Off? Explaining Everyday Anxiety’. He was interested in a certain kind of discordancy case, namely, neurotic anxiety. His focus was on the case of Joe, who believes that he turned the stove off, but nevertheless finds himself wondering ‘have I turned the stove off?’ Joe cannot be sure he turned it off, or that the apparent memory of so doing has the correct time stamp (since he cooks every morning). This case is one of discordancy since Joe engages in action which is contrary to his belief (i.e. ruminating on the matter). Levy argued against several interpretations of this case: its being a credence case (Joe has a low, non-negligible credence that he did not turn the stove off), its being an in-between belief case (Joe in-between believes that he did not turn the stove off), its being an alief case (Joe alieves that he did not turn the stove off), and its being a metacognitive error case (Joe imagines that he did not turn the stove off). Levy finished by sketching an alternative account according to which such a case is one in which the relationship between the representation and action is deviant, in its being mediated by anxiety.

Carolyn Price closed the workshop with her paper ‘Emotion, Perception, and Recalcitrance’. Price was interested in the phenomenon of recalcitrant emotion, that is, the fact that sometimes our considered judgements and our emotional responses are in tension with one another. She discussed the comparison of cases of emotional recalcitrance with cases of recalcitrant perception (conflict between judgement and perception), specifically optical illusions. She noted that this comparison has been used to support the claim that emotion is a form of perception. However, a problem with this view is that recalcitrant emotions are judged to be irrational, whereas recalcitrant perceptions are not. Price argued that though this thought seems right, it raises a puzzle: if emotions are not judgements, why is it that we think recalcitrant emotions are irrational? She proposed an answer to this question which appealed to emotions and judgements answering to different standards of evidence.

Tuesday, 2 February 2016

Cognitive Bias, Philosophy and Public Policy


This post is by Sophie Stammers (pictured above), PhD student in Philosophy at King’s College London. Here she writes about two policy papers, Unintentional Bias in Court and Unintentional Bias in Forensic Investigation, written as part of a recent research fellowship at the Parliamentary Office of Science and Technology, supported by the Arts and Humanities Research Council.

The Parliamentary Office of Science and Technology (POST) provides accessible overviews of research from the sciences, prepared for general parliamentary use, many of which are also freely available. My papers are part of a recent research stream exploring how advances in science and technology interact with issues in crime and justice: it would seem that if there is one place where unbiased reasoning and fair judgement really matter, then it is in the justice system.

My research focuses on implicit cognition, and in particular, implicit social bias. I am interested in the extent to which implicit biases differ from other cognitions—whether implicit biases are a unified, novel class, or whether they may be accounted for by the cognitive categories to which we are already committed. So I was keen to get a flavour of how the general topic might be applied in a public policy environment, working under the guidance of POST’s seasoned science advisors.

Monday, 1 February 2016

Beliefs that Feel Good Workshop


On December 16th and 17th, the Cognitive Irrationality project hosted a workshop on beliefs that feel good, organized by Marie van Loon, Melanie Sarzano and Anne Meylan. This very interesting event dealt with beliefs that feel good but are epistemically problematic in some way, as well as with beliefs for which this is not the case.

While the majority of talks and discussions focused on problematic cases, such as wayward believers, self-deceptive beliefs and unrealistically optimistic beliefs, there was also a discussion of epistemic virtues and the relation between scepticism and beliefs in the world. Below, I summarize the main points made in the talks.

Quassim Cassam probed the question why people hold weird beliefs and theories. Some examples are the theory that the moon landings were faked, or that 9/11 was in reality an inside job. Quassim argued that more often than not, these types of beliefs stem from epistemic vices. In as far as these vices are stealthy, i.e. not apparent to the person who has them, it can be very hard to persuade people of the falsity of their opinions, as these vices reinforce and insulate the wayward beliefs. While it can happen that these wayward believers suddenly come to realize the inadequacy of their belief system in an act of deep unlearning, this is normally not something we can bring about by means of reasoning and persuasion.

In a talk that straddled the disciplines of philosophy, psychology and neuroscience, Federico Lauria put forward an account of self-deception as affective coping. He argued that we self-deceive in order to avoid painful truths and that a full explanation of self-deception needs to account for affective evaluation of information. He argued that self-deceptive individuals evaluate evidence that their desire is frustrated in the light of three appraisals: (i) ambiguity of the evidence, (ii) negative impact on one's well-being (mediated by somatic markers), and (iii) low coping potential. In addition, self-deceptive people evaluate the state of desire satisfaction positively, which is mediated by increased dopaminergic activity. The affective conflict between truth and happiness is at the heart of self-deception.

In my own talk, I asked whether unrealistic optimism is irrational. I argued that unrealistically optimistic beliefs are frequently epistemically irrational because they result from processes of motivated cognition which privilege self-related desirable information while discounting undesirable information. However, unrealistically optimistic beliefs can be pragmatically rational in as far as they lead individuals to pursue strategies that make a desired outcome more likely. Interestingly, where they are pragmatically rational, they are also less epistemically irrational, as the likelihood of success increases. However, this does not mean that they become realistic or justified. They merely become less unwarranted than they would otherwise have been.

Susanna Rinard proposed a novel response to scepticism, arguing that even if we concede that ordinary beliefs are not well supported by our evidence, it does not follow that we should give up on everyday beliefs. Rather, we can run the risk of having false beliefs about the external world, because suspending belief incurs a reverse cost: that of not having true beliefs. Furthermore, suspending judgement may come with additional costs, such as being effortful and unpleasant.

Finally, Adam Carter gave an account of insight as an epistemic virtue, drawing both on virtue epistemology and the psychology of insight problem solving. Insight experiences are characterized by a certain ‘aha’ moment in which relations between states of affairs previously seen as unrelated strike the individual. Adam argued that the virtue of insightfulness lies both in cultivating the antecedent stages of preparation and incubation of insights described in the empirical literature, and in moving from insight experiences to judgement and/or verification in a considered way, i.e. not blindly endorsing the result of every insight experience but giving it due weight.

Every talk was followed by a response and lively discussion. Thanks to Anne, Melanie and Marie for organizing a workshop which was both intellectually stimulating and friendly.

Thursday, 28 January 2016

Neuroscience and Responsibility Workshop

This post is by Benjamin Matheson, Postdoctoral Fellow at the University of Gothenburg, working on the Gothenburg Responsibility Project. (Photos of workshop participants are by Gunnar Björnsson).

The workshop on ‘Neuroscience and Responsibility’, part of the Gothenburg Responsibility Project, took place on 14 November 2015. The conference was well attended, the talks were informative, and the discussion was lively and productive.

Michael Moore (Illinois) kicked things off with his talk ‘Nothing But a Pack of Neurons: Responsible Reductionism About the Mental States that Measure Moral Culpability’. Part of Moore’s current project is to show that reductionism (roughly, the view that mental states are just brain states) is not a threat to our responsibility practices – that is to say, we can still be morally and legally responsible even if mental states reduce to brain states. The worry is that if mental states reduce to brain states, then it is not us but rather our brain states that are responsible for our actions – that is, we are ‘nothing but a pack of neurons’ and not the appropriate loci for attributions of responsibility. Moore argued that this worry is ill-founded. He emphasised that reducibility does not imply non-existence, and that the worry that the truth of reductionism would imply responsibility scepticism stems from mistakenly conflating reductionism with eliminativism, the view that mental states do not exist. Moore ended by noting that the same move can assuage worries that neuroscientific discoveries are or might be a threat to our responsibility practices.