Centre for Cognitive Science (COGS)

Summer 2020

Tuesdays 16:00-17:30

Date | Seminar | Venue

Jun 23

The Truth about Free Will
Prof. Ray Tallis
Manchester

Abstract: The talk will critically examine the arguments against free will.
It will begin with a brief statement of the standard argument for determinism: that we and our actions are part of the law-governed, causally closed material world. I will then proceed to a critical examination of the neurodeterminism supposedly justified by the studies of Libet and John-Dylan Haynes.
The rest of the talk will focus on the positive case for free will, based on an examination of the distinctive nature of real actions in everyday life. Tensed time, intentionality, and explicit possibility, which are central to voluntary behaviour, set actions apart from other events in the natural world. Our undeniable capacity to discover the laws of nature, to identify causes, and then to exploit them to deliver our ends is further hard evidence of our privileged position in the order of things; of a distance from the law-governed, causally closed universe that permits our margin of freedom.

Biography: Raymond Tallis is a philosopher, poet, novelist and cultural critic, and a retired physician and clinical neuroscientist. He ran a large clinical service at Hope Hospital, Salford, and an academic department at the University of Manchester. His research focussed on epilepsy, stroke, and neurological rehabilitation.

Zoom

https://universityofsussex.zoom.us/j/94508499194

Jun 30

How to Build a Conscious Machine
Keith Frankish
Crete

Abstract: Is the project of creating artificial consciousness a feasible one? It depends on what we think consciousness is. If we adopt a qualitative view of consciousness, then, I shall argue, the project is not feasible, even in principle. Even if there could be artificial consciousness, we could not deliberately set out to create it; we'd have no idea what to do or how to tell if we had succeeded. If we adopt a functional view of consciousness, on the other hand, then the project is feasible; we can see how to proceed, at least in outline. The functional view has a consequence, however: namely, it denies the existence of the supposed qualitative properties of experience, qualia. Functionalists must say that qualia are illusory. Many people regard this view as untenable and self-defeating, but I shall argue that, properly understood, it is a coherent and attractive one. The first step in building a conscious robot is to adopt an illusionist theory of consciousness.

Zoom

https://universityofsussex.zoom.us/j/92008829270

Jul 7

Don't Ask: Classification in Comparative Cognitive Science
Ali Boyle
Leverhulme Centre for the Future of Intelligence/Bonn

Abstract: Many projects in comparative cognitive science (by which I mean research in both comparative psychology and artificial intelligence) are structured around what I’ll call ‘classificatory questions’ – that is, questions about whether nonhuman cognitive systems have the same cognitive capacities as humans. These projects often generate unproductive, apparently verbal disputes about how cognitive capacities should be delineated. In part because of this, some researchers have argued that we should stop asking classificatory questions, and instead adopt a ‘bottom-up’ approach focussed on cognitive mechanisms. Against this, I offer a defence of classificatory projects – arguing, first, that bottom-up approaches raise many of the same difficult questions about the delineation of cognitive capacities, and second, that these questions can be addressed once we recognise that researchers’ theoretical interests play a role in delineating the objects of study. On this view of things, apparently verbal disagreements may reflect deeper disagreement about why we are engaged in classificatory projects. So, this defence of classificatory projects in comparative cognitive science comes with a qualification: researchers can’t sensibly pursue classificatory projects for their own sake, but only to satisfy some further theoretical interest.

Zoom

https://universityofsussex.zoom.us/j/97897408661

Jul 14

Protein computation and its implications
Tom Khabaza
Freelance

Abstract: In the brain, the chemical processes in post-synaptic proteins provide a great deal of computation. These processes have inherent properties similar to Piaget’s concepts of assimilation and accommodation, and models based on these processes are an important step forward for cognitive science and AI. This presentation introduces the topic of protein computation, and describes the building blocks and future directions for a new model of learning and cognition inspired by protein computation processes and by the research of Seth Grant and his team. The low-level properties of protein computation make the model naturally adaptive, distributed, resilient and exploratory. Protein computation models have both technical and ethical implications for the future of AI.

Zoom

https://universityofsussex.zoom.us/j/91140256007

Jul 21

Minding the Moral Gap in Human-Machine Interaction
Shannon Vallor
Edinburgh

Abstract: Given the enduring challenges of interpretability, explainability, fairness, safety, and reliability of machine learning systems, as well as expanding legal and ethical constraints imposed on such systems by regulators and standards bodies, AI/ML systems deployed in many high-stakes decision contexts will, for the foreseeable future, be required to operate under human oversight, often called ‘meaningful human control’. Such oversight is increasingly demanded in a broad range of application areas, from medicine and banking to military uses. However, this reassuring phrase conceals grave difficulties. How can humans control or provide effective oversight for ML system operations or machine outputs for which human supervisors lack deep understanding, an understanding often precluded by the very same causes (speed, complexity, opacity and non-verifiability of machine reasoning) that necessitate human supervision in the first place? This quandary exposes a gap in AI safety and ethics governance mechanisms that existing methods are unlikely to close. In this talk I explore two dimensions of this gap which are frequently underappreciated in research on AI safety, explainable AI, or ‘human-friendly AI’: the absence of a capacity for ‘moral dialectic’ between human and machine experts, and the absence of an affective dimension to machine reasoning.

Zoom

https://universityofsussex.zoom.us/j/99931340072

Contact COGS

For suggestions for speakers, contact Simon Bowes.

For publicity and questions regarding the website, contact Simon Bowes.

Please mention COGS and COGS seminars to all potentially interested newcomers to the university.

A good way to keep informed about COGS Seminars is to be a member of COGS. Any member of the university may join COGS and the COGS mailing list by using the subscription form at .

Follow us on Twitter:
