
Centre for Cognitive Science (COGS)

Spring 2021

Summer 2021

Tuesdays 16:00-17:30

Date | Seminar | Venue

Feb 2

As Soon as There Was Life There Was Danger
Joseph LeDoux
New York

Abstract: Organisms face challenges to survival throughout life. When we freeze or flee in danger, we often feel fear. Tracing the deep history of danger gives a different perspective. The first cells living billions of years ago had to detect and respond to danger in order to survive. Life is about not being dead, and behavior is a major way that organisms hold death off. Although behavior does not require a nervous system, complex organisms have brain circuits for detecting and responding to danger, the deep roots of which go back to the first cells. But these circuits do not make fear, and fear is not why we freeze or flee. Fear is a human invention: a construct we use to account for what happens in our minds when we become aware that we are in harm's way. This requires a brain that can personally know that it existed in the past, that it is the entity that might be harmed in the present, and that it will cease to exist in the future. If other animals have conscious experiences, they cannot have the kinds of conscious experiences we have because they do not have the kinds of brains we have. This is not meant as a denial of animal consciousness; it is simply a statement of the fact that every species has a different brain. Nor is it a declaration about the wonders of the human brain, since we have done some wonderful, but also horrific, things with our brains.

Zoom

https://universityofsussex.zoom.us/j/94304123905/

Feb 16

Chemistry vs neurones: pre- and post-natal (or post-hatching) spatial intelligence, in chickens, foals, and mathematicians!
Prof Aaron Sloman
Birmingham

Abstract: I'll present a new, biology-based line of defence for an expanded version of Immanuel Kant's anti-Hume view of ancient discoveries in geometry, with implications regarding spatial consciousness in humans and other animals, including requirements for spatial intelligence in newly-hatched or newborn animals with sophisticated but unlearned competences available soon after hatching or birth: chicks that walk, peck for food, and follow a hen; and foals that, within hours of birth, can walk to a mare to suckle and run with the herd to escape a predator, without any time to learn how to find the nipple, how to suck, or how to run while avoiding obstacles.

Many animals need complex spatial competences before they have had time to learn them. I'll suggest (following Kant) that those competences, including some spatial reasoning competences in young humans that also support ancient mathematical competences, cannot be based on the varieties of formal reasoning used in modern mathematics and logic-based theorem provers, nor on learning by statistics-based neural nets, since the latter are incapable of discovering, or even representing, mathematical impossibility or necessity. No amount of statistical evidence can prove either impossibility or necessity.

Instead of neural nets, biological evolution seems to have found ways of re-using *chemical* control mechanisms required for increasingly complex assembly processes in developing organisms. These mechanisms differ across different types of organism, e.g. microbes, plants of many types (including grasses, climbers, and giant redwood trees), insects, and egg-laying and live-bearing vertebrates, and they also differ over time within individuals.

As complexity in a developing pre-natal/unhatched organism increases, the complexity of the chemical control mechanisms used in assembling and connecting new components must also increase. Later mechanisms, instead of using only direct chemical reactions (which suffice for the earliest processes based on DNA), will increasingly need to use *information* about what has been constructed so far and what needs to be added or modified next. For much of the time, only chemical information is available. (Compare the uses of information in extending Meccano or Tinkertoy constructions.)

At later stages of development, both the information and the construction mechanisms become more complex, performing more demanding tasks, e.g. creating a skeleton, muscles, circulatory systems, bones, nervous systems, digestive systems, and increasingly complex control mechanisms, which must be chemistry-based before brains are available.

In control processes that precede the construction of brains (and are needed for that construction), the information processing mechanisms cannot be neural, because neural mechanisms don't yet exist.

The only alternative is the use of chemical control. Developing organisms will require increasingly complex chemical information processing mechanisms as the complexity of the new organism increases. The intelligence of newly hatched or newborn animals must therefore be chemistry-based. I'll report on some surprising implications of these ideas and identify work still to be done, requiring deep multi-disciplinary collaboration. Among the implications are limitations of neural nets that collect statistics and derive probabilities: they are incapable of replicating or explaining ancient mathematical competences and related forms of spatial reasoning.

These ideas also point to gaps in the work of Penrose and Hameroff on mathematical consciousness, e.g. as recently explained in their presentations in January 2020: https://www.youtube.com/watch?v=xGbgDf4HCHU. They seem to ignore the possible roles of ancient chemistry-based information processing mechanisms in brains and in organisms without brains. Hameroff emphasises microtubules in brains, but microtubules are involved in early gene expression processes long before brains are created. Perhaps they are most important in information processing mechanisms preceding production of brains?

For more information and ideas for further developments, see the 'chemneuro' web page -- which will be expanded after the talk: https://www.cs.bham.ac.uk/research/projects/cogaff/misc/sloman-chemneuro-sussex.html

Zoom

https://universityofsussex.zoom.us/j/91573951777

Mar 2

Assessing the presence of consciousness in non-human systems
Henry Shevlin
Cambridge

Abstract: The scientific study of consciousness has made considerable progress in the last three decades, notably among cognitive theories of consciousness such as the Global Neuronal Workspace account, Higher-order Thought theory, and Attention Schema theory. While such theories are typically concerned with identifying correlates of conscious and unconscious processing in human beings, in light of heightened recent interest in the evolution of consciousness and in determining the presence of consciousness in animals and even artificial systems, a key question for researchers is whether and how we can apply these frameworks to non-human subjects. In this talk, I review the prospects of this endeavour and discuss some challenges. I focus in particular on what I call the Specificity Problem, which concerns how we can determine an appropriate level of fineness of grain to adopt when moving from human to non-human cases. In light of this and other problems, I argue that most theories of consciousness currently lack the theoretical resources to allow for their straightforward application to non-humans. I go on to consider whether the 'Theory-Light' approach to non-human consciousness recently developed by Jonathan Birch (forthcoming, Noûs) might constitute a plausible alternative method for assessing consciousness in non-human cases. While it has impressive utility, I suggest it is unlikely to give clear answers to all the important examples we may be interested in, especially in non-biological systems. Finally, I argue for a Modest Theoretical Approach that aims to find a middle ground between the two strategies, combining behavioural and theoretical methods to offer a powerful yet robust approach to the problem. Full paper available at: http://dx.doi.org/10.1111/mila.12338

Zoom

https://universityofsussex.zoom.us/j/97295986825

Apr 27

Towards a quantitative understanding of high-order phenomena in neural systems
Dr Fernando E Rosas
Imperial

Abstract: While the notions of synergy and emergence suggest promising avenues for tackling problems such as the mind-body relationship, they have been as much a cause of wonder as a perennial source of philosophical headaches. Part of the difficulty in deepening our understanding of these subjects lies in the absence of simple analytical models and clear metrics that could help the community guide discussions and mature theories. In this talk we present practically useful approaches to capture synergistic and emergent phenomena in multivariate systems, and discuss their applicability in scenarios of interest.

The talk first introduces concepts and metrics to capture synergistic phenomena, understood as statistical regularities that involve the whole but not the parts of a system. We then discuss extensions of these principles to dynamical scenarios, which allow us to establish the Integrated Information Decomposition (ΦID) framework. We show how ΦID allows us to better understand well-known metrics of dynamical complexity such as integrated information, and suggest refinements that appear to improve practical effectiveness. Additionally, ΦID lets us capture high-order dynamical phenomena that have not been reported in the literature, which can be used as a foundation to formalise causal emergence. We discuss how these ideas and tools enable new ways of studying higher brain functions, which lie in a middle ground between computational and dynamical-systems approaches.
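As a rough, illustrative aside (not taken from the talk): the intuition behind synergy as "statistical regularities that involve the whole but not the parts" shows up clearly in a toy XOR system, where the two inputs together fully determine the output while each input alone carries no information about it. The Python sketch below computes a simple whole-minus-sum quantity for that toy case; the function names, the XOR example, and the choice of measure are assumptions made for illustration and are not the ΦID machinery presented in the talk.

```python
import numpy as np
from itertools import product

def entropy_bits(probs):
    """Shannon entropy (in bits) of a collection of probabilities."""
    p = np.asarray(list(probs), dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def marginal(joint, keep):
    """Marginalise a dict {state_tuple: prob} onto the variable indices in `keep`."""
    out = {}
    for state, prob in joint.items():
        key = tuple(state[i] for i in keep)
        out[key] = out.get(key, 0.0) + prob
    return out

def mutual_information(joint, a, b):
    """I(A; B) in bits, where a and b are tuples of variable indices."""
    return (entropy_bits(marginal(joint, a).values())
            + entropy_bits(marginal(joint, b).values())
            - entropy_bits(marginal(joint, a + b).values()))

# Toy system: Y = XOR(X1, X2), with X1 and X2 independent fair coin flips.
# Variable indices: 0 -> X1, 1 -> X2, 2 -> Y.
joint = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product([0, 1], repeat=2)}

whole = mutual_information(joint, (0, 1), (2,))                     # I(X1,X2; Y) = 1 bit
parts = sum(mutual_information(joint, (i,), (2,)) for i in (0, 1))  # I(X1;Y) + I(X2;Y) = 0
print(f"whole-minus-sum: {whole - parts:.2f} bits")                 # 1.00: purely synergistic
```

For the XOR case the whole-minus-sum value is 1 bit: all of the information about the output is carried jointly, none by the parts, which is the kind of phenomenon the measures discussed in the talk are designed to capture more rigorously.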

Zoom

May 4

Toward Broad and Deep Language Processing for Intelligent Systems
Dr Marjorie McShane
Rensselaer Polytechnic Institute

Abstract: The early vision of AI included the goal of endowing intelligent systems with human-like language processing capabilities. This proved harder than expected, leading the vast majority of natural language processing practitioners to pursue less ambitious, shorter-term goals. Whereas the utility of human-like language processing is unquestionable, its feasibility is quite justifiably questioned. In this talk, I will not only argue that some approximation of human-like language processing is possible, but will also present a program of R&D that is working to make it a reality. This vision, as well as progress to date, is described in the book Linguistics for the Age of AI (MIT Press, 2021), whose digital version is open access through the MIT Press website.

Bio: Dr. McShane is a faculty member of the Cognitive Science Department at Rensselaer Polytechnic Institute and Co-director of the Language-Endowed Intelligent Agents (LEIA) lab. She earned her PhD from Princeton University and has since been working on computational linguistics and intelligent agent modeling.

Zoom

May 18

Panpsychism as a Theory of Consciousness
Dr Philip Goff
Durham

Abstract: What do we need out of a theory of consciousness? I will argue that the task of accounting for consciousness is partly experimental and partly philosophical. The experimental task is to establish the neural correlates of consciousness. The philosophical task is to choose among the various theories philosophers have formulated for explaining why brain activity is correlated with consciousness: materialism, dualism, panpsychism, etc. These theories should aim to: (A) fit the empirical data, (B) eliminate explanatory gaps, (C) be as simple as possible. On the basis of this methodology, I will argue that panpsychism is the most plausible philosophical component of a theory of consciousness.

Zoom

May 25

The mechanism of introspection
Wayne Wu
Carnegie Mellon

Abstract: With regard to phenomenal consciousness, appearance is reality: the phenomenal appearance of experience just is its phenomenal reality. For this reason, we treat introspection as authoritative about conscious phenomena. In reporting what our experience is like, we report how consciousness appears to us, and so how it is. From the empirical point of view, however, we still do not know how introspection works. There are few cognitive models of introspection and no consensus. Yet introspection is a detection and discrimination capacity, and so is subject to context-varying noise. Accordingly, it is an open question when introspection is reliable and when it is not. Theorists of consciousness are using an uncalibrated method of measurement to ground their explanations, a precarious position for a science. In this talk, I present a theory of the introspective process that emphasizes the role of attention in guiding response. The mechanism is neither fancy nor inscrutable. It allows us to delimit clear cases of introspective reliability and introspective unreliability. Unfortunately, the window of concrete reliability is narrow. Many claims about consciousness are arguably based on demonstrably unreliable introspection. I illustrate this in three well-known cases: inattentional blindness, consciousness and the dorsal stream as vision for action, and the rubber hand illusion (there are others). In each case, I show how introspection is not reliable and how that has led us astray in our conclusions. I pose questions about how the science of consciousness should proceed.
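As a small, hypothetical illustration (not part of the talk) of the point that a detection and discrimination capacity is subject to context-varying noise, the sketch below computes the standard signal-detection sensitivity index d' from hit and false-alarm rates under an equal-variance Gaussian model. The example rates are invented: they simply show how the same underlying judgement becomes less reliable as noise grows.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' for an equal-variance Gaussian signal-detection model."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical numbers: the same judgement made under low vs. high noise.
print(f"low noise : d' = {d_prime(0.95, 0.05):.2f}")   # ~3.29, highly reliable
print(f"high noise: d' = {d_prime(0.70, 0.30):.2f}")   # ~1.05, much less reliable
```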

Zoom

Contact COGS

For suggestions for speakers, contact Simon Bowes

For publicity and questions regarding the website, contact Simon Bowes.

Please mention COGS and COGS seminars to all potentially interested newcomers to the university.

A good way to keep informed about COGS Seminars is to be a member of COGS.  Any member of the university may join COGS and the COGS mailing list by using the subscription form at .

Follow us on Twitter: 

https://universityofsussex.zoom.us/j/94508499194