Previous Whitehead lectures
Summer 2024
How Humans Came to Construct Their Worlds
Michael Arbib (UC San Diego and USC)
Time: 24 April 2024, 4pm
Location: Goldsmiths Cinema, RHB
A Zoom link is available after registering.
At a global level, Homo sapiens has reshaped the planet Earth to such an extent that we now talk of a new geological age, the Anthropocene. But each of us also shapes our own worlds: physically, symbolically, and in the worlds of imagination. This Symposium focuses especially on one form of construction, the construction of buildings, while stressing that such construction is always shaped by diverse factors, from landscape to culture to the history embodied in it, and more.
Please see more about the How Humans Came to Construct Their Worlds event.
Spring 2024
The Neurocognition of Liveness
Guido Orgs, UCL
Time: 28 Feb 2024, 4pm
What makes live experiences special? Liveness is a central feature of music concerts, dance performances and theatre plays, but it is also relevant to political rallies, sporting events and meeting online. In this talk, I will discuss some of the theoretical and practical challenges for a neuroscience of liveness. I propose that live experiences can be conceptualized and measured as a form of sustained entrainment between the minds, brains, and bodies of at least two people in a defined here and now.
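The abstract does not commit to a particular measure, but one common way to operationalise entrainment between two people is phase synchrony between their band-passed signals. A minimal sketch (an illustration, not the speaker's method), assuming SciPy and the standard phase-locking value:

```python
# Sketch: phase-locking value (PLV) between two signals as a proxy for
# "sustained entrainment". Assumes SciPy; the band and signals are made up.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(1.0, 4.0)):
    """PLV between x and y within a frequency band (Hz); 1 = locked, 0 = none."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))  # instantaneous phase of x
    phy = np.angle(hilbert(filtfilt(b, a, y)))  # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

# Toy example: two noisy 2 Hz oscillations with a fixed phase lag.
fs = 100
t = np.arange(0, 30, 1 / fs)
x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 2 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(f"PLV: {phase_locking_value(x, y, fs, band=(1, 4)):.2f}")  # high despite noise
```

Sustained entrainment could then be assessed by tracking such a measure over sliding windows across a performance.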
Please see more about The Neurocognition of Liveness event.
Bridging between micro and macro-level cultural dynamics of music: Advanced online experiments and big data
Manuel Anglada Tort, Psychology
Time: 13 March 2024, 4pm
Music, like language, is a complex cultural system that arises from the interplay of human biology, cognition, and cultural transmission. However, explaining how these processes contribute to the emergence of musical systems remains a key challenge. In this talk, I will present research that addresses this gap by leveraging recent advances in computational and experimental techniques.
For more information on this event, see the full event listing.
Latent Diffusion: Collaborator, Muse or Villain
Ira Greenberg (SMU, Dallas)
Time: 20 March 2024, 4pm
Latent Diffusion (‘LD’), presented as a recent advance in image generation, can be viewed as a logical extension of generative art, which dates back to the 1950s. Generative art, through the use of random algorithms, can approximate emergent properties of analog materials and practices. Prior to LD, most generative art was limited to mathematically derived geometric output, as algorithms to describe more figurative/natural forms exceeded most artists’ capabilities.
Please see more about the Latent Diffusion: Collaborator, Muse or Villain event.
Emergence in the human brain: theory, practice, and opportunities
Fernando Rosas (University of Sussex)
21 Jun 2023, 5pm
Abstract
Emergence is a profound subject that straddles many scientific scenarios and disciplines, including how galaxies are formed, how flocks and crowds behave, and how human experience arises from the orchestrated activity of neurons. At the same time, emergence is a highly controversial topic, surrounded by long-standing debates and disagreements on how best to understand its nature and its role within science. A way to move forward in these discussions is provided by formal approaches to quantifying emergence, which give researchers frameworks to guide discussions and advance theories, as well as quantitative tools to rigorously establish conjectures about emergence and test them on data. This talk presents an overview of the theory and practice of these formal approaches to emergence, and highlights the opportunities they open for studying the relationship between the brain and the mind. It also presents illustrative examples of the application of principles of emergence to practical data analysis, discussing several key interpretation issues and avenues for future work.
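To make "formal approaches to quantifying emergence" concrete, here is a toy version of one criterion from this literature (the mutual-information test proposed by Rosas and colleagues, as commonly summarised): a macro variable V is a candidate emergent feature if it predicts its own future better than the micro parts do individually. The parity dynamics below are a hypothetical example, not data from the talk:

```python
# Sketch of a simple emergence criterion: psi = I(V_t; V_t+1) - sum_i I(X_i,t; V_t+1).
# psi > 0 suggests the macro variable carries predictive information that no
# single micro part carries. All data here are synthetic.
import numpy as np
from collections import Counter

def mi(a, b):
    """Mutual information (nats) between two discrete sequences."""
    n = len(a)
    pab, pa, pb = Counter(zip(a, b)), Counter(a), Counter(b)
    return sum(c / n * np.log((c / n) / (pa[x] / n * pb[y] / n))
               for (x, y), c in pab.items())

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(10000, 3))   # micro: three random binary parts
V = X.sum(axis=1) % 2                     # macro: their parity
Xn = rng.integers(0, 2, size=(10000, 3))  # next micro state: random...
Xn[:, 0] = (V + Xn[:, 1] + Xn[:, 2]) % 2  # ...except the parity is preserved
Vn = Xn.sum(axis=1) % 2                   # so Vn == V, yet each part looks random

psi = mi(V, Vn) - sum(mi(X[:, i], Vn) for i in range(3))
print(f"psi = {psi:.3f}  (> 0: macro-level prediction exceeds the micro parts)")
```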
Biography
Fernando Rosas works as a lecturer at the University of Sussex and as a research fellow at Imperial College London and the University of Oxford. His work focuses on the development of conceptual and computational tools to better understand the collective dynamics of complex systems. His background includes studies in mathematics, physics, philosophy, and music composition.
Possibility Architectures: Exploring Human Communication with Generative AI
Simon DeDeo (Carnegie Mellon University and Santa Fe Institute, USA)
24 May 2023, 5pm
Abstract
What we mean is given not just by what we say, but by the things we could have said but didn't. More than that: as we navigate the possibilities before us, we conjure, at a distance, the ones we will later encounter. These emergent architectures, the product of cultural evolution, are crucial to understanding human communication. They appear as everything from verbal tics and cliché to higher-level figurative constructs such as irony, metaphor, and style. They sculpt possibility space on different timescales in ways that answer to cognitive bottlenecks and resource constraints, and social demands ranging from empathetic collaboration to self-censorship. Applying information-theoretic tools to the outputs of GPT-2, ChatGPT, and BERT, we reveal basic patterns in these possibility architectures. Functional, "Bauhaus" prose from the New York Times, for example, arranges possibilities in dense but predictable ways that are very different from the ill-defined Lévy flights of dream journals and the Gothic structure of Cormac McCarthy's Stella Maris. Our work reveals new scientific possibilities for Generative AI, and new insights into what — for now — makes us uniquely human.
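One basic information-theoretic probe of such "possibility architectures" is the entropy of a language model's next-token distribution at each position in a text: high entropy means many live possibilities, low entropy means cliché. A hedged sketch (an illustration of the general idea, not DeDeo's actual pipeline), assuming the Hugging Face transformers package:

```python
# Sketch: per-position next-token entropy from GPT-2.
# Assumes: pip install torch transformers. The sample sentence is made up.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "It was a dark and stormy night, and the"
ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits  # (1, seq_len, vocab_size)

# Entropy (in bits) of the predicted distribution over the *next* token
# at each position of the input.
log_probs = torch.log_softmax(logits, dim=-1)
entropy = -(log_probs.exp() * log_probs).sum(-1) / torch.log(torch.tensor(2.0))
for tok, h in zip(tokenizer.convert_ids_to_tokens(ids[0]), entropy[0]):
    print(f"{tok!r:>12}  H(next) = {h:.2f} bits")
```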
Biography
Simon DeDeo is a cognitive scientist. He is an Associate Professor at Carnegie Mellon University and External Faculty at the Santa Fe Institute, and he leads the Laboratory for Social Minds.
Linking relational and absolute: Constraints in spatial cognition
Professor Jurģis Šķilters (Dept. of Computing, University of Latvia, Riga, Latvia)
22 Jan 2020, 4:00pm - 6:00pm
137a, Richard Hoggart Building
Abstract
In my talk, I will first explain the Eigenplace function, which links relational (topological and geometric) spatial information with the representation of exact locations in real-world environments. I will argue that topological and geometric relations are constrained by functional ones that are generated in everyday interaction with spatial objects. Experimental evidence regarding these functional constraints will be provided.
In the final part of my talk, I will argue that relational topological units are primary with respect to geometric ones, and will define a topological grouping principle. Some applications in the areas of computation and visual perception will be presented, and some research problems will be briefly discussed.
Biography
Professor Jurģis Šķilters is Chair and Principal Investigator at the Laboratory for Perceptual and Cognitive Systems at the Faculty of Computing, University of Latvia. He is also Director of the Ph.D. Program in Psychology at the University of Latvia.
His main research interests are visual perception and spatial cognition. After obtaining his doctoral degree from the University of Mainz, Germany, he has conducted research and lectured in several countries (Latvia, Italy, the USA, Israel, Norway and Belgium, to mention just a few) and has been a visiting professor in the USA and Italy. During the last 15 years, he has been director or co-director of several large international European and transatlantic research projects. Currently he is a PI in the EU Horizon ITN e-Ladda (Early Language Development in the Digital Age), coordinated by NTNU, Trondheim, Norway.
Since 2004 he has been Editor-in-Chief of The Baltic International Yearbook of Cognition, Logic and Communication. Prof. Šķilters is the founder of the first international cognitive science centre in the Baltic region. In 2019 he was Co-Director of the Programme Committee of the European Summer School in Logic, Language, and Information.
He is author or co-author of more than 100 peer-reviewed publications and volumes. Most recently, in 2019, he co-edited (together with Michael Glanzberg and Nora Newcombe) Events and Objects in Perception, Cognition, and Language; in 2016 (together with Susan Rothstein) Number: Cognitive, Semantic and Crosslinguistic Approaches; and in 2015 (together with Michael Glanzberg and Peter Svenonius) Perspectives on Spatial Cognition (Manhattan, KS: New Prairie Press).
Immersive Virtual Reality: Moral dilemmas, aggression and automatic choices
Sylvia Terbeck, Liverpool John Moores University
29 Jan 2020, 4:00pm - 5:00pm
137a, Richard Hoggart Building
Abstract
In moral psychology, theoretical vignettes are often used to assess moral decision-making. In these studies, people are asked to imagine an emergency situation – for example, killing one person in order to save five – and then to rate how morally acceptable they would regard this action and whether they would be likely to do it. These experiments, however, have low ecological validity and may be confounded by social desirability biases. We re-created the traditional footbridge dilemma in Immersive Virtual Reality (IVR) and found that participants endorsed the action significantly more often, even when they regarded it as morally unacceptable.
Furthermore, when using a robotic manipulandum to simulate realistic force and a 3D human manikin skin to increase realism, individuals still took the utilitarian action (i.e., killed one to save five). It can thus be suggested that humans exhibit moral hypocrisy: placed into the situation, they act differently from what they theoretically regard as morally acceptable. IVR can also be utilised to measure aggression, another psychological construct that is difficult to assess with a questionnaire; in a recent study we developed a novel psychological IVR aggression test. Finally, we have also developed an IVR automated-car scenario in which one is asked to decide whether to kill oneself (an altruistic act) or to help others selflessly. Our theory suggests a more self-centred view of morality in moral action and a stronger social orientation in theoretical moral decisions.
Biography
Dr Sylvia Terbeck is a Senior Lecturer in Psychology at Liverpool John Moores University. Before this, she was a Lecturer in Psychology at Plymouth University. She completed her DPhil in Experimental Psychology in December 2013 at Oxford University. Her research centres on intergroup relations and moral decision-making. She has published a significant number of papers in social neuroscience, as well as studies involving the use of Immersive Virtual Reality. Dr Terbeck has received research funding from the British Academy. She has presented her work in a TEDx talk and is currently the department's public engagement co-ordinator. Dr Terbeck has written a single-authored book entitled “The Social Neuroscience of Intergroup Relations” (Springer, 2016) and has accepted a book contract with Routledge to edit a second book entitled “How to Succeed as a Female Psychologist: A New Examination of the Discipline”.
Can Computers Create Art?
Dr. Aaron Hertzmann (Adobe Research)
2 Oct 2019, 6:00pm - 7:30pm
LG02, Professor Stuart Hall Building
Abstract:
Aaron will discuss whether computers, using Artificial Intelligence (AI), could create art. He will cover the history of automation in art, examining the hype and reality of AI tools for art together with predictions about how they will be used. Aaron will also discuss different scenarios for how an algorithm could be considered the author of an artwork, which, he argues, comes down to questions of why we create and appreciate artwork.
Short Bio:
Aaron Hertzmann is a Principal Scientist at Adobe, Inc., and Affiliate Faculty at University of Washington. He received a BA in computer science and art & art history from Rice University in 1996, and a PhD in computer science from New York University in 2001. He was a Professor at University of Toronto for 10 years, and has also worked at Pixar Animation Studios and Microsoft Research. He has published over 90 papers in computer graphics, computer vision, machine learning, robotics, human-computer interaction, and art. He is an ACM Fellow.
Location details for the event: https://www.gold.ac.uk/find-us/places/professor-stuart-hall-building-psh/
A relevant open-access article by A. Hertzmann:
https://www.mdpi.com/2076-0752/7/2/18
The Experiential Continuum: From Plant Sentience to Human Consciousness
Alfredo Pereira Jr – São Paulo State University (UNESP) and Visiting Researcher at Goldsmiths, University of London
Wednesday 9 October 2019, 4pm, RHB 137a
Abstract: Empirical and theoretical developments in neuroscience and psychology support a concept of the Experiential Continuum, applied to the whole phylogenetic scale, containing three layers (non-conscious, non-conceptual conscious and conceptual conscious experience) and six phases according to the degree of self-awareness (Sentient, Interpretive, Automatized, Thought, Intuitive and Voluntary).
Recent discoveries about plant sensitivity support the attribution of Sentience and Interpretation (biosemiosis) to plants. Automatized processes (for instance, driving a car while attention is focused on a phone conversation) are considered to involve conscious monitoring activity. Thinking can be related to the operation of neuronal networks; on the evolutionary scale, we find evidence for rudimentary thinking in molluscs and insects. Intuition is related to unconscious homeostatic processes in the nervous system that abruptly emerge into consciousness. Voluntary action depends on neuro-muscular structures found in animals, supporting agency.
On this conceptual basis, it is possible to clarify current usages of the terms “affect”, “feeling” and “emotion” in contemporary theories of human consciousness. The affective drive is a set of genetically based psycho-physiological functions found in animals. “Feeling” refers to all types of conscious expression of affect, from sensations to social emotions. The non-conscious physiological and behavioural processing and expression of affect constitutes unconscious emotion.
Brief Bio: Alfredo Pereira Jr is Professor of Philosophy of Science at São Paulo State University (UNESP; 1988-present). Previously, he was a Post-Doctoral Fellow at MIT (1996-98) and Visiting Researcher at the Universities of Zurich (2012), Copenhagen (2012) and Pavia (2015). He has published 200 papers and chapters, organized three books on consciousness (Springer, Routledge and Cambridge U.P.), and several discussion groups (including Nature Networks groups, 2007-2010).
Characterising brain network dynamics in rest and task
Wednesday 23 October 2019, 4pm, RHB 137a
Diego Vidaurre - Oxford Centre for Human Brain Activity (OHBA), University of Oxford
Abstract: The brain needs to activate multiple networks in a temporally coordinated manner in order to perform cognitive tasks, and it needs to do so at different temporal scales, from the slowest circadian cycles to fast subsecond rhythms. I propose a probabilistic framework for investigating brain functional reorganisation, capable of reliably accessing the dynamics contained in the signal even at the fastest time scales.
Using this approach, we investigate several aspects of the intrinsic dynamics of the human brain. First, we find that the brain spontaneously transitions between different networks in a predictable manner, following a hierarchical organization that is remarkably simple, heritable, and significantly related to behaviour. Second, we investigate the spectral properties of the default mode network using MEG, revealing it to be composed of two components, anterior and posterior, with very distinct spatial, temporal and spectral properties - both strongly implicating the posterior cingulate cortex, yet in very different frequency regimes.
Finally, I show an extension of this model for task data where we incorporate the stimulus information into the model, in such a way that we can reliably find between-trial temporal differences in stimulus processing - which we argue are crucially related to learning and plasticity, and can avoid the interpretation caveats of traditional decoding techniques.
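The abstract leaves the model unspecified; one standard probabilistic framework for state-based network dynamics is a hidden Markov model with Gaussian observation distributions. A minimal sketch on toy data (an illustration of the general approach, not the speaker's actual pipeline), assuming the hmmlearn package:

```python
# Sketch: fit an HMM to a multichannel "recording" and summarise its dynamics
# via the transition matrix and fractional occupancy. All data are synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Toy recording: 3 channels alternating between two distinct regimes.
state_a = rng.multivariate_normal([0, 0, 0], np.eye(3), size=500)
state_b = rng.multivariate_normal([1, -1, 0], 0.5 * np.eye(3), size=500)
X = np.vstack([state_a, state_b, state_a])  # (1500 samples, 3 channels)

hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
hmm.fit(X)
states = hmm.predict(X)                  # most likely state at each sample
fo = np.bincount(states) / states.size   # fractional occupancy per state
print("Transition matrix:\n", hmm.transmat_.round(2))
print("Fractional occupancy:", fo.round(2))
```

State lifetimes, switching rates and state-specific spectra can then be derived from the decoded state sequence.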
Bio: Diego obtained his PhD in Statistics from the Universidad Politécnica de Madrid and has worked at the University of Oxford as a postdoc in Computational Neuroscience since 2013. In 2018 he was appointed Assistant Professor at Osaka University in Japan, and he has just been appointed Associate Professor at Aarhus University in Denmark.
Music, speech and conversational grooves
Wednesday 30 October 2019, 4pm, RHB 137a
Ian Cross - Centre for Music and Science, University of Cambridge
Abstract: I will suggest that time in music and in speech is underpinned by the same perceptual, cognitive and neural processes, and that regular temporal structure —periodicity— serves largely the same functions in speech and in music. I will start by exploring evidence for temporal regularity in speech and suggest that this regularity serves to enhance communicative predictability and mutual affiliativeness between interlocutors. Results from studies that explore conversational and musical interaction will be discussed, and new results concerning the effects of musical interaction on subsequent conversational interaction will be presented. The talk concludes by noting the need to develop integrated approaches to the study of music and speech as cognate components of the human communicative toolkit.
Brief Bio: Ian Cross is Professor and Director of the Centre for Music and Science at the University of Cambridge. His early work helped set the agenda for the study of music cognition; he has published widely in the field of music and science, from the psychoacoustics of violins to the evolutionary roots of musicality. His current research explores whether music and speech are underpinned by common interactive mechanisms. He is Editor-in-Chief of SAGE's new Open Access journal Music & Science, is a Fellow of Wolfson College, Cambridge and is also a classical guitarist.
The perceptual prediction paradox: Seeing what we expect (and what we don’t)
Wednesday 13 November 2019, 4pm, RHB 137a
Daniel Yon – Goldsmiths University of London
Abstract: From the noisy information bombarding our senses our brains must construct percepts that are veridical – reflecting the true state of the world – and informative – conveying the most important information for adjusting our beliefs and behaviour.
Theories in cognitive science suggest both of these challenges are met by mechanisms that use our beliefs and prior knowledge to shape what we perceive. However, current models are mutually incompatible.
In this talk, I will contend that ideas from research on learning and inference may resolve this paradox – explaining why our experience of the world around us is sometimes dominated by our existing beliefs, and sometimes focuses on information that surprises us the most.
Brief Bio: Daniel Yon is a new lecturer in the Department of Psychology at Goldsmiths. He studied psychology as an undergraduate at the University of Oxford (2010-2013) before completing an MSc and PhD at Birkbeck, University of London (2013-2017). Daniel was a postdoc at Birkbeck for two more years before joining Goldsmiths in September 2019. His research uses a mixture of behavioural, neuroimaging and computational methods to investigate how we perceive and interact with the world around us, with a particular focus on how our expectations shape our perceptions and decisions.
The Mind of the Bee
Date: 4 pm Wednesday 16 January 2019
Venue: 137a, Richard Hoggart Building, Goldsmiths University of London
Speaker: Lars Chittka http://www.sbcs.qmul.ac.uk/staff/larschittka.html
Bees have a diverse instinctual repertoire that exceeds in complexity that of most vertebrates. This repertoire allows the social organisation of such feats as the construction of precisely hexagonal honeycombs, an exact climate control system inside their home, the provision of the hive with commodities that must be harvested over a large territory (nectar, pollen, resin, and water), as well as a symbolic communication system that allows them to inform hive members about the location of these commodities. However, the richness of bees’ instincts has traditionally been contrasted with the notion that bees’ small brains allow little behavioural flexibility and learning behaviour. This view has been entirely overturned in recent years, when it was discovered that bees display abilities such as counting, attention, simple tool use, learning by observation and metacognition (knowing their own knowledge). Thus, some scholars now discuss the possibility of consciousness-like phenomena in the bees. These observations raise the obvious question of how such capacities may be implemented at a neuronal level in the miniature brains of insects. We need to understand the neural circuits, not just the size of brain regions, which underlie these feats. Neural network analyses show that cognitive features found in insects, such as numerosity, attention and categorisation-like processes, may require only very limited neuron numbers. Using computational models of the bees' visual system, we explore whether seemingly advanced cognitive capacities might 'pop out' of the properties of relatively basic neural processes in the insect brain’s visual processing area, and their connection with the mushroom bodies, higher order learning centres in the brains of insects.
Lars Chittka is distinguished for his work on the evolutionary ecology of sensory systems and cognition, using insect-flower interactions as a model. He developed perceptual models of bee colour vision, allowing the derivation of optimal receiver systems as well as a quantification of the evolutionary pressures shaping flower signals. He also made fundamental contributions to the understanding of animal cognition and its fitness benefits in the economy of nature. He explored phenomena such as numerosity, speed-accuracy tradeoffs, false memories and social learning in bees. His discoveries have made a substantial impact on the understanding of animal intelligence and its neural-computational underpinnings.
Lars is a recipient of the Royal Society Wolfson Research Merit Award and an Advanced Fellowship from the European Research Council (ERC). He is also an elected Fellow of the Linnean Society (FLS), the Royal Entomological Society (FRES) as well as the Royal Society of Biology (FSB). He is also the founder of the Psychology Department at Queen Mary University of London, where he is a Professor of Sensory and Behavioural Ecology.
The Psychology of Magic, or Why the Mind is Tricked
Date: 4 pm Wednesday 13 February 2019
Venue: Council Chamber, Deptford Town Hall, Goldsmiths University of London
Speaker: Nicky Clayton & Clive Wilkins https://www.psychol.cam.ac.uk/people/nicola-clayton https://www.psychol.cam.ac.uk/people/clive-wilkins
What do cognitive illusions demonstrate about the psychology of the human mind? We argue that magic effects reveal critical cognitive constraints on our ability to engage in mental time travel and theory of mind, as well as on perception and attention.
Nicky Clayton and Clive Wilkins have a science-arts collaboration The Captured Thought.
Clive Wilkins is a writer and an artist. He is also a Member of the Magic Circle and Artist in Residence in the Department of Psychology at the University of Cambridge.
Nicky Clayton is Professor of Comparative Cognition in the Department of Psychology at the University of Cambridge and a Fellow of the Royal Society. She is also Scientist in Residence at Rambert Dance Company.
Can the contents of Consciousness be studied quantitatively?
Date: 4 pm Wednesday 27 February 2019
Venue: 137a, Richard Hoggart Building, Goldsmiths University of London
Speaker: Tristan Bekinschtein https://www.neuroscience.cam.ac.uk/directory/profile.php?trisbke
We talk about phenomenology and experiences, and then we measure reaction times and errors. Can we study the contents of our mind? I would argue that in psychology we are always studying content, but without caring about, or being willing to engage with, the question. Two main methods to capture what we think - direct and indirect - may allow us to formalize questions about content, and two methods in cognitive neuroscience allow us to map the underpinnings of these contents: neural decoding and intensity tracking. I will illustrate them with EEG and fMRI experiments during pharmacologically induced states, sleep transitions and meditative techniques.
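To make "neural decoding" concrete: the idea is to train a classifier to recover a content label from patterns of brain activity, time point by time point. A hedged sketch on synthetic EEG-like data (an illustration of the general technique, not the speaker's analysis), assuming scikit-learn:

```python
# Sketch: time-resolved decoding of a binary "content" label from
# multichannel activity. All data are synthetic; chance accuracy is 0.5.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 200, 32, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)   # two hypothetical mental contents
X[y == 1, :5, 20:30] += 0.6        # inject a weak signal in a late window

# Decode separately at each time point with cross-validation.
scores = [
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
]
peak = int(np.argmax(scores))
print(f"Peak decoding accuracy {max(scores):.2f} at time point {peak}")
```

Above-chance accuracy in a time window is then taken as evidence that the measured activity carries information about the decoded content.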
Tristan is a biologist, with a Master's in Neurophysiology and a PhD in Neuroscience from Buenos Aires University. He has been an EU Marie Curie Fellow and senior researcher at the MRC Cognition and Brain Sciences Unit, Cambridge, and a Fyssen Fellow at the ICM, Paris. In 2011 he founded the Consciousness and Cognition Lab, now at the Department of Psychology, University of Cambridge. He is a Wellcome Trust Fellow and a Turing Institute Fellow.
Painting Machine and Creative Practice
Date: 4 pm Wednesday 6 March 2019
Venue: RHB 137a, Ground floor, Richard Hoggart Building
Speaker: Liat Grayver & Marvin Gülzow
Artist Liat Grayver & Computer Scientist Marvin Gülzow will discuss the current state of robotic painting, its possibilities and limitations as well as future plans.
The painting robot e-David (Electronic Drawing Apparatus for Vivid Image Display, in development since 2008) is the focal point of the collaboration between the artist Liat Grayver and the research team headed by Prof. Oliver Deussen at the University of Konstanz. Since 2016 they have investigated the creation of an autonomous painting machine as a tool for reflecting on human and computational models in the act of creating a painting. Using visual feedback, the machine is capable of creating unique works through the application of series of paint strokes that are non-deterministic in their colour blend and the materiality of their layering. This unique platform has allowed Grayver to reconceptualize the very foundations of painting practice, from the bodily movement of the single brushstroke to questions concerning control, and the loss of it, in the creative process. The project is a meeting point between the fields of robotics, software and painting, aspiring to break new and innovative ground in both painting and human-machine interaction. Works created during this collaboration have been exhibited in a wide range of international art institutions and museums.
Liat Grayver graduated with a Meisterschueler degree (post-graduate) from the Meisterklasse of Prof. Joachim Blank (Media Art, 2018), following her post-graduate and MFA studies in the class of Prof. Heribert C. Ottersbach (painting and printmaking, 2015) at the Art Academy of Leipzig (HGB). Since January 2016, Grayver has been collaborating with the University of Konstanz on the e-David project, exploring various approaches to integrating robotic and computer languages into the processes of painting and creative image-making. Her works have been exhibited in galleries, art fairs, and museums throughout Europe, Israel, and South Korea. Since 2014 she has been based in Berlin, working in Berlin, Leipzig, Konstanz and Tel Aviv.
https://www.liatgrayver.com/
Marvin Gülzow completed his Master of Computer Science at the University of Konstanz in 2018 and is now a PhD candidate there. For his thesis, he continues development on both software and hardware for e-David. His primary research interests are developing methods which allow the painting machine to autonomously experiment with its tools and extending the automatic painting process to include more advanced brush handling techniques.
Conscious agency and the preconscious/unconscious self
Date: 4 pm Wednesday 20 March 2019
Venue: Council Chambers, Deptford Town Hall, Goldsmiths University of London
Speaker: Professor Max Velmans
We habitually think of our Self as a conscious agent operating largely in terms of how we consciously experience those operations. However, psychological and neuroscientific findings suggest that mental operations that seem to be initiated by the conscious Self are largely preconscious or unconscious. In this talk I examine how these aspects of the Self and its operations combine in the exercise of free will—and suggest that the conscious wishes, choices and decisions that we normally associate with “conscious free will” result from preconscious processes that provide a form of “preconscious free will”. The conscious experiences associated with other so-called “conscious processing” in complex tasks such as speech perception and production, reading and thinking, also result from preconscious processing, which requires a more nuanced analysis of how conscious experiences relate to the processes with which they are most closely associated. We need to distinguish processes that are conscious a) in the sense that we are conscious of them, b) in the sense that they result in a conscious experience, and c) in the sense that consciousness plays a causal role in those processes. We also examine how consciousness enables real-ization: it is only when one experiences something for oneself that it becomes subjectively real. Together, these findings suggest that the Self has a deeper architecture. Although the real-ized aspects of the Self are the consciously experienced aspects, these are just the visible “tip” of a far more complex, embedding preconscious/unconscious ground.
Max Velmans is Emeritus Professor of Psychology, Goldsmiths, University of London and Fellow of the Academy of Social Sciences. His main research focus is on integrating work on the philosophy, cognitive psychology and neuropsychology of consciousness, and, more recently, on East-West integrative approaches. He has over 100 publications on these topics, including his books Understanding Consciousness (2000, 2009), The Science of Consciousness (1996), Investigating Phenomenal Consciousness (2000), the co-edited Blackwell Companion to Consciousness (2007, 2017), Towards a Deeper Understanding of Consciousness (2017) and the four-volume collection Consciousness (Critical Concepts in Psychology) (2018). He was a co-founder and, from 2004 to 2006, Chair of the Consciousness and Experiential Psychology Section of the British Psychological Society, and an Indian Council of Philosophical Research National Visiting Professor for 2010-2011.
Autonomic control, interoception, and experience
Date: 4 pm Wednesday 27 March 2019
Venue: Council Chamber, Deptford Town Hall, Goldsmiths University of London
Speaker: Hugo Critchley https://www.bsms.ac.uk/about/contact-us/staff/professor-hugo-d-critchley.aspx
The internal state of our bodies influences how we experience ourselves and the external environment. With a focus on the phasic signals that accompany individual heartbeats, I will discuss evidence implicating the predictive (autonomic) control and interoceptive representation of physiological state as the correlate of mental effort, the basis for affective feelings, and the substrate of self-representation.
Such embodiment of mental processes underpins the experience of perceiving and acting on the world. Knowledge about the brain mechanisms supporting interoception informs our understanding of normative conscious processes and of how psychiatric symptoms arise through their disorder.
Hugo Critchley is Professor of Psychiatry at Brighton and Sussex Medical School and co-Director with Anil Seth of the Sackler Centre of Consciousness Science at the University of Sussex. Hugo’s clinical interest in neuropsychiatry and training in brain imaging and autonomic medicine has allowed him to pursue an interdisciplinary research programme that combines cognitive psychology and neuroimaging with detailed physiological measurements and studies of patients.
Literature and art as a cognitive object: towards a novel mentalistic theory of literature and art
Date: 4 pm Wednesday 29 May 2019
Venue: Cinema, Richard Hoggart Building, Goldsmiths University of London
Speaker: Dr Patricia Kolaiti https://www.cambridge.org/gr/academic/subjects/languages-linguistics/semantics-and-pragmatics/limits-expression-language-literature-mind?format=HB
This talk will outline the key theoretical hypothesis of ‘Literature and Art as a Cognitive Object’ (CogLit), a two-year research project based in the School of Humanities, University of Brighton and funded by a Marie Curie Individual Fellowship, European Commission. Shifting the focus from the properties of the artwork/literary text itself to literature/art as a case of human agency, ‘CogLit’ will set out a radically new view of the interplay between literature, art and mind and make one of the first systematic and empirically tractable proposals in the 21st century on the essence of literature and art.
In this talk, Dr Patricia Kolaiti will try to give the audience a taste of some of the fascinating questions about the nature of literature and art that ‘CogLit’ will be focusing on, and hint at the challenges that adopting a novel cognitive perspective raises for traditional perceptions of literature and art. The talk will move beyond the existing binary opposition of artifact-oriented and receiver-oriented approaches to literature and art, put the artist/producer at the centre of attention, and gesture towards a new producer-oriented theoretical model of literature and art as a cognitive concept standing in a causal relation to a specialized type of creative mental states and processes. These specialized mental states and processes are metaphysically and psychologically real entities. Investigating them may help provide answers to persistent ontological questions in literary and art theory, and delineate new and exciting potential for research in the linguistic and cognitive study of literature and art, by enabling us to shift the focus from the artifactual properties of artworks as objects out in the world to their complex retroactive relationship with the micro-mechanisms of the mind that creates them.
Dr Patricia Kolaiti is a Marie Skłodowska-Curie Individual Fellow in the School of Humanities, University of Brighton, working on the 2-year interdisciplinary project ‘Literature and Art as a Cognitive Object’ (‘CogLit’) funded by the European Commission. ‘CogLit’ aims to develop a novel theoretical account of literature and art as a cognitive object and build two-way interactions between literary and art study, linguistics and the cognitive sciences. Patricia holds a PhD from UCL and was Associate Researcher with the Balzan project on ‘Literature as an Object of Knowledge’ (based at St John’s College Research Centre, Oxford and led by Prof. Terence Cave). Her first monograph The Limits of Expression: Language, Literature, Mind has recently been published by Cambridge University Press. She is a member of the Cognitive Futures in the Humanities Network, and a co-founder of the Beyond Meaning Network and the Poetry as an Action Research Group. Patricia is also a published poet and performer with a strong presence in the contemporary Greek literary scene: her collection Celesteia was nominated for the 2008 First Book Diavazo Award in Greece.
For an overview of the CogLit project’s main theoretical hypotheses, please have a look at the CogLit webpage: https://blogs.brighton.ac.uk/coglit/
A link to Dr Kolaiti's monograph “The Limits of Expression: Language, Literature, Mind”, recently published by CUP: https://www.cambridge.org/gr/academic/subjects/languages-linguistics/semantics-and-pragmatics/limits-expression-language-literature-mind?format=HB
Morphogenetic Creations: A tale of complexity, emergence, and human-computer collaboration
Date: 4 pm Wednesday 10 October 2018
Venue: 137a Ground Floor, Richard Hoggart Building, Goldsmiths University of London
Speaker: Andy Lomas
Morphogenetic Creations is an ongoing series of artworks that explore how intricate complex form, as often seen in nature, can be created emergently through computational simulation of growth processes. Inspired by the work of Alan Turing, D'Arcy Thompson and Ernst Haeckel, it exists at the boundary between art and science.
This talk looks at both the development of these artworks and the artist's changing relationship with the computer: developing from simply being a medium to create computational art to one where it becomes an active collaborator in the process of exploring the possibilities of generative systems.
Drawing analogies with Kasparov’s Advanced Chess and the deliberate development of unstable aircraft using fly-by-wire technology, the talk argues for a collaborative relationship with the computer that can free the artist to more fearlessly engage with the challenges of working with emergent systems that exhibit complex unpredictable behaviour.
Andy Lomas is a digital artist, mathematician, Emmy award-winning supervisor of computer-generated effects, and lecturer in Creative Computing at Goldsmiths.
He has exhibited internationally, including at the Pompidou Centre, the V&A, the Royal Society, the Science Museum, SIGGRAPH, the Japan Media Arts Festival, the Ars Electronica Festival, Kinetica, the Los Angeles Municipal Art Gallery, the Los Angeles Center for Digital Art, the Centro Andaluz de Arte Contemporaneo, Watermans and the ZKM. His work is in the collections of the V&A, the Computer Arts Society and the D'Arcy Thompson Art Fund Collection. In 2014 his work Cellular Forms won The Lumen Prize Gold Award.
His production credits include Walking With Dinosaurs, Matrix: Revolutions, Matrix: Reloaded, Over the Hedge, The Tale of Despereaux, and Avatar. He received Emmys for his work on The Odyssey (1997) and Alice in Wonderland (1999).
Recent related article (open access):
"On Hybrid Creativity", in Arts 2018, 7(3), 25
Dhanraj Vishwanath: Deconstructing Realness
Date: 4 pm Wednesday 24 October 2018
Venue: 137a, Richard Hoggart Building, Goldsmiths University of London
Speaker: Dhanraj Vishwanath
The discovery of perspective projection during the Italian Renaissance led to the ability to create realistic 2-dimensional images (paintings) of 3-dimensional scenes. However, artists like da Vinci bemoaned the fact that the contents of a perspective painting, however well executed, lacked the sense of spatial realness: the impression of visual solidity, tangibility and immersiveness characteristic of real objects and scenes. Since Wheatstone’s invention of the stereoscope in 1838, it has been widely believed that the underlying cause of this phenomenology of realness (a.k.a. stereopsis) is the binocular disparities that objects in real scenes generate at the retinae. In this presentation, I will argue for an alternative view that the phenomenology of realness is not primarily linked to binocular disparity, but to the brain’s derivation of the egocentric scale of the visual scene. I will present a range of theoretical arguments as well as psychophysical and neurophysiological data in support of this alternative view. This view not only provides an explanation for why binocular disparities yield the most vivid impression of stereopsis but also why the impression of stereopsis can be attained in the absence of binocular disparity. Importantly, it provides an integrative understanding of the perceptual differences in viewing real objects, stereoscopic images and pictorial images. I will discuss the implication of this work for 3D film and VR technology.
Dhanraj Vishwanath is a lecturer in perception at the University of St Andrews. He originally trained in Architectural Design at the University at Buffalo (SUNY). He obtained his PhD in Cognitive Psychology at Rutgers University, NJ, and was an NIH postdoctoral fellow at UC Berkeley. His main empirical research interests are in 3D space perception and visuomotor control (eye movements). He has a special interest in foundational and philosophical aspects of perception and perceptual phenomenology, as well as the links among perception, art and design. He has published and lectured widely on all these subjects.
Profile at St-Andrews: https://tinyurl.com/y8ltkb7r
Sound design: so simple, yet so complicated
Date: 4 pm Wednesday 31 October 2018
Venue: 137a Ground Floor, Richard Hoggart Building, Goldsmiths University of London
Speaker: Sandra Pauletto
This talk will reflect on a series of sound design and sonification projects and will discuss the fascinating challenges presented by this area of research. It will consider what connects seemingly disparate works: research projects evaluating the effectiveness of sonification displays and sound design in health applications (Data Mining through an Interactive Sonic Approach, SCORe), the use of sound in art installations (Listening and Silence, Virtual Symbiosis), and approaches to sound design in film and theatre.
Sandra Pauletto read music at the Conservatorio di Musica Tartini of Trieste (Italy), physics at the University of Manchester, and music technology (MSc and PhD) at the University of York. She was the UK representative in the EU-COST Action on Sonic Interaction Design (2007-11) and Deputy Director of the Centre for Chronic Diseases and Disorders (C2D2) at the University of York (funded by the Wellcome Trust), and has worked as PI, Co-I and RA on a number of research projects funded by the EPSRC, the British Academy and C2D2. She is Senior Lecturer in Sound Design at the University of York.
Profile at York University: https://www.york.ac.uk/tftv/staff/academic/pauletto/
Why do people watch other people playing video games?
Date: 4 pm Wednesday 28 November 2018
Venue: 137a, Richard Hoggart Building, Goldsmiths University of London
Speaker: Dr Mark R Johnson
Dr Mark R Johnson (University of Alberta) on the rise of the broadcasting and spectating of digital play.
Ever since the earliest days of video games, many people have chosen to watch others playing these interactive technologies instead of (or as well as) playing them themselves.
Although this began with just looking over the shoulder of one's friend in the arcade, nowadays over two million individuals from around the world regularly broadcast themselves playing video games over the internet, to viewing audiences of over 100 million. Several thousand individuals are able to make a full-time income, potentially six figures, by monetising their broadcasts. Equally, the rise in the last decade of "E-sports" - professionalised competitive video gameplay - has also highlighted this desire to watch others play.
Drawing on interview and ethnographic research, this talk will explore the interwoven phenomena of live streaming and E-sports, and focus on three elements:
- Who is broadcasting/playing, and who is watching?
- What are the lives of these highly-visible video game players like?
- What do the futures of these two domains look like in the next five-to-ten years?
This talk will seek to explore these major changes in the sociotechnical entanglements of the video game industry and video game consumption - significantly larger than the film and music industries combined - and to begin to think about why precisely people would sometimes rather watch others playing video games, instead of simply playing them themselves.
Mark R Johnson is a Killam Memorial Postdoctoral Fellow at the University of Alberta in Canada. His work focuses on the intersections between play and money, such as professionalised video game competition (E-sports), the live broadcast and spectating of video games on personalised online "channels", and the blurring of video games and gambling in numerous contexts.
His first book, 'The Unpredictability of Gameplay', is due to be published by Bloomsbury Academic in 2018 and presents a Deleuzean analysis of randomness, chance and luck in games, the effects different kinds of unpredictability have on players, and the communities that arise around them. He is currently writing two new monographs, one about the phenomenon of live streaming on Twitch.tv and the work, labour, lives and careers of those who make their living on the platform, and another about the growth of "fantasy sports betting" as a form of gambling disguised under the aesthetic, thematic and mechanical forms of sports management video games.
Outside academia, he is also an independent game developer, a regular games writer, blogger and podcaster, and a former professional poker player.
http://www.ultimaratioregum.co.uk/about-me
Why it’s great to be a baby
Date: 4 pm Wednesday 5 December 2018
Venue: 137a, Richard Hoggart Building, Goldsmiths University of London
Speaker: Caspar Addyman https://www.gold.ac.uk/psychology/staff/addyman-caspar/
On the whole, babies enjoy being babies. Everything is fun, new and exciting, and everyone is your friend. Your mummy and daddy provide food, shelter and unconditional love, and every day is an adventure. But why are we born so helpless, and is there any purpose to our prolonged period of infancy?
Based on Caspar’s forthcoming book The Laughing Baby, this talk will explain why human infancy is unique and how the relationship between mother and infant is the foundation of our intelligence, our empathy, our language and our art.
Caspar is a developmental psychologist and director of the Goldsmiths InfantLab. He is interested in how babies adapt to the world and how we support their learning. He has investigated early concept learning, the foundations of language, time perception and sleep. His most recent research has looked at the importance of laughter and positive emotion in early life. Caspar’s book The Laughing Baby will be published by Unbound in 2019.
The Nature of Perception
Date: 4 pm Wednesday 12 December 2018
Venue: 137a, Richard Hoggart Building, Goldsmiths University of London
Speaker: Brian Rogers https://www.psy.ox.ac.uk/team/brian-rogers
In order to understand and study perception we need to consider what it is that we are trying to explain. According to the Gestalt psychologist Kurt Koffka, the answer is “Why do things look as they do?”, that is, how can we account for our perceptual experiences of colours, shapes, three-dimensional forms and their motions. But does appearance matter and what is its causative status? Others, notably the American psychologist James Gibson, have argued that the underlying purpose of perception is to guide action - “Perceiving is an achievement of the individual, not an appearance in the theatre of his consciousness”. From an evolutionary perspective, it has to be true that the ability to have experiences of the world would never have evolved without an action system that allowed the animal to use perceptual information to survive. Perception and action should not be seen as separate or independent processes but rather as parts of a “perceptual system”. In addition, we need to consider the particular environments of different animals and the way in which their sensory systems have adapted to the particular characteristics of those environments. While it is true that the human perceptual system is more flexible, adaptable, and involved in a much wider variety of behaviours than the perceptual systems of other species, what we share with other species is the evolutionary legacy of exploiting the meaningful characteristics of the particular environment.
Brian Rogers has taught psychology and carried out research in the field of perception for over forty years, initially in Bristol (both as an undergraduate and graduate student) and St Andrews before coming to Oxford in 1984. His main research interests have been in 3-D vision, motion perception, perceptual theory and the visual control of action. He co-authored several books with Ian Howard including “Binocular Vision and Stereopsis” (1995), “Seeing in Depth” (2002) and “Perceiving in Depth” (2012) and has recently published a Very Short Introduction (VSI) on “Perception” for Oxford University Press. Currently, he is an Emeritus Professor of Experimental Psychology at Oxford, an Emeritus Fellow at Pembroke College and Professor of Psychology at St. Petersburg University.
All seminars are held at 4pm in the Ben Pimlott Lecture Theatre, unless otherwise stated. Check our campus map for directions. For enquiries related to the lectures, please contact Karina Linnell or Frederic Leymarie.
Spring 2018
The Seductive Myth of Time Travel
Date: 4pm Wednesday 17 January 2018
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths University of London
Speaker: Professor Ray Tallis
The myth of time travel seemed to acquire scientific respectability from relativity theory, which made time a space-like fourth dimension. This is an illusion, and the speaker will examine why this is the case, noting that the time traveller’s journey, her ability to arrive at a chosen destination, and anything she might hope to achieve at her destination are metaphysically impossible.
Raymond Tallis is a philosopher, poet, novelist and cultural critic, and a retired physician and clinical neuroscientist. He ran a large clinical service at Hope Hospital, Salford, and an academic department at the University of Manchester. His research focussed on epilepsy, stroke, and neurological rehabilitation. Professor Tallis has published fiction, poetry, and 25 books on the philosophy of mind, philosophical anthropology, and literary and cultural criticism. Aping Mankind (2010) was reissued in 2016 as a Routledge Classic. His latest book, Of Time and Lamentation: Reflections on Transience (2017), is an inquiry into the nature of time. In 2013, he published NHS SOS, a volume co-edited with Jacky Davis, which examined the damaging impact of Tory policies on the NHS.
A Dialogue between Mathematics and Art: Interface, Concepts, Applications
Date: 4pm Wednesday 24 January 2018
Venue: RHB 101 (Curzon Cinema), Richard Hoggart Building, Goldsmiths
Speaker: Dr. Maria Mannone
Branches of mathematics such as category theory are known for their power of abstraction and generalization. They can also be successfully applied in several other fields, such as the arts. We give examples of the use of mathematics to define a conceptual framework for studying musical composition and performance, as well as to contextualize comparisons and translations from music to visual arts and vice versa. We contextualize, in fact, sonification and visualization in a categorical framework. Such a developing research field, which can be called "MathemART" or "cARTegory theory", leads to applications in mathematics pedagogy and music pedagogy, and it may also constitute a source of inputs for technology development.
The frontier of these studies concerns applications to machine learning: automatically classifying elementary musical-performance gestures within mathematically defined equivalence classes. Mathematical thinking helps the development of new interfaces as a continuous dialogue between music and technology; thus mathematics itself can be considered a sort of creative interface, connecting thinking with sound and acoustic parameters.
Bio:
Maria Mannone earned Masters degrees in Theoretical Physics, Orchestral Conducting, Composition, and Piano in Italy, and the Master 2 ATIAM at IRCAM-UPMC Paris VI Sorbonne. She earned her Ph.D. in Composition at the University of Minnesota, where she developed interdisciplinary research, in part in collaboration with the Fine Theoretical Physics Institute. Author and co-author of books and articles, she has given invited lectures in the USA, Asia, and Europe, including at the Joint Mathematics Meetings (JMM) 2017 in Atlanta (Georgia).
Next, she will give a talk at the JMM 2018 in San Diego (California). Her music has been performed by the Orchestra Sinfonica Siciliana, at the Festival delle Orestiadi di Gibellina, and at the Arts Quarter Festival at the University of Minnesota.
Greening the Grey Matters - The Vital Psychological, Environmental and Economic Benefits of Green Spaces within the Urban Realm
Date: 4pm Wednesday 31 January 2018
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths University of London
Speaker: The Edible Bus Stop
The crucial importance of green spaces in the city - small or large, passive or interactive - for our physical and psychological health is undervalued when it comes to garnering significant investment. The Edible Bus Stop team’s hands-on experience leads them to understand that green spaces within the urban realm have vitally important roles to play on a fundamental level. The interwoven factors of the psychological, environmental and economic benefits of urban green spaces are examined in this lecture.
The Edible Bus Stop team will present studies supporting the view that green spaces in our cities are essential for our physical health and mental well-being and that this, in turn, creates knock-on economic advantages. The environmental benefits of greenery in the city are well documented and - in the light of our dangerous air pollution levels - planting more is ever more important with every breath we take. However, the positive psychological and economic impact that green spaces bring are often overlooked and underestimated. There is a significant lack of real investment in these spaces and yet the transformation they provide is key to whether an area is pleasant to live, shop and work in.
BIO: The Edible Bus Stop Studio is a landscape architecture and design consultancy. Our projects are diverse and creative, our methodology provocative and playful. We transform spaces into design-led active places in both permanent and temporary settings, inspiring a wider audience to engage in social and environmental issues.
The lecture will be given by the studio’s two directors:
- Founding Director Mak Gilchrist, FRSA, helps bring the studio’s schemes to life, consulting on the design, co-producing the builds and leading on community engagements. She is published, and presents talks on the importance of active green spaces in the public realm. Her focus is on the encouragement of positive social interaction and the importance of biodiversity in cities and its psychological benefits.
- Creative Director Will Sandy leads on all the designs from the initial concept to the detailed engineering elements of each project and co-produces the project’s site delivery. Will is published, and presents on the subject of innovative designs within the public realm. His focus is on re-thinking the urban environment and how to encourage social interaction by utilising design to create narrative environments in our cities.
Active inference and artificial curiosity
Date: 4pm Wednesday 21 February 2018
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths University of London
Speaker: Prof Karl Friston, Institute of Neurology, UCL, UK
This talk offers a formal account of insight and learning in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how agents learn from a small number of ambiguous outcomes to form insight.
I will use simulations of abstract rule-learning and approximate Bayesian inference to show that minimising (expected) free energy leads to active sampling of novel contingencies. This epistemic, curiosity-directed behaviour closes ‘explanatory gaps’ in knowledge about the causal structure of the world, thereby reducing ignorance in addition to resolving uncertainty about states of the known world. We then move from inference to model selection, or structure learning, to show how abductive processes emerge when agents test plausible hypotheses about symmetries in their generative models of the world. The ensuing Bayesian model reduction evokes mechanisms associated with sleep and has all the hallmarks of ‘aha moments’.
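To unpack "minimising free energy" in the simplest discrete setting: for beliefs q(s) over hidden states, the variational free energy F = KL[q(s) || p(s)] - E_q[ln p(o|s)] is minimised by the exact Bayesian posterior, where it equals the negative log evidence (the agent's "surprise"). A worked toy example (a sketch with made-up numbers, not Friston's code):

```python
# Sketch: variational free energy for a two-state, two-outcome generative
# model. The likelihood matrix A and prior are illustrative assumptions.
import numpy as np

A = np.array([[0.9, 0.2],      # p(o|s): rows = outcomes, columns = states
              [0.1, 0.8]])
prior = np.array([0.5, 0.5])   # p(s)

def free_energy(q, o):
    """F(q) = KL(q || prior) - E_q[ln p(o|s)], in nats."""
    return np.sum(q * np.log(q / prior)) - np.sum(q * np.log(A[o]))

o = 1                                             # observe outcome 1
posterior = A[o] * prior / (A[o] * prior).sum()   # exact Bayes posterior

# F is minimised at the exact posterior, where it equals -ln p(o).
for q0 in (0.5, 0.3, posterior[0]):
    q = np.array([q0, 1 - q0])
    print(f"q(s=0) = {q0:.2f}  F = {free_energy(q, o):.3f}")
print(f"-ln p(o) = {-np.log((A[o] * prior).sum()):.3f}")
```

Expected free energy extends this one-step quantity to future policies, which is where the epistemic, novelty-seeking term described in the abstract arises.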
BIO: Karl Friston is a theoretical neuroscientist and authority on brain imaging. He invented statistical parametric mapping (SPM), voxel-based morphometry (VBM) and dynamic causal modelling (DCM). These contributions were motivated by schizophrenia research and theoretical studies of value-learning, formulated as the dysconnection hypothesis of schizophrenia. Mathematical contributions include variational Laplacian procedures and generalized filtering for hierarchical Bayesian model inversion. Friston currently works on models of functional integration in the human brain and the principles that underlie neuronal interactions.
His main contribution to theoretical neurobiology is a free-energy principle for action and perception (active inference). Friston received the first Young Investigators Award in Human Brain Mapping (1996) and was elected a Fellow of the Academy of Medical Sciences (1999). In 2000 he was President of the international Organization of Human Brain Mapping. In 2003 he was awarded the Minerva Golden Brain Award and was elected a Fellow of the Royal Society in 2006.
In 2008 he received the Collège de France Medal, and in 2011 an Honorary Doctorate from the University of York. He became a Fellow of the Royal Society of Biology in 2012, received the Weldon Memorial Prize and Medal in 2013 for contributions to mathematical biology, and was elected a member of EMBO (excellence in the life sciences) in 2014 and of the Academia Europaea in 2015. He was the 2016 recipient of the Charles Branch Award for unparalleled breakthroughs in Brain Research and of the Glass Brain Award, a lifetime achievement award in the field of human brain mapping. He holds Honorary Doctorates from the University of Zurich and Radboud University.
The potential of brain rhythms to gauge the resiliency and vulnerability of an individual to mental illness
Date: 4pm Wednesday 21 March 2018
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths University of London
Speaker: Dr Ali Mazaheri, School of Psychology, University of Birmingham
The ongoing EEG contains rhythmic activity produced by various frequency-specific networks in the brain. These rhythms have been shown in previous work to capture the functional architecture of the brain at rest as well as during cognition. The term ‘resiliency’ in psychological sciences refers to an individual's ability to successfully adapt to or recover from an adverse event. Conversely, the term ‘vulnerability’ refers to factors that make someone at risk of, or predisposed to, an illness. In this lecture, I will present findings from both typically healthy and clinical populations showing that specific characteristics of brain rhythms - both at rest and during specific tasks - can be used to gauge the resiliency of individuals to developing pain, and their vulnerability to dementia and to developing PTSD after a traumatic event. I will also discuss the possible future direction of this research with regards to both basic science and translational endeavours.
BIO: I did my undergraduate and MSc degrees at the University of Toronto. I completed my PhD at the Donders Centre for Cognitive Neuroimaging in Nijmegen, The Netherlands, under the supervision of Prof Ole Jensen. I then did a postdoc at UC Davis under the supervision of Prof Ron Mangun.
Following this, I was an assistant professor at the Department of Psychiatry, University of Amsterdam from 2011-2014. I then moved to Birmingham for a senior lecturer post in January 2015.
EVENT POSTPONED
Why do people watch other people playing video games?
The rise of the broadcasting and spectating of digital play
Speaker: Dr. Mark R Johnson, Department of Political Science, University of Alberta
Date: New date TBC.
Venue: Curzon Cinema (RHB 101), Richard Hoggart Building
Abstract: Ever since the earliest days of video games, many people have chosen to watch others playing these interactive technologies instead of (or as well as) playing them themselves. Although this began with just looking over the shoulder of one's friend in the arcade, nowadays over two million individuals from around the world regularly broadcast themselves playing video games over the internet, to viewing audiences of over one hundred million in total, with several thousand individuals able to make a full-time income, potentially six figures, by monetising their broadcasts.
Equally, the rise in the last decade of "Esports" - professionalised competitive video game play - has also highlighted this desire to watch others playing, with international competitions selling out arenas that hold ten thousand spectators and giving out tens of millions of dollars in prize money to the most skilled cyber-athletes that fans tune in to view.
Drawing on two years of (still ongoing) interview and ethnographic research in numerous countries, this talk will explore the interwoven phenomena of live streaming and Esports, and focus on three elements.
- Firstly: who is broadcasting/playing, and who is watching? What are the demographics, interests, backgrounds and motivations of those involved in both halves of these emerging ecosystems?
- Secondly: what are the lives of these highly-visible video game players like, specifically in terms of labour and the transformation of play into work? How do viewers watch, and how does this differ from (or resemble) television or cinematic media consumption?
- Thirdly, what do the futures of these two domains look like in the next five-to-ten years? What is the impact these phenomena are having on the games industry specifically, and media consumption more generally?
The talk will therefore seek to explore these major changes in the sociotechnical entanglements of the video game industry - now significantly larger than the film and music industries combined - and video game consumption, and to begin to think about why precisely people would sometimes rather watch others playing video games than simply play them themselves.
Short Bio: Mark R Johnson is a Killam Memorial Postdoctoral Fellow at the University of Alberta in Canada. His work focuses on the intersections between play and money, such as professionalised video game competition (Esports), the live broadcast and spectating of video games on personalised online "channels", and the blurring of video games and gambling in numerous contexts.
His first book, 'The Unpredictability of Gameplay', is due to be published by Bloomsbury Academic in 2018 and presents a Deleuzean analysis of randomness, chance and luck in games, the effects different kinds of unpredictability have on players, and the communities that arise around them. He is currently writing two new monographs, one about the phenomenon of live streaming on Twitch.tv and the work, labour, lives and careers of those who make their living on the platform, and another about the growth of "fantasy sports betting" as a form of gambling disguised under the aesthetic, thematic and mechanical forms of sports management video games.
Outside academia, he is also an independent game developer, a regular games writer, blogger and podcaster, and a former professional poker player.
Autumn 2017
Towards a more human machine perception of realism in mixed reality
Speaker: Alan Dolhasz (Birmingham City University)
When: 4pm - 5.30pm Wednesday 4 October
Where: Lecture Theatre, Ben Pimlott Building
Our ability to create synthetic, yet realistic representations of the real world, such as paintings or computer graphics, is remarkable. With the continual improvement in creative digital tools we are able to blur the line between the real and synthetic even further. Simultaneously, our ability to consciously detect minute imperfections within this imagery, which break down the illusion of realism, improves with experience. This problem of shifting realism thresholds remains paradoxical and largely underexplored.
As human expectations in this context grow, tools to assist with this problem are scarce, and computational models of perception are still far from human performance. While it is possible for computers to make binary decisions regarding the realism or plausibility of imperfections and image artifacts, the problem of making them utilise features and methods similar to those of humans is nontrivial.
Visual realism in the context of mixed reality and synthetic combinations of objects and scenes is a complex and deeply subjective problem. Human perception of realism is affected by a range of visual properties of the scene and objects within it, from attributes of individual textures, surfaces and objects, to illumination, semantics and style of visual coding, to name a few. On top of this, individual subjective traits and experience of observers further complicate this issue.
In this talk, Alan Dolhasz discusses his work attempting to understand, quantify and leverage human perception of combinations of objects and scenes in order to develop machine perception systems that could aid us in creating more realistic synthetic scenes, as well as detect and localise imperfections.
Alan Dolhasz is a researcher and part-time PhD student at the Digital Media Technology (DMT) Lab, Birmingham City University, with a background in film, sound and visual effects. His research interests include human perception, computer vision, machine learning, and mixed and augmented reality. Prior to his research position, he lectured on Sound for Visual Media and on Sound Synthesis and Sequencing, and ran a production company focusing on filmmaking and visual effects compositing, which largely contributed to the development of his research area. Alan also works closely with industry, developing application cases for the research coming out of the DMT Lab.
Media Art between audience and environment: Italian case study
Speakers: Isabella Indolfi for SEMINARIA (Biennial festival of Environmental Art) and Valentino Catricalà for Fondazione Mondo Digitale (Media Art Festival - Rome, IT)
When: 4-5pm Wednesday 11 October 2017
Where: Ben Pimlott Lecture Theatre
This talk is an attempt to analyze the latest trends in media art within the contemporary art field. Media Art is nowadays a stable field, with its own festivals, research centres, museums, and so on. In the last 60 years, this field has created different ways to reread spaces, buildings, and environments, interacting with audiences and actively involving them. In this way media art has modified the relationship between audience and space. The Media Art Festival in Rome is an example of this.
The talk is divided into two parts. The first focuses on the concept of media art and the differences between terms such as digital art, new media art, and so on, looking back at the history and archaeology of the field.
The second is an attempt to analyze the new relationship between technologies and environment: a trend in media art well represented by the Biennial festival Seminaria, where artists, temporarily residing on location, are invited to engage with the social and geographic variables of an entire village and community through spatial and relational practices. Life-sized installations - immersive, accessible and habitable, virtually or physically - allow viewers to become inhabitants and meaningful activators.
Media Art, on the edge of Land Art, Relational Art and many other cross-cultural developments, thus offers a new idea of public space: one that is sensitive and permeable to the audience and to the environment.
Isabella Indolfi received a Master’s degree in Sociology and New Media from University La Sapienza of Rome. She is an independent curator and consultant for contemporary art and develops and produces projects in collaboration with artists, institutions, festivals, galleries and museums.
With sociological theories in mind, she has approached contemporary art from the perspective of its public, social, and relational aspects. For this reason she founded the Biennial festival of Environmental Art SEMINARIA in 2011, which she still supervises, and she has collaborated with institutions and other festivals on the creation of numerous public art installations. Media and communication studies have led her research to focus on digital art and the latest art languages. She has recently collaborated with the Fondazione Romaeuropa for Digital Life 2014 and 2015, and she curated the trilogy of "Opera Celibe" exhibitions for Palazzo Collicola Arti Visive in Spoleto. Since 2016 she has been artistic consultant for the Cyland Media Art Lab in St. Petersburg, and she was a member of the jury of the Media Art Festival at MAXXI Roma 2017.
Valentino Catricalà (Ph.D.) is a scholar and art curator specialised in the analysis of artists' relationships with new technologies and media. He received a Ph.D. from the Department of Philosophy, Communication and Performing Arts at the University of Roma Tre, and has been a visiting Ph.D. researcher at the ZKM Center for Art and Media (Karlsruhe, Germany), the University of Dundee (Scotland) and Tate Modern (London), as well as a part-time postdoctoral research fellow at the University of Roma Tre.
He is currently the artistic director of the Rome Media Art Festival (MAXXI Museum) and Art Project coordinator at Fondazione Mondo Digitale. He also curates the “artists in residence” project for the Goethe-Institut and teaches at the Rome Fine Arts Academy. He has curated exhibitions in museums and private galleries, has written essays for international university journals (see academia.edu), and collaborates with leading contemporary art magazines such as Flash Art, Inside Art and Segno.
- http://www.seminariasogninterra.it
- http://www.mediaartfestival.org
- http://www.mondodigitale.org/it
- http://www.isabellaindolfi.it
Painting with real paints - e-David, a robot for creating artworks using visual feedback
Speaker: Prof. Oliver Deussen, Visual Computing, Konstanz University
When: 4pm Wednesday 18 October 2017
In Computer Graphics, the term Non-Photorealistic Rendering is used for methods that create "artistic"-looking renditions. In recent years, deep neural networks have revolutionized this area, and today anybody can create artistic-looking images on their cellphone. Our e-David project targets another goal: we want to understand the traditional painting process, imitate it using a machine, and employ techniques from computational creativity on top of this to create artworks that have their own texture and look.
The machine supervises itself during painting and computes new strokes based on the difference between the content on the canvas and the intended result. The framework enables artists to implement their own ideas in the form of constraints for the underlying optimization process. In the talk I will present e-David as well as recent projects, and outline our future plans.
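To make the feedback loop concrete, here is a minimal Python sketch of the paint-compare-repaint cycle described above. It is an illustration, not e-David's code: the greedy square "stroke", the opacity blending and all parameter values are invented, whereas the real system optimises physical brush strokes under artist-defined constraints.

```python
import numpy as np

# Minimal sketch of a visual-feedback painting loop in the spirit of the
# e-David description above: repeatedly compare the canvas with the target
# image and paint a stroke where the discrepancy is largest. The square
# "stroke" and all parameters are invented for illustration.

def paint_with_feedback(target, stroke_size=5, opacity=0.6, max_strokes=500, tol=0.05):
    canvas = np.zeros_like(target)               # start from a blank canvas
    for _ in range(max_strokes):
        error = target - canvas                  # feedback: what is still missing
        if np.abs(error).mean() < tol:
            break                                # close enough to the intended result
        # place the next stroke at the point of largest remaining difference
        y, x = np.unravel_index(np.abs(error).argmax(), error.shape)
        y0, y1 = max(0, y - stroke_size), y + stroke_size
        x0, x1 = max(0, x - stroke_size), x + stroke_size
        patch = target[y0:y1, x0:x1]
        canvas[y0:y1, x0:x1] += opacity * (patch - canvas[y0:y1, x0:x1])
    return canvas

target = np.random.rand(64, 64)                  # stand-in for the intended image
result = paint_with_feedback(target)
print(f"mean residual error: {np.abs(target - result).mean():.3f}")
```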
Bio: Prof. Deussen graduated from the Karlsruhe Institute of Technology and is now professor of visual computing at the University of Konstanz (Germany) and visiting professor at the Shenzhen Institute of Applied Technology (Chinese Academy of Sciences). In 2014 he received an award within the 1000 Talents Plan of the Chinese government. He is vice speaker of the SFB Transregio "Quantitative Methods for Visual Computing", a large research project conducted jointly with the University of Stuttgart. From 2012 to 2015 he served as Co-Editor-in-Chief of Computer Graphics Forum; he is currently Vice-President of the Eurographics Association.
He serves as an editor of Informatik Spektrum, the journal of the German Informatics Association and is the speaker of the interest group for computer graphics. His areas of interest are modeling and rendering of complex biological systems, non-photorealistic rendering as well as Information Visualization. He also contributed papers to geometry processing, sampling methods and image-based modelling.
The neuroscience of music performance: understanding exploration, decision-making and action monitoring to learn about virtuosity and creativity
Maria Herrojo Ruiz, Department of Psychology, Goldsmiths University of London
4pm Wednesday 25 October
Ben Pimlott Lecture Theatre, Goldsmiths
Expert music performance relies on the ability to remember, plan, execute, and monitor the performance in order to play expressively and accurately. My research focuses on examining the neural processes involved in mediating some of these cognitive functions in professional musicians, but also in non-musicians and in patients with movement disorders.
This talk will illustrate different aspects of our current work at Goldsmiths. First, I will present new data from our research on error-monitoring during music performance, which takes a novel perspective by examining the interaction between bodily (heart) and neural signals in this process.
In addition, I will present results from our studies in non-musicians investigating the mechanisms by which anxiety modulates learning of novel sensorimotor (piano) sequences. Using electrophysiology and a behavioural task with separate phases of learning – including an exploratory and a reward-based phase – our research could dissociate the influence of anxiety on these two components. I will finish my talk by highlighting what our data on exploration and performance monitoring can teach us about virtuosity and creativity.
BIO
Maria Herrojo Ruiz is a lecturer in the Psychology Department at Goldsmiths. She studied Theoretical Physics in Madrid, Spain, and later specialised as a postgraduate student in Physics of Complex Systems. She did her doctoral dissertation in Neuroscience as a Marie Curie Fellow in Hanover, Germany, focusing on the neural correlates of error-monitoring during music performance. As principal investigator in two successive research grants in Berlin, Germany, Maria has been conducting research on the role of the cortico-basal ganglia-thalamocortical circuits in mediating learning and monitoring of sensorimotor sequences, both in healthy human subjects and in patients with movement disorders. Her current research at Goldsmiths focuses on the neural correlates of exploration during piano performance and sensorimotor learning, their modulation by anxiety, and the brain-body interaction during music performance.
From dancing robots to Swan Lake: Probing the flexibility of social perception in the human brain
Emily S. Cross, Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Wales
4pm Wednesday 29 November
Ben Pimlott Lecture Theatre, Goldsmiths
As humans, we gather a wide range of information about other agents from watching them move. A network of brain regions has been implicated in understanding others' actions by means of an automatic matching process that links actions we see others perform with our own motor abilities.
Current views of this network assume a matching process biased towards familiar actions; specifically, those performed by conspecifics and present in the observer's motor repertoire. However, emerging work in the field of social neuroscience is raising some interesting challenges to this dominant theoretical perspective. Specifically, recent work has asked: if this system is built for and biased towards familiar human actions, what happens when we watch or interact with artificial agents, such as robots or avatars?
In addition, is it only the similarity between self and others that leads to engagement of brain regions that link action with perception, or do affective or aesthetic evaluations of another’s action also shape this process?
In this talk, I discuss several recent brain imaging and behavioural studies by my team that provide some first answers to these questions. Broadly speaking, our results challenge previous ideas about how we perceive social agents and suggest broader, more flexible processing of agents and actions we may encounter.
The implications of these findings are further considered in light of whether motor resonance with robotic agents may facilitate human-robot interaction in the future, and the extent to which motor resonance with performing artists shapes a spectator’s aesthetic experience of a dance or theatre piece.
BIO
Emily S. Cross is a professor of cognitive neuroscience at Bangor’s School of Psychology. She completed undergraduate studies in psychology and dance in California, followed by an MSc in cognitive psychology in New Zealand, and then a PhD in cognitive neuroscience at Dartmouth College in the USA. Following this, she completed postdoctoral fellowships at the University of Nottingham and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany.
The primary aim of her research is to explore experience-dependent plasticity in the human brain and behaviour using neuroimaging, neurostimulation and behavioural techniques. As her research team is particularly interested in complex action learning and perception, they often call upon action experts and training paradigms from highly skilled motor domains, such as dance, music, gymnastics, contortion, and acrobatics.
In addition, she has a longstanding interest in aesthetic perception, and has performed a number of studies exploring the impact of affective experience on how we perceive others. More recently, as part of an ERC starting grant, she and her team are examining how social experience or expectations about artificial agents shape how we perceive and interact with robots and avatars.
Her research has been supported by a number of funding bodies in the USA and EU, including the National Institutes of Health, Volkswagen Foundation, Economic and Social Research Council, Ministry of Defence and European Research Council.
Tactile perception in and outside our body
Speaker: Professor Vincent Hayward
When: 4pm - 5pm Wednesday 6 December
Where: Ben Pimlott Lecture Theatre
The mechanics of contact and friction is to touch what sound waves are to audition, and what light waves are to vision. The complex physics of contact and its consequences inside our sensitive tissues, however, differ in fundamental ways from the physics of acoustics and optics. The astonishing variety of phenomena resulting from the contact between fingers and objects is likely to have fashioned our somatosensory system at all levels of its organisation, from early mechanics to cognition. The talk will illustrate this idea through a variety of specific examples that show how surface physics shapes the messages that are sent to the brain, providing completely new opportunities for human-machine interfaces.
Speaker
Vincent Hayward is a professor (on leave) at the Université Pierre et Marie Curie (UPMC) in Paris. Before, he was with the Department of Electrical and Computer Engineering at McGill University, Montréal, Canada, where he became a full Professor in 2006 and was the Director of the McGill Centre for Intelligent Machines from 2001 to 2004.
Hayward is interested in haptic device design, human perception and robotics, and he is a Fellow of the IEEE. He was a European Research Council grantee from 2010 to 2016. Since January 2017, Hayward has been Professor of Tactile Perception and Technology at the School of Advanced Study of the University of London, supported by a Leverhulme Trust Fellowship.
- http://philosophy.sas.ac.uk/about/people
- http://www.actronika.com
- http://people.isir.upmc.fr/hayward
Summer 2017
The Neural Aesthetic
Speaker: Gene Kogan
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths
When: 4pm Wednesday 3 May 2017
Artist and programmer Gene Kogan discusses how artists and musicians are using deep learning for creative experimentation.
Over the last two years, deep learning has made inroads into domains of interest to artists, designers, musicians, and the like. Combined with the appearance of powerful open source frameworks and the proliferation of public educational resources, this once esoteric subject has become accessible to far more people, facilitating numerous innovative hacks and art works. The result has been a virtuous circle, wherein public art works help motivate further scientific inquiry, in turn inspiring ever more creative experimentation.
This talk will review some of the works that have been produced, present educational materials for how to get started, and speculate on research trends and future prospects.
Biography
Gene Kogan is an artist and a programmer who is interested in generative systems, artificial intelligence, and software for creativity and self-expression. He is a collaborator within numerous open-source software projects, and leads workshops and demonstrations on topics at the intersection of code, art, and technology activism.
Gene initiated and contributes to ml4a, a free book about machine learning for artists, activists, and citizen scientists. He regularly publishes video lectures, writings, and tutorials to facilitate a greater public understanding of the topic.
www.genekogan.com / ml4a.github.io / @genekogan
Design for Human Experience & Expression at the HCT Laboratory
Dr. Sid Fels, Electrical & Computer Engineering Department, University of British Columbia (UBC)
4pm Wednesday 24 May 2017
Goldsmiths Cinema, Richard Hoggart Building
Research at the Human Communications Technology (HCT) laboratory (hct.ece.ubc.ca) has been targeting design for human experience and expression.
In this presentation, I’ll start with a discussion of gesture-to-speech and voice explorations, including Glove-TalkII and the Digital Ventriloquized Actors (DIVAs). I’ll connect these to other explorations of the new interfaces for musical and visual expression that we have created. I will discuss our work on modelling human anatomy (www.parametrichuman.org) and function, such as speaking, chewing, swallowing and breathing (www.magic.ubc.ca) with biomechanical models using our toolkit Artisynth (www.artisynth.org).
This work is motivated by our quest to make a new vocal instrument that can be controlled by gesture. I’ll discuss some of the activities we have been doing on some new 3D displays: pCubee and Spheree. Finally, these investigations will be used to support a theory of designing for intimacy and discussions of perspectives on human computer interaction for new experiences and forms of expression.
How Do We Interact in Immersive Virtual Reality?
Speaker: Prof. Anthony Steed (UCL)
Date: 4pm Wednesday 25 January 2017
Venue: Ben Pimlott Lecture Theatre
Abstract: The recent publicity around virtual reality has been driven by the novelty of head-mounted displays. Google, Facebook, HTC, Microsoft and Sony have all launched related displays. The publicity focuses on how the participant can be immersed within computer-generated sensory stimuli. However, the basic form of such head-mounted interfaces hasn’t changed for a couple of decades. Today’s consumer systems are certainly much more powerful, but in the rush to get content out, developers and engineers have been guilty of overlooking some basic science in the field.
In this talk I will discuss, from an engineering and design standpoint, how the ideas of embodied cognition can shape virtual reality experiences. Within virtual reality, you can be embodied in a virtual character, and this can change how you interact with the world. I will focus on a thread of experimental work in our laboratory that demonstrates how self-representation impacts the way one interacts with the world, and with other people. The experiments will span body ownership illusions, the impact of self-representation on cognitive ability, and the use of a self-avatar in tele-collaboration. I will also briefly explore the technical challenges facing virtual reality in the next 10 years.
Short bio: Professor Anthony Steed is Head of the Virtual Environments and Computer Graphics (VECG) group at University College London. Prof Steed's research interests extend from virtual reality systems, through to mobile mixed-reality systems, and from system development through to measures of user response to virtual content. He has published over 200 papers in the area, and is the main author of the book “Networked Graphics: Building Networked Graphics and Networked Games”. He was the recipient of the IEEE VGTC’s 2016 Virtual Reality Technical Achievement Award.
https://wp.cs.ucl.ac.uk/anthonysteed
The artist emerges: the psychology of artistic production and appreciation
Speaker: Dr Rebecca Chamberlain, Goldsmiths
4pm Wednesday 1st February 2017
Lecture Theatre, Ben Pimlott Building
Of the many skills that humans evolved to design their environments, art-making is among the oldest, far predating evidence of written communication. However, we are still in the early stages of understanding how and why individuals create, and respond so powerfully to, works of art.
In this talk I will explore the psychological mechanisms by which expertise in artistic production and appreciation emerge, evaluating the role of practice and talent in the development of these abilities, drawing on my own and others’ research. I will also look at the interplay between artistic production and appreciation in the relatively new field of embodied aesthetics. Finally, I will address the putative therapeutic value of artistic production and appreciation, through its potential to promote mindfulness and emotional expression.
Biography: Dr Rebecca Chamberlain completed her PhD in psychology at UCL in 2013, followed by a post-doctoral research fellowship in Professor Johan Wagemans’ Gestalt Perception group at KU Leuven in Belgium. In 2017 she joined Goldsmiths as a lecturer in the Department of Psychology. Her research aims to understand artistic expertise and aesthetic perception from a psychological and a neuroscientific point of view.
What is Virtual Reality and How Does it Work for Social Psychologists?
Dr. Sylvia Xueni Pan
4pm Wednesday 8 February 2017
Ben Pimlott Lecture Theatre
Virtual Reality may be new for many people, and it is certainly going to shape the future of many things, including gaming, training, and possibly education. But how does VR work for social psychologists?
As early as 2002, Blascovich et al. proposed the use of (immersive) VR "as a methodological tool for social psychology" because it helps improve the trade-off between ecological validity and experimental control.
In this talk, Dr Sylvia Pan will go through several examples from her own work over the past 10 years in which VR was used to answer research questions in social interaction, and will point out the benefits and pitfalls.
Biography: Sylvia Xueni Pan is a lecturer in Computing at Goldsmiths, University of London. She received a BSc in Computer Science from Beihang University, Beijing, China in 2004, an MSc in Vision, Imaging and Virtual Environments from University College London (UCL), UK in 2005, and a PhD in Virtual Reality from UCL in 2009. Before joining Goldsmiths in 2015, she worked as a research associate in Computer Science, UCL, and in the Institute of Cognitive Neuroscience (ICN), UCL, where she remains an honorary research fellow.
Over the past 10 years she has developed a unique interdisciplinary research profile, with journal and conference publications in both VR technology and social neuroscience. Her work has been featured multiple times in the media, including on BBC Horizon and in New Scientist magazine.
Brain plasticity in amputees
Speaker: Tamar Makin, University of Oxford
Time: 4pm Wednesday 15 February 2017
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths
Abstract: Following arm amputation, brain areas that previously operated the hand become “freed up” to work for other body parts. This process of brain plasticity is widely held to result in the experience of phantom limb pain (pain that is perceived to arise from the missing hand), and is therefore considered to be maladaptive. I will present evidence to challenge the proposed link between brain plasticity and phantom pain, and instead demonstrate that representation of the missing hand persists decades after amputation. I will show that the cortical resources of the missing hand can be used by a multitude of body parts, and even artificial limbs. Based on this evidence, I suggest that plasticity in amputees is experience-dependent, and is not inherently maladaptive.
Biography: I graduated from the Brain and Behavioural Sciences programme at the Hebrew University of Jerusalem in 2009. I was then awarded several career development fellowships to establish my research programme on brain plasticity in amputees at FMRIB, the neuroimaging centre of the University of Oxford, first as a Research Fellow and later as a Principal Investigator. In 2016 I joined the faculty of UCL to continue this work.
The Invention of Consciousness
SPEAKER: NICHOLAS HUMPHREY, EMERITUS PROFESSOR LSE
DATE: 4pm Wednesday 15 March 2017
VENUE: Lecture Theatre, Ben Pimlott Building
Abstract: In English we use the word "invention" in two ways. First, to mean a new device or process developed by experimentation, and designed to fulfil a practical goal. Second, to mean a mental fabrication, especially a falsehood, developed by art, and designed to please or persuade. In this talk I'll argue that human consciousness is an invention in both respects. First, it is a cognitive faculty, evolved by natural selection, designed to help us make sense of ourselves and our surroundings. But then, second, it is a fantasy, conjured up by the brain, designed to change the value we place on our own existence.
Brief Biography: Nicholas Humphrey is a theoretical psychologist who has migrated from neurophysiology, through animal behaviour to evolutionary psychology and the study of consciousness. He did research on mountain gorillas with Dian Fossey in Rwanda, he was the first to demonstrate the existence of “blindsight” after brain damage in monkeys, he proposed the celebrated theory of the “social function of intellect”, and he has recently explained the evolutionary basis of the placebo effect. He has held positions at the universities of Oxford and Cambridge, and is now emeritus professor at the LSE. Honours include the Martin Luther King Memorial Prize, the Pufendorf medal and the International Mind and Brain Prize.
Gestures in Contemporary Music Performance
When: 4pm-5pm Wednesday 22 March 2017
Where: Lecture Theatre, Ben Pimlott Building
Speaker: Giusy Caruso (Institute for Psychoacoustics & Electronic Music, Ghent University)
Giusy Caruso (Ghent University) shows how motion capture technologies help performers to frame their interpretative outlooks.
Contemporary music often challenges traditions of performance, demanding physical movements that go beyond existing codes and practices. In this talk, Giusy Caruso (Institute for Psychoacoustics & Electronic Music, Ghent University) shows how motion capture technologies enable performers to better understand the relationship between goals, actions, and sounds.
She describes a general framework for understanding music performance founded on the notion of performance spaces and frames, and on recent insights in embodied interactions with music. She then illustrates how this framework applies to contemporary piano performance based on Karnatic music tradition.
BIOGRAPHY
Born in Cosenza (Italy) and living in Brussels, Giusy Caruso is a professional concert pianist and artist researcher who graduated cum laude in Piano and in Philosophy. She is interested in musicology, theatre, dance, improvisation, yoga, Eastern cultures and music technology. Her performance projects - based on a repertory ranging from classical to contemporary styles - often interact with visual arts, theatre and dance.
Caruso continues her concert activity and lectures throughout Europe, Asia and America while working at present as artist researcher at IPEM (Institute for Psychoacoustics and Electronic Music) Department of Musicology, Art, Music Performance and Theatre Studies at the University of Ghent, in affiliation with the KASK School of Arts of Ghent, Royal Conservatory.
Hearts And Minds: The Interrogations Project.
A VR Project on US military interrogation practices and human rights abuse
Speaker: Roderick Coover, Temple University, Philadelphia
When: 4pm-5pm Wednesday 29 March 2017
Where: Cinema, Richard Hoggart Building, Goldsmiths
This talk presents Hearts And Minds: The Interrogations Project, an immersive and interactive work for CAVEs and other VR environments that foregrounds veterans' testimonies of US military interrogation practices and human rights abuses during the Iraq War. The work tells the stories of often young and ill-trained soldiers who never entered the military expecting to become torturers, and who find themselves struggling to reconcile themselves to the activities they were asked to perform.
Created by an international team of filmmakers, artists, scientists and researchers, the work employs interactive arts to give space to the veterans' moving stories. Users navigate through seemingly ordinary American domestic spaces: a boy’s bedroom, a family room, a backyard, a kitchen. Triggering individual objects, such as a toy truck or wire cutters, users arrive in surreal landscapes of memory where they encounter these stories. Hearts and Minds is formatted for CAVEs, 360 cinemas, tablets, iPads and Oculus Rift.
Premiered at the Run Run Shaw Gallery 360 in Hong Kong for ISEA and featured at international arts and film festivals, it won the Electronic Literature Organization's 2016 award for best work and has been selected for presentation at the Nobel Peace Prize Forum in September 2017. Hearts and Minds was made at the Electronic Visualization Laboratory of the University of Illinois at Chicago and the Digital Scholarship Lab at Temple University, in collaboration with Scott Rettberg, Daria Tsoupikova, Arthur Nishimoto, John Tsukayama and Jeffrey Murer.
Bio: Roderick Coover is Founding Director of the PHD-MFA Program in Documentary Arts and Visual Research and Professor of Film and Media Arts at Temple University in Philadelphia.
He makes films, interactive cinema, installations and webworks. A pioneer in interactive documentary arts and poetics, he distributes his works through Video Data Bank, DER and Eastgate Systems, and has been featured at wide-ranging international museums, festivals and public venues such as ISEA, SIGGRAPH, Arts Santa Monica (Barcelona) and the Bibliothèque nationale de France (Paris).
He is also the author or editor of works in print including the book Switching Codes: Thinking Through Digital Technology In The Humanities And Arts (University of Chicago Press). Coover is the recipient of Whiting, Mellon, LEF, and SPIRE awards, among others. More at www.roderickcoover.com
How does the development of inhibitory control interact with the development of conceptual understanding?
Wednesday 19 October 2016
Andrew Simpson (University of Essex)
Ben Pimlott Lecture Theatre, Goldsmiths, University of London
ABSTRACT: Inhibitory Control is a component of Executive Function. It is the capacity to avoid an incorrect response or irrelevant information in order to meet a current goal. Children’s performance on a wide range of ‘inhibitory tasks’ improves dramatically between three and five years.
We propose that these tasks can be divided into two groups, and have inhibitory demands that are created in different ways. In ‘Response-given’ tasks, the task’s structure contrives to automatically trigger a specific incorrect response which must then be inhibited. In ‘Open’ tasks, the child’s own reasoning, based on their conceptual understanding of the task, leads initially to the to-be-inhibited response. This means that the conceptual understanding a child brings to a task determines whether it has inhibitory demands. Conceptualize the task one way, a to-be-inhibited response is generated, and inhibition is required; conceptualize it another way, no to-be-inhibited response is generated, and no inhibition is required.
We suggest that this insight has significant implications for our theories of cognitive development. The view that weak Inhibitory Control simply acts as a brake on early conceptual development is far too limited. Instead, Inhibitory Control and conceptual understanding interact, in complex ways, across the course of infancy and childhood.
BIOGRAPHY: Andrew’s original undergraduate degree was in genetics from the University of Sheffield, and he obtained a PhD in molecular biology from Queens' College, Cambridge in 1990. He then worked in London for several years at the Ministry of Agriculture, Fisheries and Food (now DEFRA) as a science advisor and administrator. At the same time, Andrew studied for a BSc in Psychology at Birkbeck College, London.
Following completion of this degree, he worked as a Research Fellow at the University of Birmingham. While at Birmingham, Andrew started a part-time PhD in cognitive development, which he completed in 2005. He later also lectured part-time at London Metropolitan University, before joining the academic staff of the University of Essex in 2008.
What Can Deep Neural Networks Learn From Music?
Date: 4.30-5.30pm Monday 7 November 2016 (NB: different day and time)
Speaker: Douglas Eck, Research Scientist, Google Brain
Place: Ben Pimlott Lecture Theatre
Abstract: I'll discuss the Magenta Project, an effort to generate music, video, images and text using machine intelligence. Magenta poses the question, “Can machines make music and art? If so, how? If not, why not?” (Partial answer: machines won't replace artists or musicians anytime soon, thankfully!)
The goal of Magenta is to produce open-source tools and models that help creative people be even more creative. I will give an overview of Magenta with focus on closing the loop between musicians and code. I'll discuss recent progress in audio and music score generation, and will focus on the challenge of improving machine learning generative models based on user and artist feedback.
Bio: Douglas Eck is a Research Scientist at Google working in the areas of music and machine learning. Currently he is leading the Magenta Project, a Google Brain effort to generate music, video, images and text using deep learning and reinforcement learning. One of the primary goals of Magenta is to better understand how machine learning algorithms can learn to produce more compelling media based on feedback from artists, musicians and consumers. Doug led the Search, Recommendations and Discovery team for Play Music from the product's inception as Music Beta by Google through its launch as a subscription service. Before joining Google in 2010, Doug was an Associate Professor in Computer Science at University of Montreal (MILA lab) where he worked on rhythm and meter perception, machine learning models of music performance, and automatic annotation of large audio data sets.
research.google.com/pubs/author39086.html
Linguistic and perceptual colour categories
4pm-5pm Wednesday 9 November 2016
Speaker: Christoph Witzel, Justus-Liebig-Universität Gießen, Germany
Ben Pimlott Lecture Theatre, Goldsmiths, University of London
ABSTRACT: Colour categorisation has been the prime example used to investigate the relationship between perception and language. By now, the perspective on this theme has developed from a simple contrast between nature and nurture towards a focus on the complex interplay between perception, culture, and ecology. However, it is still an open question whether there is a perceptual counterpart of linguistic colour categories. In an extensive series of studies, we investigated different ways in which linguistic colour categories may be related to colour perception to give an answer to this question.
BIOGRAPHY: Christoph Witzel obtained a university degree in psychology and another one in political science and cultural anthropology from the University of Heidelberg in Germany. He did his PhD in Experimental Psychology at Gießen University in Germany, then a postdoc at the University of Sussex, Brighton, UK and at the Université Paris Descartes, Paris, France. Now, he is back for a postdoc at Gießen University. Christoph’s research focuses on colour vision and extends to other topics, such as Synaesthesia and Sensory Substitution.
Cultural Computing: Looking for Japan
Wednesday 16 November 2016
Speaker: Naoko Tosa
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths
Naoko Tosa is a pioneer in the area of media art and an internationally renowned Japanese media artist. Her artworks became well known worldwide in the late 1980s, after one of her early artworks was selected for the “New Video, Japan” exhibition at MoMA, New York.
In this talk - part of the Whitehead Lecture Series - she demonstrates the role of information technology in enabling new understandings of our multicultural world, and discusses cross-cultural issues from the viewpoint of an artist who is herself deeply immersed in both eastern and western cultures. She then proposes a new vision founded upon the relationships between diverse cultures.
Biography: Naoko Tosa's artworks have been exhibited at the Museum of Modern Art (New York), the Metropolitan Museum of Art (New York) and at many other locations worldwide. She held a solo exhibition at the Japan Creative Center (Singapore) in 2011. Her artworks have recently focused on visualising the unconscious.
She was appointed Japan Cultural Envoy 2016 by the Commissioner of the Agency for Cultural Affairs, and is expected to promote Japanese culture to the world by exhibiting her artworks and through her networking activities with people of culture overseas.
She has won numerous awards, including awards from ARS Electronica, UNESCO's Nabi Digital Storytelling Competition of Intangible Heritage, Yeosu Marine Expo (Korea) and Good Design Award Japan.
In 2012, she exhibited a digital artwork called 'Four God Flag', which symbolises four traditional Asian gods connecting Asia. In 2014 she received the Good Design Award Japan for her projection mapping using only actual images. In 2015 she carried out a projection mapping celebrating the 400th anniversary of RIMPA, which attracted more than 16,000 attendees.
She is currently a professor at Kyoto University's Center for the Promotion of Excellence in Higher Education. After receiving a PhD in Art & Technology research from the University of Tokyo, she became a fellow at the Center for Advanced Visual Studies at MIT.
Her new book 'Cross-Cultural Computing: An Artist's Journey' is available from Springer UK.
Sheldon Brown (University of California San Diego)
4pm-5pm, Wednesday 23 November 2016
Lecture Theatre, Ben Pimlott Building, Goldsmiths
Art is often considered to be an exemplary outcome of imaginative processes, and it is also thought of as a means of engaging the imagination of audiences. But what is the phenomenon that the word "imagination" refers to?
In this lecture, Sheldon Brown will show a series of works that aim to directly engage components of cognition that might be aspects of what is generally considered to be imagination. These artworks are part of how Sheldon is attempting to understand this phenomenon at the Arthur C. Clarke Center for Human Imagination at UC San Diego, through collaborations with neuroscience, cognitive science, computational science, engineering, medicine, literature, and the arts.
Bio: Sheldon Brown combines computer science research with vanguard cultural production. He holds the John D. and Catherine T. MacArthur Foundation Endowed Chair of Digital Media and Learning at UCSD, where he is a Professor of Visual Arts, and he is the Director of the Arthur C. Clarke Center for Human Imagination and a co-founder of the California Institute for Telecommunications and Information Technology (Calit2).
His interactive artworks have been exhibited at the Museum of Contemporary Art in Shanghai, The Exploratorium in San Francisco, Ars Electronica in Linz Austria, The Kitchen in NYC, Zacheta Gallery in Warsaw, Centro Nacional in Mexico City, Oi Futuro in Rio de Janeiro, Museum of Contemporary Art San Diego, and others.
He has also been featured at leading-edge techno-culture conferences such as Supercomputing, SIGGRAPH, TEDx and GDC, has been commissioned for public artworks in Seattle, San Francisco, San Diego and Mexico City, and has received grants from the NSF, AT&T New Experiments in Art and Technology, the NEA, IBM, Intel, Sun Microsystems, SEGA SAMMY, Sony, Vicon and others.
Recent projects include:
- The Scalable City, an interactive game installation, 3D movie and other artifacts, shown at venues including the Shanghai MOCA, The Exploratorium, the National Academy of Sciences, Ars Electronica, and many others
- StudioLab, 2003, an installation at Image/Architecture, Florence, Italy
- Smoke and Mirrors, 2000-2002, an installation at the Fleet Science Museum and a touring environment
- Istoria, a series of sculptures
Attention and Cross-Cultural differences: Using a computational model to unveil some of the processes involved
4pm-5pm Wednesday 30 November 2016
Speaker: Eirini Mavritsaki, Birmingham City University
Ben Pimlott Lecture Theatre, Goldsmiths, University of London
ABSTRACT: Although the majority of research in visual attention is based on European American cultures, research has shown differences in attending to the visual field between members of collectivist East Asian cultures and individualist European American cultures. Research into picture perception has shown that East Asians are more likely than European Americans to attend to the perceptual field as a whole and to perceive relationships between a salient object and its background. In our lab, we used a computational model and behavioural studies to investigate these observed differences further. The spiking Search over Time and Space (sSoTS) model simulates the traditional visual search task. By reducing saliency levels in sSoTS, we could simulate the effects of similar manipulations in traditional visual search experiments. To further investigate the relationship between targets and distractors in the two cultural groups, and the differences between them, we ran traditional visual search experiments in both bottom-up and top-down search and were able to replicate the effects found in the literature. In this talk I will present the results of this work and discuss the role of saliency.
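As a rough illustration of why lowering target saliency slows search, here is a toy noisy-accumulator race in Python. It is emphatically not sSoTS, which is a spiking neural network model; every parameter below is invented, and the sketch only illustrates the logic of the saliency manipulation described above.

```python
import numpy as np

# Toy stand-in for a saliency manipulation in simulated visual search:
# the target and the distractors race to an evidence threshold, and
# lowering the target's saliency (drift rate) slows detection.
# All parameter values are invented for illustration.

def search_time(target_salience, n_distractors=7, noise=0.3, threshold=10.0,
                rng=np.random.default_rng(1)):
    evidence = np.zeros(n_distractors + 1)           # item 0 is the target
    drift = np.full(n_distractors + 1, 0.5)          # distractor saliency
    drift[0] = target_salience
    t = 0
    while evidence.max() < threshold:                # race to threshold
        evidence += drift + noise * rng.standard_normal(evidence.size)
        t += 1
    return t, evidence.argmax() == 0                 # search time; did the target win?

for s in (1.5, 1.0, 0.6):                            # decreasing target saliency
    t, correct = search_time(s)
    print(f"salience {s}: {t} steps, target found: {correct}")
```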
BIOGRAPHY: Dr Eirini Mavritsaki is a Senior Lecturer and Director of the Centre for Applied Psychological Research at Birmingham City University. Eirini did her PhD at the University of Sheffield and her postdoctoral research at the University of Birmingham and the University of Oxford, in the areas of visual attention and learning. Her work in visual attention has expanded to developing a model-based analysis of fMRI data and investigating the underlying processes involved in visual neglect, visual extinction and disorders such as ADHD, Alzheimer's and Parkinson's disease, and she has further extended her research to cross-cultural differences in visual attention using behavioural and computational modelling studies. Eirini's work was awarded the Cognitive Psychology Prize by the British Psychological Society in 2012.
Composer, Performer, Listener
4pm-5pm Thursday 8 December 2016
Speaker: Jason A. Freeman
Venue: Goldsmiths Cinema, Richard Hoggart Building (RHB 110)
Abstract: Jason Freeman develops novel and innovative compositional designs and technologies to make contemporary art music accessible to a large and diverse audience. His music responds to the challenge of arts engagement in contemporary society, inviting performers and audiences to collaborate with him and with each other to create music. In each project, he fundamentally reimagines the roles of audiences and performers, combining new interfaces with traditional instruments, participation with performance, computer code with music notation, and transformative paradigms with traditional ensembles and venues. In this talk, Freeman will explore key compositional and technological ideas that enable these novel relationships between composer, performer, and listener in his work, including real-time music notation, live coding, laptop ensembles, mobile technology, and open-form scores.
Bio: Jason Freeman is a Professor of Music at Georgia Tech. His artistic practice and scholarly research focus on using technology to engage diverse audiences in collaborative, experimental, and accessible musical experiences. He also develops educational interventions in K-12, university, and MOOC environments that broaden and increase engagement in STEM disciplines through authentic integrations of music and computing. His music has been performed at Carnegie Hall, exhibited at ACM SIGGRAPH, published by Universal Edition, broadcast on public radio’s Performance Today, and commissioned through support from the National Endowment for the Arts. Freeman’s wide-ranging work has attracted support from sources such as the National Science Foundation, Google, and the Aaron Copland Fund for Music. He has published his research in leading conferences and journals such as Computer Music Journal, Organised Sound, NIME, and ACM SIGCSE. Freeman received his B.A. in music from Yale University and his M.A. and D.M.A. in composition from Columbia University.
Organised and co-hosted by Dr Rebecca Fiebrink, Department of Computing, Goldsmiths, University of London.
Anne Verroust-Blondet on "Sketch-based 3D model retrieval"
2:00 - 3:00pm 24 May 2016
RHB Cinema, ground floor, Richard Hoggart Building
"Sketch-based 3D model retrieval using visual part shape description and view selection" by Dr. Anne Verroust-Blondet from INRIA, Paris, France (work done in collaboration with Zahraa Yasseen and Ahmad Nasri)
Abstract: Hand drawings are the imprints of shapes in the human mind. How a human expresses a shape is a consequence of how he or she visualizes it. A query-by-sketch 3D object retrieval application is closely tied to this concept in two respects. First, describing sketches must involve the elements of a figure that matter most to a human. Second, the representative 2D projections of the target 3D objects must be limited to "the canonical views" from a human cognition perspective. We advocate these two rules by presenting a new approach to sketch-based 3D object retrieval that describes a 2D shape by the visually protruding parts of its silhouette. Furthermore, we present a list of candidate 2D projections that represent the canonical views of a 3D object.
The general rule is that humans would visually avoid part occlusion and symmetry. We quantify the extent of part occlusion of the projected silhouettes of 3D objects by skeletal length computations. Sorting the projected views in the decreasing order of skeletal lengths gives access to a subset of best representative views. We experimentally show how views that cause misinterpretation and mismatching can be detected according to the part occlusion criteria. We also propose criteria for locating side, off axis, or asymmetric views.
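For readers who want a concrete handle on the view-selection criterion, here is a minimal Python sketch. It assumes binary silhouette images are already available and uses scikit-image's skeletonize; the random arrays merely stand in for real renderings, and the function names are invented for illustration rather than taken from the authors' implementation.

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy sketch of the view-selection idea described above: skeletonize each
# binary silhouette of a 3D object and rank candidate views by total
# skeletal length (longer skeletons suggest more visible, less occluded
# parts). The random "silhouettes" stand in for real renderings.

def skeletal_length(silhouette: np.ndarray) -> int:
    """Number of pixels in the morphological skeleton of a binary silhouette."""
    return int(skeletonize(silhouette.astype(bool)).sum())

def rank_views(silhouettes: dict) -> list:
    """Sort candidate views by decreasing skeletal length."""
    return sorted(silhouettes, key=lambda v: skeletal_length(silhouettes[v]),
                  reverse=True)

# Stand-in silhouettes (in practice: orthographic projections of the model).
rng = np.random.default_rng(0)
views = {f"view_{i}": rng.random((64, 64)) > 0.5 for i in range(6)}
print(rank_views(views))  # best representative (least occluded) views first
```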
Short Bio: Anne Verroust-Blondet is a senior research scientist in the RITS research group of Inria Paris, France. She obtained her "Thèse de 3e cycle" and her "Thèse d'Etat" in Computer Science (respectively in database theory and in computer graphics) from the University of Paris-Sud. Her current research interests include 2D and 3D visual information retrieval, object recognition, 2D and 3D geometric modeling and perception problems in the context of intelligent transportation systems.
References: https://who.rocq.inria.fr/Anne.Verroust/
Sylvain Calinon on "Human-robot interaction"
2:00pm - 3:00pm, 14 June 2016
RHB Cinema, ground floor, Richard Hoggart Building
Dr. Calinon from IDIAP, Switzerland on "Robot skills acquisition by Human-robot interaction"
Abstract: In this presentation, I will discuss the design of user-friendly interfaces to transfer natural movements and skills to robots. I will show that human-centric robot applications require a tight integration of learning and control, and that this connection can be facilitated by the use of probabilistic representations of skills.
In human-robot collaboration, such representation can take various forms. In particular, movements must be enriched with perception, force and impedance information to anticipate the users' behaviours and generate safe and natural gestures. The developed models serve several purposes (recognition, prediction, online synthesis), and are shared by different learning strategies (imitation, emulation, incremental refinement or exploration).
The aim is to facilitate the transfer of skills from end-users to robots, or in-between robots, by exploiting multiple sources of sensory information and by developing intuitive teaching interfaces.
The proposed approach will be illustrated through a wide range of robotic applications, with robots that are either close to us (robots for collaborative artistic creation, robots for dressing assistance), parts of us (prosthetic hands), or far away from us (robots with bimanual skills in deep water).
Bio: Dr Sylvain Calinon is a permanent researcher at the Idiap Research Institute (http://idiap.ch), heading the Robot Learning & Interaction Group. He is also a Lecturer at the Ecole Polytechnique Federale de Lausanne (http://epfl.ch) and an External Collaborator at the Department of Advanced Robotics, Italian Institute of Technology (IIT).
From 2009 to 2014, he was a Team Leader at IIT. From 2007 to 2009, he was a Postdoc at EPFL. He holds a PhD from EPFL (2007), which received the Robotdalen, ABB and EPFL-Press awards. He is the author of about 80 publications and a book in the field of robot learning by imitation and human-robot interaction, with recognition including the Best Paper Award at Ro-Man 2007 and Best Paper Award Finalist at ICIRA 2015, IROS 2013 and Humanoids 2009.
He currently serves on the Organizing Committee of IROS 2016 and as Associate Editor of IEEE Robotics and Automation Letters, Springer Intelligent Service Robotics, Frontiers in Robotics and AI, and the International Journal of Advanced Robotic Systems.
Personal webpage: http://calinon.ch
Predicting the unpredictable? Anticipation in spontaneous social interactions
4pm Wednesday 13 January 2016
Speaker: Dr Lilla Magyari, Department of General Psychology, Pázmány Péter Catholic University, Budapest, Hungary
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths, University of London
Abstract: This talk will focus on some of the cognitive processes underlying our ability to coordinate our actions in spontaneous social interactions. In particular, I will present EEG and behavioural studies about spontaneous verbal interactions, such as everyday natural conversations.
I will also present some preliminary experimental data about non-verbal interactions, such as movement improvisation (i.e. dance improvisation). The key focus of my talk will be whether or not and how participants in spontaneous social interactions anticipate others' actions and the timing of these actions.
Bio: Lilla Magyari studied psychology and Hungarian grammar and literature at ELTE University in Budapest, Hungary, and cognitive neuroscience in Nijmegen, the Netherlands.
She obtained her Ph.D. in the Language & Cognition Department of the Max Planck Institute for Psycholinguistics. Her dissertation explored the cognitive mechanisms involved in the timing of turn-taking in everyday conversations. Complementing her Ph.D. studies, she also worked at the Neuroimaging Center of the Donders Institute for Brain, Cognition and Behaviour, focusing on the implementation of methods for EEG/MEG data-analysis within the FieldTrip software-package.
She also studied theatre-directing at the Amsterdam School of the Arts for a year. Currently, she lives in Budapest where she works as an assistant professor at the Department of General Psychology of Pázmány Péter Catholic University. Her research investigates linguistic and cultural differences in turn-taking of natural conversation, empirical aesthetics and coordination in movement improvisation.
Considering Movement in the Design of Digital Musical Instruments
4pm Wednesday 20 January 2016
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths, University of London
Nicholas Ward (Digital Media & Arts Research Centre, University of Limerick) explores how designers consider human movement in the creation of new digital musical instruments.
This talk will explore how we consider human movement in the design of Digital Musical Instruments. Starting from a consideration of effort in performance, several existing approaches to the description of movement in the field of DMI design will be discussed. Following this I will consider how approaches from the fields of Tangible Interaction and Product Design, which attempt to privilege Human Movement, might inform DMI design. Two examples of work where a consideration of movement drove the design process will be presented.
Biography
Nicholas Ward is a lecturer at DMARC (the Digital Media and Arts Research Centre, University of Limerick). He holds a PhD from the Sonic Arts Research Centre at Queen’s University, Belfast. His research explores physicality and effort in the context of digital musical instrument performance and game design. Specifically he is interested in movement quality, systems for movement description, and their utility within a design context.
Dynamic Facial Processing and Capture in Academia and Industry
Wednesday 27 January 2016
Speaker: Dr. Darren Cosker
Director of CAMERA; Reader (Associate Professor), Department of Computer Science, University of Bath; Royal Society Industry Fellow, Double Negative Visual Effects
The visual effects and entertainment industries are now a fundamental part of the computer graphics and vision landscapes - as well as impacting across society in general. One of the issues in this area is the creation of realistic characters - including facial animation, creating assets for production, and improving work-flow. Advances in computer graphics, vision and rendering have underpinned much of the success of these industries, built on top of academic advances. However, there are still many unsolved problems - some obvious and some less so.
Biography
Dr. Darren Cosker is a Royal Society Industrial Research Fellow at Double Negative Visual Effects, London, and a Reader/Associate Prof. at the University of Bath. He is the Director of the Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), a £10 million initiative co-funded by EPSRC/AHRC and industry. Previously, Dr. Cosker held a Royal Academy of Engineering Research Fellowship (2007-2012) at the University of Bath. He is interested in the convergence of computer vision, graphics and psychology, with applications to creative industries, sport and health.
My Path to Spatial Interaction & Display: Navigation Beyond the Screen
4pm Wednesday 10 February 2016
Speaker: Dale Herigstad, Advanced Interaction Consultant. Co-founder of SeeSpace.
Venue: Curzon Goldsmiths, Richard Hoggart Building
In my world, motion graphics and computer graphics have always been about objects in a 3D space. And now, as the world moves "beyond the screen" with VR and AR, interaction is more complex and requires greater levels of simplicity. I will show past and current experiments that explore evolving approaches to interaction in spatial contexts, and seek to demonstrate logical progressions over time.
Biography
Now living in London, Dale Herigstad spent over 30 years in Hollywood as a Creative Director for motion graphics in TV and film. His mission has been to apply the principles of rich media design to interactive experiences. He began designing interfaces for television more than 20 years ago, and was a founder of Schematic, a pioneering design firm grounded in innovation.
Dale has developed a unique spatial approach to designing navigation systems for new screen contexts. He was a part of the research team that conceptualised digital experiences in the film “Minority Report,” and has led the development of gestural navigation for screens at a distance. And as screens begin to disappear, Dale is focusing on navigation and display of information and graphics that are “off screen”. Virtual space and place are new frontiers of design.
He has an MFA from California Institute of the Arts, where in 1981 he taught the first course in Motion Graphics to be offered to designers in the United States. He served on the founding advisory board of the digital content direction at the American Film Institute in Los Angeles, and also was an active participant in the development of advanced prototypes for Enhanced TV at the American Film Institute for many years. Dale is a member of the Academy of TV Arts & Sciences, and has 4 Emmy awards.
More recently, Dale co-founded SeeSpace, whose first product, InAiR, places dynamic IP content in the space in front of the Television, perhaps the first Augmented Television experience. And Dale is now researching and developing the design methodology for navigating virtual information for AR and VR.
The critical self and awareness in people with dementia
4pm Wednesday 2 March 2016
Speaker: Robin Morris, Professor of Neuropsychology, Institute of Psychiatry, Psychology and Neuroscience, London
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths, University of London
Abstract
A prominent aspect of having dementia is the loss of awareness of function. The lecture relates this loss to disturbances of self-knowledge and the neurocognitive systems that support awareness. It explores the notion of the formation of the critical self and how this provides a preserved sense of self in people with dementia but at the cost of loss of awareness. It also considers how awareness may continue to operate paradoxically at a pre-conscious level and how this influences the experience of people with dementia.
Short bio
Robin Morris is Professor of Neuropsychology at the Institute of Psychiatry, Psychology and Neuroscience as well as Head of the Clinical Neuropsychology Department at King’s College Hospital in London. He has worked at the Institute of Psychiatry for 27 years, combining research into patients with acquired brain disorder with working as a clinician. His research interests include the neuropsychology of awareness, executive functioning and memory. He is a recipient of the British Psychological Society Division of Neuropsychology award for outstanding contribution to neuropsychology internationally.
Sketched Visual Narratives for Image and Video Search
Wednesday 23 March 2016
Speaker: Dr John Collomosse, Senior Lecturer in the Centre for Vision Speech and Signal Processing (CVSSP) at the University of Surrey.
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths, University of London
Abstract
The internet is transforming into a visual medium; over 80% of internet traffic is forecast to be visual content by 2018, and most of this content will be consumed on mobile devices featuring a touch-screen as their primary interface. Gestural interaction, such as sketch, presents an intuitive way to interact with these devices. Imagine a Google image search in which you specify your query by sketching the desired image with your finger, rather than (or in addition to) describing it with text. Sketch offers an orthogonal perspective on visual search - enabling concise specification of appearance (via sketch) in addition to semantics (via text).
In this talk I will present a summary of my group's work on the use of free-hand sketches for the visual search and manipulation of images and video. I will begin by describing a scalable system for sketch-based search of multi-million image databases, based upon our state-of-the-art Gradient Field HOG (GF-HOG) algorithm. Imagine a product catalogue in which you sketch, say, an engineering part, rather than using text or serial numbers to find it. I will then describe how scalable search of video can be similarly achieved, through sketched visual narratives that depict not only objects but also their motion (dynamics) as a constraint to find relevant video clips. I will show that such visual narratives are not only useful for search, but can also be used to manipulate video through specification of a sketched storyboard that drives video generation - for example, the design of novel choreography through a series of sketched poses.
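For a flavour of the descriptor family GF-HOG belongs to, here is a toy HOG-style sketch that bins edge-map gradient orientations into per-cell histograms. It is illustrative only: the published GF-HOG first extrapolates a dense gradient field from the sparse sketch strokes, a field-filling step omitted here.

```python
import numpy as np

def hog_like_descriptor(edge_map, cell=8, bins=9):
    # Per-cell orientation histograms, weighted by gradient magnitude.
    gy, gx = np.gradient(edge_map.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % np.pi          # unsigned orientations
    h, w = edge_map.shape
    cells = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            hist, _ = np.histogram(ori[y:y+cell, x:x+cell],
                                   bins=bins, range=(0, np.pi),
                                   weights=mag[y:y+cell, x:x+cell])
            cells.append(hist / (np.linalg.norm(hist) + 1e-8))
    return np.concatenate(cells)
```

A query sketch and database edge maps can then be compared by, for example, Euclidean distance between descriptors; the real system indexes such descriptors in a bag-of-visual-words structure for scalability.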
The work presented in this talk was supported by the EPSRC and AHRC between 2012 and 2015.
Bio
Dr John Collomosse is a Senior Lecturer in the Centre for Vision Speech and Signal Processing (CVSSP) at the University of Surrey. John joined CVSSP in 2009, following four years lecturing at the University of Bath, where he also completed his PhD in Computer Vision and Graphics (2004). John has spent periods of time at IBM UK Labs, Vodafone R&D Munich, and HP Labs Bristol.
John's research is cross-disciplinary, spanning Computer Vision, Computer Graphics and Artificial Intelligence, focusing on ways to add value and make sense of large, unstructured media collections - to visually search media collections and present them in aesthetic and comprehensible ways. Recent projects spanning Vision and Graphics include: sketch-based search of images/video; plagiarism detection in the arts; visual search of dance; structuring and presenting large visual media collections using artistic rendering; and developing character animation from 3D multi-view capture data. John holds ~70 refereed publications, including oral presentations at ICCV and BMVC, and journal papers in IJCV, IEEE TVCG and TMM. He was general chair for NPAR 2010-11 (at SIGGRAPH), BMVC 2012, and CVMP 2014-15, and is an Associate Editor for Computers & Graphics and the Eurographics journal Computer Graphics Forum.
The integrative self
7 October 2015, 4pm
We will hold this lecture in the Professor Stuart Hall Building in LG01 instead of in the Ben Pimlott Lecture Theatre.
Speaker: Professor Glyn Humphreys, Department of Experimental Psychology, University of Oxford
Abstract: In this talk I will propose a new account of how self-association mediates information processing, arguing that self-association enhances binding in perception and memory. Furthermore, I will present evidence that the effects of the self on binding are distinct from those of reward. The results indicate that self-reference plays a particular functional role in cognition.
Brief Bio: Glyn Humphreys is Watts Professor and Head of the Department of Experimental Psychology, Oxford University. He has published 16 books and over 650 articles in leading international journals. He has been awarded the Spearman Medal, the Cognitive Psychology Prize, the President’s award and a Lifetime Achievement award from the British Psychological Society. He has given the Freda Newcombe and Donald Broadbent lectures to the British Neuropsychology Society and the European Society for Cognitive Psychology. He has edited leading journals in the field including the Journal of Experimental Psychology: Human Perception and Performance and he has been President of both the British Neuropsychology Society and the Experimental Psychology Society.
The emerging cognitive neuroscience of hypnosis
14 October 2015, 4pm
Speaker: Dr Devin Blair Terhune, Department of Psychology, Goldsmiths, University of London
Abstract: Hypnosis represents a valuable method for the top-down regulation of conscious states and it is becoming increasingly used in cognitive neuroscience. Here I will aim to provide a broad overview of the phenomenon of hypnosis from the purview of cognitive neuroscience. After dispelling a number of widespread myths and misconceptions about hypnosis, I will introduce its core features and describe how it is used in experimental contexts. I will then highlight behavioural and neuroimaging work that has begun to clarify the mechanisms of hypnosis and outline current thinking in the field regarding how the brain is able to produce the top-down regulation observed in hypnosis. Finally, I will provide a number of examples of how hypnosis can be used in an instrumental manner in order to study different facets of cognition and psychopathology. Research in this nascent domain can provide unique but complementary evidence regarding the mechanisms underlying different facets of cognition and consciousness including top-down regulation, agency, and awareness.
Brief Bio: Devin Blair Terhune is a recently-appointed Lecturer in the Department of Psychology at Goldsmiths, University of London. He completed his PhD at Lund University in Sweden on the cognitive neuroscience of hypnosis and has been a postdoctoral research fellow in the Department of Experimental Psychology at the University of Oxford for the past five years. In addition to his work on hypnosis, he is interested in time perception, mind wandering, synaesthesia and different facets of conscious awareness.
You move, I watch, it matters: A neurocognitive approach to understanding aesthetic appreciation of human action
11 November 2015, 4pm
Speaker: Dr Guido Orgs
Abstract: Why do we enjoy dancing together or watching flash-mobs on YouTube? Why is Strictly Come Dancing more popular than The X Factor? As our visual world becomes ever more dynamic, traditional photography is replaced by video and animation. Yet little is known about what makes human movement so appealing to watch. Existing psychological theories of aesthetic appreciation have largely focused on static paintings, sculpture and music. In this talk I will outline a neurocognitive model that combines principles from communication and dynamical systems theories with the cognitive neuroscience of action perception to provide a conceptual framework for an aesthetic science of human movement. With a specific focus on dance, the theory identifies three key components: the performer-transmitter, the movement message and the spectator-receiver. I will review the constraints of nonverbal communication via movement by describing the brain mechanisms involved in action/body perception and behavioural coordination. In a dimensional model, I will attempt to link these mechanisms to aesthetic appreciation of human movement and discuss the role of expertise and cultural differences. Based on multidisciplinary work involving choreographers and dancers on the one hand, and cognitive neuroscientists on the other, research on action aesthetics may provide applications that range from optimising animation in computer games and fashion photography to developing new diagnostic tools or even treatments for autism and obsessive-compulsive disorder.
Brief Bio: Guido Orgs trained in both performing dance (Folkwang University of the Arts, Essen, Germany) and psychology (University of Düsseldorf, Germany). After completing his PhD in Cognitive Neuroscience, he performed with the German dance company NEUER TANZ/VA WÖLFL from 2008 to 2011, appearing at international theatres and dance festivals including the Théâtre de la Ville, Paris, and kunstenfestivaldesarts, Brussels. In 2009 he joined the Institute of Cognitive Neuroscience at UCL to conduct research on how we perceive other people’s movements and how the brain mechanisms of movement perception underlie the aesthetics of dance. Currently, he investigates movement synchronization in dance, collaborating with social psychologist Daniel Richardson (UCL) and choreographer Matthias Sperling. Since September 2015 he has been a Lecturer in Psychology at Goldsmiths, University of London.
Technologies of Corporeality
18 November 2015, 4pm
Speakers: Sita Popat (University of Leeds) and Nicolas Salazar Sutil
We will hold this lecture in the Ben Pimlott Lecture Theatre.
Abstract: In this talk, we will address the creative exploration of movement as the material for digital interfaces and performances. We will give some examples from our past projects and discuss ideas and plans for current and future activities. Featured projects include a dancing robot, an electronic theatre and creativity engine, performing avatars and the question of materiality in virtual reality.
Biographies:
Sita Popat is Professor of Performance and Technology at the University of Leeds. Her research addresses relationships between bodies and digital media, using dance and performance as starting points to examine embodied experiences at human/technology interfaces. Her publications include Invisible Connections: Dance, Choreography and Internet Communities (Routledge, 2006) and Digital Movement: Essays in Motion Technology and Performance, co-edited with Nicolas Salazar Sutil (Palgrave 2015). She is Associate Editor of the International Journal of Performance Arts and Digital Media (Taylor & Francis) and a Trustee of DV8 Physical Theatre. In her spare time she plays a gnome healer in World of Warcraft.
Nicolas Salazar Sutil is a Chilean movement theorist and Laban-trained practitioner. His work focuses on the intersections between human movement and formal language, movement and technology, movement and new materialism, and mobility studies and critical thinking. He has collaborated on the creation of sci-art, digital choreography and e-theatre with artists, computer scientists and mathematicians in the UK, US and Latin America. He is the author of Motion and Representation (MIT Press) and co-editor, with Sita Popat, of Digital Movement: Essays in Motion Technology and Performance.
Asymmetry of the brain and human meaning
2 December 2015, 4pm
Speaker: Dr Iain McGilchrist, Quondam Fellow of All Souls College, Oxford and Consultant Psychiatrist Emeritus of the Bethlem Royal & Maudsley Hospital, London
Abstract: Almost everything you think you know about differences between the brain hemispheres is wrong. The topic was taken over and distorted by pop psychology, and hence understandably, but nonetheless irrationally, neglected by the mainstream. That changed with the publication of Iain McGilchrist's The Master and his Emissary: The Divided Brain and the Making of the Western World by Yale in 2009. So why is the brain, an organ that exists only to make connections, divided and asymmetrical? Why is it that, as every physician knows, the side, not just the site, of a brain lesion can make a huge difference to what happens? What does it tell us about the structure of the world we inhabit? Iain McGilchrist will argue that lateralisation is now the topic in neuroscience of greatest significance for understanding the human condition.
Brief bio: Iain McGilchrist is a former Fellow of All Souls College, Oxford, a Fellow of the Royal College of Psychiatrists, a Fellow of the Royal Society of Arts, and former Consultant Psychiatrist and Clinical Director at the Bethlem Royal & Maudsley Hospital, London. He was a Research Fellow in neuroimaging at Johns Hopkins Hospital, Baltimore, and a Fellow of the Institute of Advanced Studies in Stellenbosch. He delivered the Samuel Gee lecture at the Royal College of Physicians in 2014. He has published original articles and research papers in a wide range of publications on topics in literature, medicine and psychiatry. He is the author of Against Criticism (Faber 1982), The Master and his Emissary: The Divided Brain and the Making of the Western World (Yale 2009), The Divided Brain and the Search for Meaning; Why Are We So Unhappy? (e-book short) and is currently working on a book entitled The Porcupine is a Monkey, or, Things Are Not What They Seem to be published by Penguin Press.
Visual mining - interpreting image and video
9 December 2015, 4pm
Speaker: Professor Stefan Rüger, Knowledge Media Institute, The Open University, UK
Abstract: Like text mining, visual media mining tries to make sense of the world through algorithms - albeit by analysing pixels instead of words.
This talk highlights recent important technical advances in automated media understanding, which has a variety of applications ranging from machines emulating the human aesthetic judgment of photographs to typical visual mining tasks such as analysing food images.
Highlighted techniques include near-duplicate detection, multimedia indexing and the role of machine learning. While the first two enable visual search engines - so that, e.g., a snapshot from a smartphone alone links the real world to databases with information about it - machine learning ultimately is the key to endowing machines with human capabilities of recognition and interpretation.
The talk will end by looking into the crystal ball to explore what machines might learn from automatically analysing tens of thousands of hours of TV footage.
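As a hedged illustration of one classic near-duplicate technique (an average-hash fingerprint, a textbook method rather than necessarily the one used in this work), two images whose hashes differ in only a few bits are near-duplicate candidates:

```python
import numpy as np

def average_hash(img, size=8):
    # Toy perceptual hash: coarse downsample, threshold at the mean.
    h, w = img.shape
    ys, xs = (np.arange(size) * h) // size, (np.arange(size) * w) // size
    small = img[np.ix_(ys, xs)].astype(float)
    return (small > small.mean()).ravel()

def hamming(h1, h2):
    # Number of differing hash bits between two fingerprints.
    return int(np.count_nonzero(h1 != h2))

# e.g. hamming(average_hash(a), average_hash(b)) < 10 flags near-duplicates
```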
Brief bio: Prof Stefan Rüger read Physics at Freie Universität Berlin and gained his PhD in Computer Science at Technische Universität Berlin (1996).
He carved out his academic career from postdoc to Reader at Imperial College London (1997-2006), where he also held an EPSRC Advanced Research Fellowship (1999-2004). In 2006 Stefan became a Professor of Knowledge Media when he joined The Open University's Knowledge Media Institute to cover the area of Multimedia and Information Systems. He currently holds an Honorary Professorship from the University of Waikato, New Zealand, and has held Visiting Fellowships at Imperial College London and Cranfield University, UK, where he supervises MSc student projects.
Stefan is interested in the intellectual challenge of visual processing with a view to automated multimedia understanding.
Adventures in applied vision: Using the visual system as a window to the mind
1 October 2014, 4pm
Speaker: Dr Lee de-Wit, University of Leuven, Leuven
Abstract: Richard Gregory suggested that our ability to so easily find the words to classify certain types of illusions may be no coincidence; it may be that our perception of the world boot-straps, or shapes, our cognition of the world. I will argue that some of the most interesting phenomena in human cognition (relativity, grouping, ownership, abstraction) have, at the least, analogies in visual perception. In fact, it is possible that some of these commonalities are more than just analogies, because many clinical populations (such as those with autism and schizophrenia) experience changes to cognition that go hand in hand with changes in perception.
There may therefore be a very real sense in which, even if the eye is not a window to the soul, good measures of visual perception could provide a window onto the more general workings of the human mind. Vision science, however, has generally not focused on the development of tests or measures of perception that might be of interest to the broader community of experimental or clinical psychology. In this talk I will describe a number of projects in which I and my collaborators in Leuven are trying to develop modern tests for assessing visual perception, in a way that will not just fuel fundamental vision research, but which could provide tools for screening, diagnosing and understanding different neuropsychological and clinical populations (autism, schizophrenia, ADHD) and for testing broader theoretical frameworks like predictive coding.
Brief Bio: Dr Lee de-Wit studied Experimental Psychology at Bristol, where he met Tom Troscianko, Richard Gregory and Nick Scott-Samuel, who collectively inspired his interest in vision science. He then moved to Durham, where he completed an ESRC masters with Charles Fernyhough, exploring individual differences in Theory of Mind ability and the propensity to mentalize. He then completed a PhD with David Milner (FRS) and Robert Kentridge, looking at the influence of the way we organize visual input on the allocation of covert attention. During his PhD he ran a number of studies with patient DF, which prompted an interest in the potential of neuropsychological research.
During the PhD he also worked in the lab of Catherine Tallon-Baudry in Paris, looking at the role of consciousness in perceptual organization and attention, and the lab of Geraint Rees at UCL, where he looked at the neural correlates of perceiving the hollow-face illusion. After the PhD, he moved to a big lab with longer-term funding, and found a perfect home in Johan Wagemans' GestaltRevision program, first as a post-doc on that program and then with a fellowship from the Research Foundation Flanders (FWO). His post-doctoral work has continued to focus on behavioural work looking at perceptual organization, and in particular what happens to parts when integrated into wholes. He also made a further foray into fMRI with Hans Op de Beeck, looking at the neural correlates of perceptual organization. His focus has, however, shifted towards the clinical assessment of vision, both to improve applied research and as a theoretical tool, using visual deficits as a potential window onto broader changes in cognitive and neural function in different patient groups.
Philosophical Ontology and Computational Models
8 October 2014, 4pm
Speaker: David Westland
Abstract: Models of computation (e.g. finite state machines, cellular automata) have been used extensively in the so-called 'digital physics' movement, as well as some areas of applied ontology. But their use has not extended very well to analytic ontology, where philosophers propose and attempt to answer general questions concerning the possible structures of reality. In this discussion I would like to introduce a domain of mainstream philosophy that is currently receiving a great deal of attention: the properties and laws debate. The basic problem of this discussion is how to understand the fundamental nature of predicates (e.g. 'is round') and their close connection to behavior (e.g. 'round entities tend to roll down inclined planes').
A dominant view, which is based upon David Hume's empiricist philosophy, is that laws of nature are mere descriptions of the world, where the world itself is construed as a vast pattern of objects that are characterised by properties and relations. Importantly, advocates of this approach deny that causes 'bring about' their effects in any serious sense, such that there is no real explanation for the occurrence of a specific event. Common sense suggests that striking a match 'necessitates' its ignition, but the neo-Humean tradition proposes that the distribution of events is completely accidental. My central aim in this discussion, however, is to support a rival position (termed dispositionalism), according to which the natures of properties are intimately connected with their behavior. So construed, properties are 'active' entities that are called upon to explain events. That said, I suggest that the dispositionalist project is subject to severe difficulties because it is presently committing itself to a 'list' conception of ontology.
By this I mean that philosophers are approaching ontology as a business of postulating what kinds of entity exist (i.e. dispositional predicates such as 'roundness') and merely linking these entities up with certain truths (i.e. propositions of behavior such as 'round entities, ceteris paribus, roll down inclined planes'). The promising response, I argue, is to rethink the basic blueprint of a properties and laws ontology in terms of a finite state machine, where if-then imperatives are used to construct future times (modelled as outputs) on the basis of laws of nature (modelled as a transition table) and present times (modelled as inputs). The core idea is that this computational approach to ontology offers a favorable setting for understanding reality as a 'self-active' phenomenon, whereby the key dispositionalist notions of explanation and activity are properly realised. I conclude with some remarks on the relationship between cellular automata and the central questions of the properties and laws debate.
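A minimal sketch of that blueprint, with a hypothetical match-striking world (the states, events and transition table below are invented for illustration): present times are inputs, the laws of nature are the transition table, and future times are the outputs of repeated application.

```python
# Transition table as "laws of nature": (state, event) -> next state
LAWS = {
    ("resting", "wait"):   "resting",
    ("resting", "strike"): "ignited",   # striking necessitates ignition
    ("ignited", "wait"):   "burnt_out",
}

def evolve(state, events):
    # Construct "future times" from a present state and a stream of events.
    history = [state]
    for event in events:
        state = LAWS[(state, event)]
        history.append(state)
    return history

print(evolve("resting", ["wait", "strike", "wait"]))
# ['resting', 'resting', 'ignited', 'burnt_out']
```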
Brief Bio: David Westland is currently based at the Department of Philosophy at the University of Durham, where he has worked closely with Dr Sophie Gibb and - before his untimely death in January 2014 - the internationally renowned metaphysician Professor E. J. (Jonathan) Lowe on the topic of ontological structuralism and natural laws. David's research has focused on modal issues in anti-Humeanism, dynamic theories of time, and the connection between computational models and analytic ontology.
Sorting out the mess that technology makes of non-verbal communication
22 October 2014, 4pm
Speaker: David Roberts
Abstract: Spatial and temporal context helps to convey the source of an emotional reaction that is evident from a person's appearance. Yet current technologies favour either the visual or the spatial properties of non-verbal behaviour, and balancing the two is challenging without temporal disturbance. This talk juxtaposes crossing the stage line in video with removing the mask from the avatar. It describes the dawn of a step change in interactive media, as fundamental as taking the camera off the tripod was to film. It also explains why almost a decade of failing to get a roof on a garden gazebo is helping the development of technologies for telepresence and mental health.
Brief Bio: David Roberts is a Professor of Telepresence at the University of Salford. After heading one of the larger UK VR research groups for almost a decade, he has recently jumped ship into psychology, focusing on technologies for understanding cognition and improving mental health. His PhD is in Cybernetics from the University of Reading. He has close to 100 publications in telepresence and distributed simulation. He is currently joint PI on an EU project to virtually teleport Europe's space scientists together to Mars - or at least give them the impression. He led a recent EPSRC project that developed the first technology to communicate eye gaze between physically distant people. David may well have made more of a pig's ear of telepresence, more often, than anyone else. He knows how technology fails social human communication, and has some ideas of what needs to be fixed and how this can be approached.
Sculpting the teenage brain
29 October 2014, 4pm
Speaker: Prof Ilona Kovacs
Abstract: The extremely long phase of cortical organization provides the human brain with an extended sensitivity for environmental impact, including aspects of both adaptability and vulnerability. While it is clear that the protracted plasticity of the human brain has evolutionary advantages, it has not yet been clarified what neural mechanisms bring the cortical networks to a fully developed stability, and permit a child to enter adulthood. We will try to contribute to the clarification of this issue.
Brief Bio: Ilona Kovacs is Professor of Psychology, head of the Laboratory of Psychological Research, and chair of the Department of General Psychology at PPCU, Budapest. She studied for a degree in Psychology at Eötvös University, Budapest, and then spent more than 10 years at Rutgers University in the US. Her main interest is in human vision, including developmental and clinical aspects.
Colour Theory: A Modern Interpretation
19 November 2014, 4pm
Speaker: Stephen Westland
Abstract: Colour is a naturally multi-disciplinary topic that spans the arts and sciences. Colour theory is a term used not to describe theory pertaining to colour but rather to describe a specific set of knowledge about colour that is traditionally taught in art schools. However, some of this knowledge clashes with the modern scientific understanding of colour. In this talk Professor Westland will describe the essential knowledge from colour theory and trace its legacy from some of the earliest thoughts about colour in the ancient world. The talk will cover topics such as colour primaries, colour mixing and colour harmony and will contrast traditional colour theory with the scientific view. It will be suggested that the way in which colour is taught in primary schools is problematic and fuels the gap between how colour is understood in art and science.
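As a hedged numerical aside (idealised values, not taken from the lecture), the additive/subtractive contrast at the heart of this clash is easy to state: lights add, while paint layers act roughly as multiplicative filters on reflectance.

```python
import numpy as np

# Additive mixing of light in RGB
red, green = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(np.clip(red + green, 0, 1))   # [1. 1. 0.] - red + green light = yellow

# Idealised subtractive mixing: reflectances multiply
blue_paint = np.array([0.1, 0.2, 0.9])
yellow_paint = np.array([0.9, 0.9, 0.1])
print(blue_paint * yellow_paint)    # [0.09 0.18 0.09] - roughly green
```

This is why "blue plus yellow makes green" holds for paint but not for light, one of the clashes between traditional colour theory and the scientific account.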
Brief Bio: Stephen Westland is a Professor in Colour Science and Technology at the University of Leeds. He has published widely in areas such as colour management, colour imaging and colour design. He has published over 150 peer-reviewed articles and two editions of Computational Colour Science using MATLAB. He is the founding editor of the Journal of the International Colour Association.
Ontology and Ontologies: Relations, Applications, Limitations
26 November 2014, 4pm
Speaker: Peter Simons
Abstract: The word ‘ontology’ comes from philosophy and denotes a part of metaphysics concerned with the most general categories of being. In informatics it denotes any conceptual scheme dealing with a given domain, usually subject to certain normative controls but relatively independent of detailed implementation. There are dozens if not hundreds of ontologies in the latter sense. So the question arises as to whether there are criteria for preferring or choosing one ontology over another, and what role, if any, the philosophical discipline may play in such considerations. Drawing on his experience both as a philosophical ontologist and as a software engineering consultant, the speaker will attempt to articulate what constitutes a good ontology, what - and what not - to expect from it, and how helpful it may be in solving theoretical and practical problems.
Brief Bio: Peter Simons is Professor of Philosophy at Trinity College Dublin. He is the author or co-author of four books and over 250 articles on many aspects of philosophy, with an emphasis on metaphysics and its applications. A Fellow of the British, Royal Irish and European Academies, he has worked in Ireland, the UK and Austria, and has taught and given numerous talks around Europe, North America and Asia.
Copy me - what & why do we imitate?
3 December 2014, 4pm
Speaker: Antonia Hamilton
Abstract: Copying is a ubiquitous human behaviour which provides a useful model of nonverbal social interaction. Though copying is easy to recognise, the cognitive processing underlying it is very complex. Here I describe studies of when and why people choose to copy some actions but not others, including studies of children, adults and people with autism. Differences in copying behaviour between these groups give us important insight into the mechanisms of selective imitation. Finally, I will present new data on how people imitate and recognise imitation in virtual reality, and will consider how human-avatar interactions can help in the study of social neuroscience.
Disturbing Vision
10 December 2014, 4pm
Speaker: Arnold Wilkins
Abstract: When we look at scenes from nature, our brains can interpret the image in the eye efficiently because of its characteristic structure and colour. When the image is unnatural, the brain uses more oxygen and the image can be uncomfortable to look at. In general, the more uncomfortable the image, the greater the brain's usage of oxygen, suggesting the discomfort is homeostatic. Some unnatural patterns of stripes can provoke a headache, even seizures. Even the stripes of a music manuscript affect vision adversely. Text has stripes from the rows of words and from the strokes of the letters, and it can be uncomfortable to read. Efficient reading requires continuous subtle adjustment of the alignment of the eyes, with a precision that depends upon the design of the font. Children are initially required to read large, widely spaced text, which helps the alignment of the eyes. Unfortunately, the text gets too small too early in life, compromising reading speed and comprehension. Sometimes coloured filters make reading easier, and the reasons seem to relate to the brain’s use of oxygen, according to work with migraine patients. Patients with migraine are particularly susceptible to visual discomfort when they look at patterns of stripes, and their brains then use an excessive amount of oxygen. This abnormal use of oxygen can be reduced to normal levels when the patterns are observed through a coloured filter. This may help to explain why coloured filters have now been shown to be of benefit in a range of neurological disorders that affect the visual brain, including autism, Tourette's syndrome and stroke. The therapeutic colour differs from patient to patient and has to be individually selected as comfortable.
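One strand of this research programme relates visual discomfort to how far an image departs from the statistics of natural scenes, whose amplitude spectra fall off roughly as 1/f. As a hedged sketch (an illustration of that idea, not the lab's actual analysis), the function below estimates the spectral slope of a greyscale image; natural scenes sit near -1, while patterns of stripes deviate sharply.

```python
import numpy as np

def spectral_slope(img):
    # Log-log slope of the radially averaged amplitude spectrum.
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    amp = np.abs(f)
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    radial = np.bincount(r.ravel(), amp.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)   # skip DC, stay below Nyquist
    return np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)[0]
```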
Brief Bio: Arnold obtained a doctorate from Sussex University for work on semantic memory, then spent two years as a postdoc at the Montreal Neurological Institute, where he became interested in photosensitive epilepsy and its prevention. He returned to England and worked at the MRC Applied Psychology Unit for 22 years, studying visual discomfort from lighting, patterns and text. He took up a professorship at the University of Essex in 1997.
Emergent representations from stochastic diffusion dynamics
22 January 2014
Speaker: Matthew Spencer
Abstract: Representationalist theories of mind hold that internal representations of objects in the environment underpin natural cognitive abilities. Cognitivist accounts of representation describe these representations as manipulable internal objects which maintain stable reference to external entities, but have difficulty explaining how these symbols might be grounded such that they have meaning intrinsic to the agent [Searle1980, Harnad1990, Muller2009]. Alternatively, enactive cognitive science regards representations as behaviour-generating patterns or constraints on the agent's continuous interaction with its environment [Ziemke2000, Pattee2001, Raczaszek-Leonardi2012]. These patterns are conceived as being fundamentally grounded in and emergent from the constitutive autonomy enabled by agent-environment interaction [Ziemke2003, DiPaolo2006, Froese2009]. The enactive approach describes a richer, more organic form of environmental coupling and therefore provides a compelling basis for the emergence of natural cognition. However, this additional complexity also abolishes the clear boundary between the organism and its environment, which creates difficulties for the development of enactive models of (artificial) intelligence.
One enactive account of the emergence of representation comes from Interactivism [Bickhard2009], which conceives of natural cognitive beings as highly ordered, and thus far-from-equilibrium, thermodynamic systems which are fundamentally coupled with their environments. While these systems would normally be expected to break down over time (and thus return to equilibrium with their environments), they have evolved "functional interactions", or ways of interacting with their environments which sustain their far-from-equilibrium states. With multiple such functional interactions (each appropriate in different contexts), an organism must learn to anticipate (in a basic sense) which of its interactions will be functional in a given context. The emergence of these anticipations requires some level of adaptation on the organism's part and occurs through the continued coupling of the agent and its environment. Since these anticipations serve to maintain the normative state, they can be described as fundamentally representational. Further, since these anticipations are fundamentally future- and action-oriented, they can be described as goal-directed and intentional.
In this talk, I explore the emergence of such enactive representations using Stochastic Diffusion Search (SDS) as a metaphor. Stochastic Diffusion Search [Bishop1989, Nasuto1998, Nasuto1999] is a stochastic swarm model that, through a combination of exposure to an environment and basic communication amongst the population's members, produces dynamic, emergent clusters which converge around "interesting" features in the environment. In classic SDS, the exact definition of "interesting" is extrinsic to the swarm and varies depending on the application, which implies that, while the clusters appear intuitively representational, they must lack any meaning for the swarm itself. On the other hand, the gas-like dynamics of the swarm may serve as a metaphor to help bridge the gulf between thermodynamics and cognition. Following the ideas of Interactivism, we explore whether a measure of "interest" could be defined metabolically such that the clusters could be said to have meaning intrinsic to the swarm. Further, we explore the necessary modifications to the nature of SDS's environmental coupling such that this intrinsic representation can emerge.
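As a minimal, hedged sketch of classic SDS (here in its canonical string-search formulation, with an "interest" criterion extrinsic to the swarm, as the talk notes): agents hold hypotheses, partially test them, and inactive agents copy hypotheses from randomly polled active agents, so a cluster emerges at the best-matching location.

```python
import random

def sds(pattern, text, n_agents=100, iters=50):
    max_off = len(text) - len(pattern)
    hyps = [random.randint(0, max_off) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iters):
        # Test phase: each agent checks one random component of its hypothesis
        for i, h in enumerate(hyps):
            j = random.randrange(len(pattern))
            active[i] = (text[h + j] == pattern[j])
        # Diffusion phase: inactive agents poll a random agent and copy its
        # hypothesis if that agent is active, otherwise re-sample uniformly
        for i in range(n_agents):
            if not active[i]:
                k = random.randrange(n_agents)
                hyps[i] = hyps[k] if active[k] else random.randint(0, max_off)
    return max(set(hyps), key=hyps.count)   # centre of the emergent cluster

print(sds("dog", "the cat sat near the dog house"))   # 21, with high probability
```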
Brief Bio: Matthew's PhD was on data-driven and modelling techniques for the study of evolving complex networks describing neuronal functional connectivity dynamics, in both cellular cultures and whole-brain EEG. Applications of his research pertain to animats and brain-computer interfaces as well as to the classification of complex system dynamics in general. He is currently working as a post-doctoral research assistant in the Brain Embodiment Lab at the University of Reading, investigating models of cognition as communication and continuing his work on stochastic network models of brain connectivity.
Interactive Machine Learning for End-User Systems Building in Music Composition and Performance
5 February 2014
Speaker: Dr. Rebecca Fiebrink
Abstract: Rebecca Fiebrink builds, studies, teaches about, and performs with new human-computer interfaces for real-time digital music performance. Much of her research concerns the use of supervised learning as a tool for musicians, artists, and composers to build digital musical instruments and other real-time interactive systems. Through the use of training data, these algorithms offer composers and instrument builders a means to specify the relationship between low-level, human-generated control signals (such as the outputs of gesturally-manipulated sensor interfaces, or audio captured by a microphone) and the desired computer response (such as a change in the parameters driving computer-generated audio). The task of creating an interactive system can therefore be formulated not as a task of writing and debugging code, but rather one of designing and revising a set of training examples that implicitly encode a target function, and of choosing and tuning an algorithm to learn that function.
In this talk, Rebecca will provide a brief introduction to interactive computer music and the use of supervised learning in this field; she will show a live musical demo of the software that she has created to enable non-computer-scientists to interactively apply standard supervised learning algorithms to music and other real-time problem domains. This software, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training data by real-time demonstration and the evaluation of trained models through hands-on application to real-time inputs.
Drawing on her work with users applying the Wekinator to real-world problems, Rebecca will discuss how data-driven methods can enable more effective approaches to building interactive systems, through supporting rapid prototyping and an embodied approach to design, and through "training" users to become better machine learning practitioners. She will also discuss some of the remaining challenges at the intersection of machine learning and human-computer interaction that must be addressed for end users to apply machine learning more efficiently and effectively, especially in interactive and creative contexts.
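The Wekinator itself is a Java application built on the Weka toolkit, but the workflow described above can be sketched in a few lines of Python. This is a hedged, hypothetical example: the sensor readings and synthesis parameters are invented, and the regressor merely stands in for Wekinator's configurable learners.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training examples: each row pairs a gestural control signal
# (say, 3 accelerometer axes) with desired synthesis parameters
# (say, oscillator pitch in Hz and filter cutoff in Hz).
X = np.array([[0.1, 0.9, 0.2], [0.8, 0.1, 0.5], [0.4, 0.4, 0.9]])
y = np.array([[220.0, 800.0], [440.0, 2000.0], [330.0, 1200.0]])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=20000,
                     random_state=0).fit(X, y)

# At performance time, each incoming control vector is mapped to
# synthesis parameters in real time.
print(model.predict([[0.5, 0.3, 0.7]]))
```

Revising the mapping then amounts to editing the training examples and refitting, rather than rewriting code - exactly the reformulation of systems building that the abstract describes.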
Brief Bio: Dr. Rebecca Fiebrink is a Lecturer in Graphics and Interaction at Goldsmiths College, University of London. As both a computer scientist and a musician, she is interested in creating and studying new technologies for music composition and performance. Much of her current work focuses on applications of machine learning to music: for example, how can machine learning algorithms help people to create new digital musical instruments by supporting rapid prototyping and a more embodied approach to design? How can these algorithms support composers in creating real-time, interactive performances in which computers listen to or observe human performers, then respond in musically appropriate ways? She is interested both in how techniques from computer science can support new forms of music-making, and in how applications in music and other creative domains demand new computational techniques and bring new perspectives to how technology might be used and by whom.
Fiebrink is the developer of the Wekinator system for real-time interactive machine learning, and she frequently collaborates with composers and artists on digital media projects. She has worked extensively as a co-director, performer, and composer with the Princeton Laptop Orchestra, which performed at Carnegie Hall and has been featured in the New York Times, The Philadelphia Inquirer, and NPR's All Things Considered. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app "I am T-Pain." Recently, Rebecca has enjoyed performing as the principal flutist in the Timmins Symphony Orchestra, as the keyboardist in the University of Washington computer science rock band "The Parody Bits," and as a laptopist in the Princeton-based digital music ensemble Sideband. She holds a PhD in Computer Science from Princeton University and a Master's in Music Technology from McGill University.
Looking at looking
26 February 2014, 4pm
Speaker: Peter Coles, Goldsmiths University of London
Abstract: We move our eyes on average about three times a second as we explore our visual world. As the oculomotor system matures before other sensorimotor systems, eye movement recording has become a potentially valuable tool for studying the development of cognitive processes in infants before language develops, and for seeing how these processes change throughout childhood. The data from eye movement recording have also been used to improve the layout of control panels and displays, to study how people look at paintings, to help train forensic scientists, airline pilots and radiographers, and even to make advertising more effective.
In this talk I will review some of the discoveries arising from over 60 years of eye movement research. I will also argue that the relationship of line of sight to the focus of attention is not always simple, however, and varies with age, experience and other parameters. As a research tool, then, eye movement recording needs to be handled with care. Drawing on a range of experimental studies and theoretical approaches, I will review some of the factors that influence the relationship between eye movements and attention and attempt to put forward a dynamic interpretive framework.
Brief Bio: Peter has been a Visiting Fellow in the Centre for Urban and Community Research at Goldsmiths since 2007 and teaches on the MA in Photography and Urban Cultures. He carried out research on eye movements in infants, children and adults in Jerome Bruner’s lab in Oxford in the late 1970s and in Piaget’s lab in Geneva in the early 1980s. After pursuing applied research on the design of new aids to marine navigation for several years, he left the academic world to become a science writer, working for Nature, Science and New Scientist, before joining UNESCO, in Paris, as a staff editor. With a long-standing interest in art and visual perception, Peter has developed a parallel career as a documentary and fine art photographer alongside his writing and research.
Cognition, sex and very mild discontent in old age
12 March 2014, 4pm
Speaker: Professor Patrick Rabbitt
Abstract: Because they impair cognitive abilities, it is important to study the incidence of depression and anxiety in old age. Large and convincing studies suggest that the incidence of clinically significant depression is no greater in old age, and that the incidence of sub-clinical depression and anxiety grows less as age advances. We consider models to explain this effect. The effects of depression on mental abilities do not decline in old age; on the contrary, even very slight increases in discontent are associated with cognitive losses. Further, depression in old age leads to earlier mortality. An odds-ratio analysis separates the associations of mild discontent and of more severe depression with sex, chronic illnesses and intelligence.
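For readers unfamiliar with the method mentioned at the end, an odds ratio compares the odds of an outcome across two groups; a hedged toy example with invented counts:

```python
# Hypothetical 2x2 table (invented counts, for illustration only):
# rows = mildly discontented vs contented; columns = cognitive loss vs none
a, b = 30, 70   # discontented: with loss / without loss
c, d = 15, 85   # contented:    with loss / without loss

odds_ratio = (a / b) / (c / d)
print(round(odds_ratio, 2))   # 2.43: the odds of loss are ~2.4x higher
```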
Brief Bio: Patrick Rabbitt is a British psychologist working in the area of cognitive gerontology. He worked at the Medical Research Council Applied Psychology Unit, Cambridge (1961-1968) and the Department of Experimental Psychology and The Queen's College, University of Oxford (1968-1982), was Head of the Department of Psychology, University of Durham (1982-1983), and was Research Professor and Director of the Age and Cognitive Performance Research Centre, University of Manchester (1983-2004), becoming Emeritus in 2004 and a Member of the European Academy in 2013.
Part-whole relationships in visual art and the beholder's share
17 March 2014, 5pm
Speaker: Professor Johan Wagemans, University of Leuven
Abstract: Part-whole relationships constitute a key topic of research in vision science since the early days of Gestalt psychology, over a century ago. They are also fundamental to the experience of visual art. A good deal of what makes an art work visually and aesthetically appealing lies in the way the parts can be integrated into a larger whole. The 'beholder's share' describes the flexible organization of the parts into a coherent Gestalt, a process whose result is the viewer's pleasure and fascination. In addition to a theoretical/conceptual analysis of this notion, recent studies will be used to illustrate this further. Specifically, I will focus on my empirical studies into pictorial relief and shape (e.g., in Picasso's sketches of female nudes and a nude sculpture) and on my recent collaborative projects with contemporary artists (Ruth Loos, Wendy Morris, Anne-Mie Van Kerckhoven).
Brief Bio: Johan Wagemans is professor of experimental psychology and currently director of the Laboratory of Experimental Psychology at the University of Leuven. His research interests are mainly in so-called mid-level vision (perceptual grouping, figure-ground organization, depth and shape perception) but stretching out to low-level vision (contrast detection and discrimination) and high-level vision (object recognition and categorization), including applications in autism, arts, and sports. He is supervising a long-term research program aimed at reintegrating Gestalt psychology into contemporary vision science and neuroscience (see www.gestaltrevision.be). He is chief-editor of Perception, i-Perception and Art & Perception.
An Asian Perspective to Design: How we use Technology to enable Musical Expression
2 April 2014
Abstract: With advances in algorithms for sound synthesis and processing, combined with inexpensive computational hardware and sensors, we can now easily build new types of musical instruments and other real-time interactive expressive devices. These new “instruments” can leverage and extend the expertise of virtuoso performers, expand the palette of sounds available to composers, and encourage new ideas and composition techniques. This talk will look at a variety of new devices, projects, and ensembles created over the last decade, with a particular emphasis on extending techniques inspired by Asian music. From India, Korea, Indonesia, and beyond, the creation of new musical interfaces and robots will be presented. The KarmetiK Machine Orchestra was born from these inventions, and video of compositions and experimental productions will be presented.
Brief Bio: Ajay's work revolves around one question: "How do you make a computer improvise with a human?" Using the rules set forth by Indian classical tradition, Ajay has been driven to build new interfaces for musical expression through extending the Indian classical Tabla, Dholak, & Sitar, with added microchips and embedded sensor systems, while designing custom robotic musical instruments. He now leads a team of artists and engineers exploring the intersection of music, composition, storytelling, science and technology in the KarmetiK Machine Orchestra.
Ajay Kapur is currently the Director of the Music Technology: Interaction Intelligence and Design (MTIID) program at the California Institute of the Arts, as well as the Associate Dean for Research and Development in Digital Arts. He is also a Senior Lecturer of Sonic Arts Engineering at the New Zealand School of Music at Victoria University of Wellington. He received an Interdisciplinary Ph.D. in 2007 from the University of Victoria, combining computer science, electrical engineering, mechanical engineering, music and psychology with a focus on intelligent music systems and media technology. Ajay graduated with a Bachelor of Science in Engineering and Computer Science from Princeton University in 2002. He has been educated by music technology leaders including Dr. Perry R. Cook, combined with mentorship from robotic musical instrument sculptors Eric Singer and the world-famous Trimpin. A musician at heart trained on drumset, tabla, sitar and other percussion instruments from around the world, Ajay strives to push the technological barrier in order to explore new sounds, rhythms and melodies.
Kapur has published over 80 technical papers and presented lectures across the world on music technology, human-computer interfaces for artists, robotics for making sound, and modern digital orchestras. His book “Digitizing North Indian Music” discusses how sensors, machine learning and robotics are used to extend and preserve traditional techniques of Indian Classical music.
Integrating cultural experience with realtime perception: temporal dynamics of visual systems
2 October 2013
Speaker: Sarah Bro Pedersen, University of Southern Denmark
Abstract: With Gibson (1986) I reject the hypothesis that the environment is perceived with our eyes. Rather, perception depends on: “the eyes in the head on a body supported by the ground, the brain being only the central organ of a complete visual system” (Gibson 1986: 1). Gibson’s approach to perception expanded the perceptual system in space but not in time. Goodwin (1994; 2002; 2003; 2007), on the other hand, examines the temporal and sociological aspects of how the environment affords various perceptions amongst distinctive groups of individuals. For instance, by linking professionals’ situated cognition to processes of classification that guide relevant action-perception cycles, he shows how they are able to achieve a successful outcome in realtime. As repetitive interaction sculpts categorical patterns and forms over time, practitioners are provided with a professional vision (Goodwin, 1994). That is, an expert view that over time becomes materialised into artefacts with inherent cultural meaning that symbolise a unique domain of competency.
In this talk I present qualitative investigations of how human perception stems from cultural knowledge and realtime flexible, adaptive behaviour in a medical arena. Relating Goodwin’s term ‘professional vision’ to Gibson’s ‘visual system’, it is demonstrated how perception is embedded in an extended space-time. Hence, what a medical practitioner sees, feels and perceives is at once socially pre-organised – through material-cultural artefacts and the implementation of procedures and narratives – and dynamical, anticipative and situated (Pedersen and Steffensen, in press).
Brief Bio: Sarah Bro Pedersen draws on qualitative approaches within the cognitive sciences and the language sciences when she studies how people – through interactivity (cognitive and linguistic processes) – are able to link bodies, expressive features of the environment and meaning. A methodological concern thus falls on how interactivity connects the rapid processes of realtime coaction with situation-transcendent processes of social knowledge, norms and meaning – in a way that shapes results.
In 2012, Sarah received the Elite Research Scholarship granted by The Danish Ministry of Research, Innovation and Higher Education.
Sarah has worked at various labs around the world including: Gothenburg University, with Professor Per Linell; the University of California San Diego, with Professor David Kirsh; Stanford University, with Professor Michael L. Anderson; and the University of California, Berkeley, with Professor Claire Kramsch.
Sarah is currently visiting Goldsmiths, University of London, where she is discussing issues related to embedded perception with Professor Mark Bishop.
Not all Gestalts are equal: The encoding of parts and wholes in the visual cortical hierarchy
9 October 2013
Speaker: Johan Wagemans, University of Leuven
Abstract: Gestalt psychology argued that the whole is different from the sum of the parts. Wholes were considered primary in perceptual experience, even determining what the parts are. How to reconcile this position with what we now know about the visual brain, in terms of a hierarchy of processing layers from low-level features to integrated object representations at the higher level? What exactly are the relationships between parts and wholes then? I will argue that there are different types of “Gestalts” with their own relationships between parts and wholes, both in visual experience and in their neural encoding.
Some Gestalts seem to be encoded in low-level areas based on feedback from higher-order regions. Other Gestalts seem to be encoded in higher-level areas, while the parts are encoded in lower-level areas. In some cases, this happens without suppression of the parts (“preservative Gestalts”); in others, with suppression of the parts (“eliminative Gestalts”). I will describe three studies from our own lab to illustrate these different types of Gestalts. Together, these findings support the general conclusion that not all Gestalts are equal, while the specific conceptual refinements made may help to motivate further research to better understand the mechanisms of how parts and wholes are encoded in the visual cortical hierarchy.
Brief Bio: Johan Wagemans is professor of experimental psychology and currently director of the Laboratory of Experimental Psychology at the University of Leuven. His research interests are mainly in so-called mid-level vision (perceptual grouping, figure-ground organization, depth and shape perception) but stretch out to low-level vision (contrast detection and discrimination) and high-level vision (object recognition and categorization), including applications in autism, arts, and sports. He is supervising a long-term research program aimed at reintegrating Gestalt psychology into contemporary vision science and neuroscience (see www.gestaltrevision.be). He is editor-in-chief of Perception, i-Perception and Art & Perception.
Behavioural Finance, Music and Emotion: a stock market that thinks it’s an opera
16 October 2013
Abstract: Open Outcry is a ‘reality opera’, originated by Alexis Kirke, and co-created with Greg B. Davies, Head of Behavioural and Quantitative Investment Philosophy at Barclays. Behavioural finance proposes psychology-based theories to explain stock market anomalies. Within behavioural finance, it is assumed that the information structure and the characteristics of market participants systematically influence individuals' investment decisions as well as market outcomes. Before the premiere of the “opera” - at Mansion House, City of London - singers were asked to complete an analytical behavioural finance questionnaire to determine their emotional approach to investment. In the performance, the opera singers traded for real money in real time using an artificial stock market trading floor. The emotional arc of the performance was defined by the jubilation, fear and greed of the 12 classical singers as they used trading melodies to interact with each other and do deals.
The singers chose when to sing based on when they wanted to change their portfolios. The musical trading phrases were created using genetic algorithms to emphasize the “emotional state” of the market: when most singers were selling, the market sounded dissonant; when most singers were buying, it sounded consonant; and the tunes for buying and selling the same stock harmonized pleasantly. The singers’ trades also affected the movement of the stocks, but the market could also change state autonomously, shifting between neutral, boom and bust according to a statistical model. Furthermore, the conductor was able to influence the state of the market to help define the musical arc of the otherwise semi-deterministic 30-minute performance. After the premiere, a quantitative analysis was done of the trading of the singers, relating it to their behavioural questionnaires.
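For readers curious about the mechanism, the following is a minimal sketch of how a genetic algorithm can evolve a short melodic phrase towards consonance, in the spirit of the trading melodies described above. It is an illustration only, not the Open Outcry implementation; the interval scores, phrase representation and parameters are all assumptions.

```python
# A minimal sketch, not the Open Outcry system: a genetic algorithm evolves an
# eight-note phrase (MIDI pitches) towards consonant successive intervals.
# The interval scores and all parameters are illustrative assumptions.
import random

CONSONANCE = {0: 1.0, 3: 0.7, 4: 0.8, 5: 0.6, 7: 0.9, 8: 0.6, 9: 0.7}  # semitones -> score

def fitness(phrase):
    """Reward consonant melodic intervals between successive notes."""
    return sum(CONSONANCE.get(abs(b - a) % 12, 0.0) for a, b in zip(phrase, phrase[1:]))

def mutate(phrase, rate=0.2):
    """Randomly nudge some pitches by up to two semitones."""
    return [p + random.choice([-2, -1, 1, 2]) if random.random() < rate else p
            for p in phrase]

def crossover(a, b):
    """Single-point crossover of two parent phrases."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, length=8, generations=200):
    pop = [[random.randint(60, 72) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # e.g. a consonant eight-note phrase such as a 'buy' melody
```

Fitness functions favouring dissonance rather than consonance would, by the same mechanism, yield the "selling" sound-world described above.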
Brief Bio: Alexis Kirke is a Permanent Research Fellow in Computer Music at the Interdisciplinary Centre for Computer Music Research at Plymouth University, UK. His areas of research currently focus on novel computation and composition. Recent invited talks include the Royal Institution of Great Britain and BBC R&D. He is co-editor of Springer’s Guide to Computing for Expressive Music Performance, one of the first books on the topic, and the originator of the 4.5 year project Brain-Computer Interface for Monitoring and Inducing Affective States, funded by EPSRC. Alexis is also a composer well-known for his interdisciplinary practice - he has been called “The Philip K. Dick of Contemporary Music”. Alexis’ music and film work has been covered by the press and media internationally, though his proudest moment was a cartoon attacking his opera in the Sunday Times.
The plasticity of the number brain
23 October 2013
Speaker: Marinella Cappelletti, Goldsmiths University of London
Abstract: How are some people able to maintain number skills even after brain lesions or following congenital impairments to the number brain? Can numeracy be improved in young and ageing people? Evidence from psychophysics, neuropsychology, brain imaging and brain stimulation techniques suggests that the human brain is capable of maintaining residual number skills even when injured or impaired from birth, and that these skills can be further boosted with training in the young and elderly brain. Together, these results highlight the plasticity of the number brain but also its resilience to damage or ageing, because numerical cognition relies on a primitive, dynamic cognitive and neuronal system.
Brief Bio: Marinella is a lecturer in the Psychology Department at Goldsmiths, who has previously worked at UCL, first on a Wellcome Trust Training fellowship and then on a Royal Society Dorothy Hodgkin fellowship. Before these fellowships and after her PhD at King's, she spent one year in Boston, US, learning brain stimulation techniques. She uses numerical cognition to understand brain plasticity in the healthy and pathological brain.
Escaping the here and now: towards a component process model of self-generated cognition
13 November 2013
Speaker: Jonathan Smallwood, University of York
Abstract: Even when deprived of salient sensory input, the human mind is rarely idle. Instead it often engages in thoughts and feelings that have little relationship to the surrounding environment. These self-generated thoughts, such as occur when our mind wanders or when we daydream, occupy almost half of waking thought suggesting they are a core element of human cognition. The current talk will present a framework to understand the different component processes that are engaged when we self-generate thought based on recent cognitive and neuroscientific evidence. Furthermore, it will argue that one function of our capacity for self-generated thought is to allow us the opportunity to make choices other than those dictated by the external environment, a capacity known as freedom from immediacy.
Brief Bio: For the last fifteen years Jonathan has tried to understand how the brain self-generates experiences that do not arise directly from immediate perceptual input; common examples are the states of daydreaming or mind-wandering. He completed both his undergraduate and PhD work on this topic at the University of Strathclyde in Glasgow in the late 1990s. Since then he has worked as a researcher at institutions in several different countries, including Canada, Germany and the USA. He uses the tools of experience sampling, in conjunction with those of cognitive neuroscience, including EEG and fMRI, to probe the neural correlates of the experience. Currently he works as a Reader at the University of York.
What Next for Nature-Inspired Computing?
20 November 2013
Speaker: Dr. Ed Keedwell, Senior Lecturer in Computer Science, University of Exeter (E.C.Keedwell@exeter.ac.uk)
Abstract: Nature has provided Computer Scientists with inspiration for countless algorithms to help solve difficult computational, engineering and scientific problems in reasonable computational time. Evolutionary algorithms, swarm intelligence and neural networks are among the best known of these and have been shown to approximate, or occasionally surpass, human capability in specific domains. Nature-inspired computing has been a success story to date, and although it is possible to derive new methods from a better understanding of natural systems, there are of course a finite number of systems on which to base computational methods. In this talk, I will discuss the achievements of nature-inspired computation to date, including its application to difficult problems, and will explore the scope for further naturally-inspired methods, concluding with some thoughts on the state of the art and future directions for the field.
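As an illustration of the family of methods the talk surveys, here is a minimal sketch of particle swarm optimisation minimising a toy one-dimensional function. The swarm size, coefficients and objective are illustrative assumptions, not anything from the lecture.

```python
# A minimal sketch of one nature-inspired method named above, particle swarm
# optimisation, minimising a toy 1-D function; swarm size, coefficients and
# objective are illustrative assumptions.
import random

def f(x):
    """Objective to minimise: a simple bowl with its minimum at x = 3."""
    return (x - 3.0) ** 2

n, w, c1, c2 = 20, 0.7, 1.5, 1.5                  # swarm size, inertia, pull strengths
xs = [random.uniform(-10, 10) for _ in range(n)]  # particle positions
vs = [0.0] * n                                    # particle velocities
pbest = xs[:]                                     # each particle's best position so far
gbest = min(xs, key=f)                            # best position found by any particle

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # velocity blends momentum with pulls towards personal and global bests
        vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
        xs[i] += vs[i]
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest + [gbest], key=f)

print(f"best x found: {gbest:.4f}")  # converges close to 3.0
```

The same skeleton, with the objective and update rules swapped out, underlies many of the swarm and evolutionary methods the abstract mentions.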
Brief Bio: Dr Ed Keedwell is a Senior Lecturer in Computer Science at the University of Exeter, having also previously studied there as an undergraduate and postgraduate. His research interests centre around the use of nature-inspired techniques to tackle difficult problems in science and engineering, in particular water distribution network optimisation and bioinformatics. He has over 70 publications in computer science, engineering and bioinformatics and currently holds an EPSRC grant investigating the potential for the creation of low-level heuristics and the development of ‘hyper-heuristics’ to solve real-world problems. Dr Keedwell is also Publications Officer for the Society for the Study of Artificial Intelligence and the Simulation of Behaviour.
Challenging the use of adult neuropsychological models for explaining neurodevelopmental disorders: Developed versus developing brains
27 November 2013
Speaker: Annette Karmiloff-Smith, Birkbeck Centre for Brain & Cognitive Development, University of London.
Abstract: In this talk I will contrast approaches from adult neuropsychology which seek selective, domain-specific deficits, with approaches aimed at understanding the dynamics of developmental trajectories in children with genetic disorders. The talk will stress the crucial difference between developed brains damaged in their mature state, and atypically developing brains. I will also challenge the search for specific genes to explain selective cognitive-level outcomes. Throughout, the talk will argue that, if we are to understand both the impairments and proficiencies displayed in children with neurodevelopmental disorders, it is critical to trace cognitive-level deficits back to their basic-level processes in infancy, where genes are likely to exert their early influences.
Brief Bio: Annette Karmiloff-Smith used to be a simultaneous interpreter at the United Nations, but got bored with repeating other people’s thoughts! So, she studied with the famous psychologist-epistemologist, Piaget, at Geneva University. Before finishing her doctorate, she spent two years working in the Palestinian refugee camps in Beirut. She is the author of 10 books and of over 300 chapters/articles in scientific journals, as well as a series of booklets for parents on foetal, infant and child development. Now a Professorial Research Fellow at Birkbeck’s Centre for Brain & Cognitive Development, her most recent Wellcome-Trust-funded research is on Down syndrome as a model for Alzheimer’s Disease, examining early risk and protective factors in infants with DS. Today she will deal with the more general issue of why development is crucial to understanding developmental disorders.
Bayesian Just-So Stories in Psychology and Neuroscience
13 March 2013, 4pm
Speaker: Prof. Colin Davis
Royal Holloway, University of London
Abstract: Bayesian theories are all the rage in psychology and neuroscience. These theories claim that minds and brains are (near) optimal in solving a wide range of tasks. A common premise of these theories is that theorising should largely be constrained by a rational analysis of what the mind ought to do in order to perform optimally. I will argue that this approach ignores many of the important constraints that come from biological, evolutionary, and processing (algorithmic) considerations, and that it has contributed to the development of many Bayesian “just-so” stories in psychology and neuroscience; that is, mathematical analyses of cognition that can be used to explain almost any behavior as optimal. I will argue that the empirical evidence for Bayesian theories in psychology is weak at best, and that the empirical evidence for Bayesian theories in neuroscience is weaker still.
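To make the "just-so" worry concrete, here is a toy beta-binomial sketch (an editorial illustration, not an example from the lecture): the same data can license opposite "optimal" actions depending on the freely chosen prior.

```python
# A toy beta-binomial illustration (not an example from the lecture): the same
# data make opposite actions "Bayes-optimal" depending on the chosen prior.

def posterior_mean(heads, tails, a, b):
    """Posterior mean of a coin's bias under a Beta(a, b) prior."""
    return (heads + a) / (heads + tails + a + b)

heads, tails = 3, 7  # the observed data, held fixed across both analyses
for a, b, label in [(1, 1, "flat prior"), (50, 1, "prior favouring heads")]:
    m = posterior_mean(heads, tails, a, b)
    action = "bet heads" if m > 0.5 else "bet tails"
    print(f"{label}: posterior mean {m:.2f} -> optimal action: {action}")

# Each action is optimal relative to its own prior, so "optimality" alone
# constrains behaviour only as much as the modeller's choice of prior does.
```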
Brief Bio: Colin Davis obtained a PhD in cognitive psychology from the University of New South Wales in Sydney. After a few years working as a post-doctoral research fellow at the Macquarie Centre for Cognitive Science he moved to Bristol as a post-doctoral researcher. He was appointed as a Senior Lecturer at Royal Holloway, University of London in 2006, becoming a Professor of Cognitive Science in 2010. In summer 2013 he will take up a Chair in Cognitive Psychology at the University of Bristol. Most of his research is focussed on language processing, and particularly the recognition of printed words. This lecture is based on an article he published with Jeffrey Bowers in Psychological Bulletin in 2012.
The Conscious Phenotype
6 March 2013, 5pm
Speaker: Professor Geraint Rees
UCL Institute of Cognitive Neuroscience, London, UK
Abstract: Consciousness is central to the human condition, furnishing us with phenomenal awareness of the external world and the ability to reflect upon our own thoughts and experiences. Almost half our communication concerns the contents of our thoughts and experiences. The shared language we use to do this obscures the recent realization that there is substantial variability in how different people experience the same physical environment. Moreover, key aspects of this variability in conscious experience are heritable, suggesting a conscious phenotype with adaptive significance. In this talk I will explore the nature of individual differences in conscious perception and their neural basis, focusing on both structure and function of the human brain.
Brief Bio: Geraint Rees is Director of the UCL Institute of Cognitive Neuroscience in London, UK. His research interests focus on understanding the neural basis of human consciousness in health and disease, using functional and structural brain imaging. His work has been internationally recognised by the award of the Young Investigator Medal of the Organisation for Human Brain Mapping and the Experimental Psychology Prize, and he has given the Francis Crick lecture at the Royal Society and the Goulstonian lecture at the Royal College of Physicians. Recently he has pioneered new approaches to analysing functional brain images to individuate the contents of consciousness, and has written and spoken on the potential moral and ethical implications of such techniques.
Preperception in the Human Brain
27 February 2013, 4pm
Speaker: Professor Kia Nobre
University of Oxford, Oxford, UK
Abstract: Most of us still hold the traditional view that perception starts outside: energy from the context and events around us streams in through our senses and our brain mirrors these in our internal, mental representations. This view, of course, is wrong in several ways. Experimental evidence shows that our perception is highly selective. At best, we extract and hold onto a handful of items from the boundless possibilities offered by the external environment. Furthermore, what we do perceive is highly shaped by our current task goals, motivation, and the memories of what we experienced in the past. These endogenous factors inject predictions or suggestions into sensory channels, often ahead of the events yet to unfold, to guide perception and subsequent learning. So, you see, perception also starts within, and proceeds through a dance between our internal states and external sources of energy. In my talk, I will illustrate how we go about investigating these mechanisms in the human brain, and review some of our findings to date.
Brief Bio: Anna Christina Nobre (known as Kia Nobre) is Professor of Cognitive Neuroscience at the University of Oxford and Tutorial Fellow in Experimental Psychology at New College, Oxford. She directs the Oxford Centre for Human Brain Activity (OHBA) and heads the Brain & Cognition Lab. Her research focuses on understanding the principles of the neural systems that support cognitive functions in the human brain. Currently, she looks at how neural activity linked to perception and cognition is modulated according to memories, task goals, and expectations. Her work integrates behavioural methods with a combination of non-invasive techniques to image and stimulate the human brain, such as electro- and magneto-encephalography (EEG and MEG), structural and functional magnetic resonance imaging (MRI), and transcranial magnetic stimulation (TMS). Funding for her core research activities comes from the Wellcome Trust, NIHR, and JSMF.
Computational Creativity
20 February 2013
Speaker: Dr. Alison Pease
Imperial College
Abstract: Computational Creativity is the study and simulation, by computational means, of behaviour, natural and artificial, which would, if observed in humans, be deemed creative. In this talk I look to examples of human creativity to suggest aspects of creativity which have not yet received much attention from the CC community. In particular, I consider serendipitous discovery, and whether it is possible to build a system which makes serendipitous discoveries. I also consider the role of framing information in creativity; that is, the context around a piece of creative work, including the artist themselves, their motivations for creating the piece and how they think it fits into a current artistic landscape. Finally, I consider which methods are appropriate, or inappropriate, for measuring progress in CC.
Brief Bio: Alison Pease is a Research Associate on the Computational Creativity Theory project at Imperial College London. Her main interest is in creativity in mathematics, and she has investigated the use of analogies, conceptual blends and embodied reasoning in mathematics. She holds a PhD in Artificial Intelligence, in which she built a model of social interaction between mathematicians, based on a theory by the philosopher Imre Lakatos. She has a background in philosophy and mathematics and was a mathematics teacher for several years.
Neural Correlates of Dynamic Musical Imagery
6 February 2013
Speaker: Professor Andrea Halpern
Bucknell University, USA
Abstract: Auditory imagery is more than just mental “replaying” of tunes in one’s head. I will review several studies that capture characteristics of complex and active imagery tasks, using both behavioral and neuroscience approaches. I use behavioral methods to capture people’s ability to make emotion judgments about both heard and imagined music in real time. My neuroimaging studies look at the neural correlates of encoding an imagined melody, anticipating an upcoming tune, and also imagining tunes backwards. Several studies show correlates of neural activity with self-report of imagery vividness. These studies speak to the ways in which musical imagery allows us not only to remember music, but also to use those memories to judge ever-changing aspects of the musical experience.
Brief Bio: Since receiving her PhD in Psychology from Stanford University, Prof. Halpern has been a faculty member in the Psychology Department at Bucknell University, an undergraduate liberal arts university in Pennsylvania. She studies memory for nonverbal information, cognitive neuroscience of music perception, and cognitive aging, particularly with respect to music. She has received grants from several US federal and private agencies, including the Grammy Foundation, and currently serves as President of the Society for Music Perception and Cognition. In 2012-13, Prof. Halpern is Leverhulme Visiting Professor at Queen Mary and Goldsmiths, University of London.
Distributed Decisions: New Insights from Radio-tagged Ants
30 January 2013
Speaker: Dr. Elva Robinson, University of York
Abstract: Ant colonies are model systems for the study of self-organisation, and viewing ants as identical agents following simple rules has led to many insights into the emergence of complex behaviours. However, real biological ants are far from identical in behaviour. New advances in radio-frequency identification (RFID) technology now allow the exploration of ant behaviour at the individual level, providing unprecedented insights into distributed decision-making. I have addressed two areas of decision-making with this new technology: (1) collective decision-making during colony emigration; (2) task decisions in a changing environment. The first of these uses RFID microtransponder tags to identify the ants involved in collecting information about the environment, and to determine how their actions lead to the final colony-level decision.
My results demonstrate that ants could use a very simple threshold rule to make their individual decisions, and still maintain a sophisticated choice mechanism at the colony level. The second area of distributed decision-making which has benefitted from the use of RFID is ant colony task-allocation, and in particular, how tasks are robustly distributed between members of a colony in the face of changing environmental conditions. The use of RFID tags on worker ants allows simultaneous monitoring of a range of factors which could affect decision-making, including age, experience, spatial location, social interactions and fat reserves. My results demonstrate that individual ants base some task decisions on their own physiological state, but also utilise social cues. For non-specialist tasks, self-organisation also contributes, as movement patterns can cause emergent task allocation. The combination of these simple mechanisms provides the colony as a whole with a responsive work-force, appropriately allocated across tasks, but flexible in response to changing environmental conditions.
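The threshold rule mentioned above is simple enough to simulate. The sketch below is an illustrative toy, not Dr Robinson's model: each simulated ant noisily assesses two nest sites and accepts any site whose perceived quality exceeds a fixed threshold, and the colony emigrates to the site with more acceptances. Site qualities, the threshold and the noise level are assumptions.

```python
# An illustrative toy, not Dr Robinson's model: individually noisy threshold
# judgements still yield a reliable colony-level choice between two sites.
import random

def colony_choice(q_a=0.6, q_b=0.5, threshold=0.55, n_ants=100, noise=0.15):
    """Each ant accepts a site if its noisy quality estimate exceeds a threshold."""
    votes = [0, 0]
    for _ in range(n_ants):
        for site, quality in enumerate((q_a, q_b)):
            if quality + random.gauss(0, noise) > threshold:  # the threshold rule
                votes[site] += 1
    return ("A" if votes[0] > votes[1] else "B"), votes

# Noisy individual judgements aggregate into a sophisticated colony decision:
wins = sum(colony_choice()[0] == "A" for _ in range(1000))
print(f"the better site A is chosen in {wins / 10:.1f}% of simulated emigrations")
```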
Brief Bio: Elva Robinson studies the organisation of social insect societies, combining empirical and modelling work to identify the simple rules followed by individual members of a colony, and to determine how they interact to produce adaptive group-level behaviours. Key research areas include: the organisation of a flexible foraging strategy; division of labour and flexible task allocation; collective decisions and pattern formation via self-organised processes. Elva began working in this area with a PhD at the University of Sheffield jointly between the Department of Animal and Plant Sciences and the Department of Computer Science. After post-doctoral work at the University of Bristol, she moved to the University of York on a Royal Society Dorothy Hodgkin Fellowship. She is currently based between the York Centre for Complex Systems Analysis and the Department of Biology, where she holds a proleptic lectureship appointment.
Enactivism, mental health and the emergence of narrative – taking the ‘psycho’ out of psychotherapy
16 January 2013
Speaker: Dr. Mark McKergow, University of Hertfordshire
sfwork - The Centre for Solutions Focus at Work
Abstract: Much of the existing research into an enactive view of mental health has focused on movement and physical engagement. I will be taking a look at psychotherapy – ‘talking cures’ – through an enactive lens to examine the way language is used in therapy. Contrasting with the cognitive perspective (‘thoughts cause actions’), we can take cues from Dan Hutto and Ludwig Wittgenstein to take a fresh look at what can sensibly be said about, and by, those suffering distress. This tour will incorporate the hybrid psychology of Rom Harré, complex systems and the Solution-Focused Brief Therapy work of Steve de Shazer to present a firmly grounded and novel way to view therapeutic conversations, raising new questions and challenging existing notions of good therapy practice.
Miller, G. & McKergow, M. (2012). From Wittgenstein, Complexity, and Narrative Emergence: Discourse and Solution-Focused Brief Therapy. In A. Lock & T. Strong (eds), Discursive Perspectives in Therapeutic Practice. Oxford: Oxford University Press, pp. 163-183.
Brief Bio: Dr Mark McKergow is visiting research fellow in philosophy of psychology at the University of Hertfordshire and co-director of sfwork – The Centre for Solutions Focus at Work. He is an international consultant, speaker and author. A self-proclaimed ‘recovering physicist’ with a PhD in self-organising and complex systems, Mark has worked on every continent except Antarctica and is an international conference keynote presenter.
Mark is a global pioneer in applying Solutions Focus (SF) ideas to organisational and personal change. He has written and edited three books and dozens of articles; his book 'The Solutions Focus: Making Coaching and Change SIMPLE' (co-authored with Paul Z Jackson) was declared one of the year's top 30 business books in the USA, and is now in eleven languages. Mark’s current work places the practice of SF – in therapy and elsewhere – within its Wittgensteinian and narrative frame.
Acting on their own behalf: Norm-generating and norm-following mechanisms in simulated agents
10 October 2012
Speaker: Matthew Egbert, Informatics and biology departments at the University of Sussex
Abstract: One of the fundamental aspects that distinguishes acts from mere events is that actions are subject to a normative (good/bad) dimension that is absent from other types of interaction: natural agents behave according to intrinsic norms that determine their adaptive or maladaptive nature.
In this talk I will discuss our recent paper in which we present a minimal model of a metabolism that is coupled to a gradient-climbing chemotaxis mechanism. We use this model to show how the processes that determine the viability limits of an organism (the conditions in which it will live rather than die) can also influence behaviour and how, therefore, agents can act "on their own behalf", i.e., to satisfy their own intrinsic needs.
Dynamical analysis of our minimal model reveals an emergent viable region and a precarious region where the system tends to die if environmental conditions remain the same. We introduce the concept of normative field as the change of environmental conditions required to bring the system back to its viable region. Norm-following, or normative action, is defined as the course of behaviour whose effect is positively correlated with the normative field. We thereby make progress on two problems of contemporary modelling approaches to viability and normative behaviour: 1) how to model the topology of the viability space beyond the pre-definition of normatively-rigid boundaries, thereby allowing the possibility of reversible failure; and 2) how to relate, in models of natural agency, both the processes that establish norms and those that result in norm-following behaviour.
The work presented is an extension of a paper that was selected as one of the best submissions to the European Conference on Artificial Life in Paris 2012, and should be of interest to a wide audience, from philosophers of biology to protocell researchers.
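For a flavour of what such a model looks like, here is a deliberately minimal sketch. It is far simpler than the paper's equations and all of its dynamics are assumptions: a metabolite grows on an environmental resource and decays, viability requires sufficient resource, and the normative field is the direction of resource change that restores viability.

```python
# A deliberately minimal sketch with assumed dynamics (far simpler than the
# paper's model): a metabolite grows autocatalytically on a resource and
# decays. The normative field points along the resource change that restores
# viability; behaviour positively correlated with it counts as norm-following.
DECAY = 1.0  # metabolic decay rate; viability needs resource >= DECAY

def dm_dt(m, resource):
    """Autocatalytic growth on the resource, minus decay."""
    return m * (resource - DECAY)

def normative_field(resource):
    """+1 if more resource is needed for viability, -1 if less, 0 at the boundary."""
    return (resource < DECAY) - (resource > DECAY)

m, resource = 1.0, 0.5  # start in the precarious region (resource < DECAY)
for _ in range(50):
    resource += 0.05 * normative_field(resource)  # norm-following behaviour
    m += 0.1 * dm_dt(m, resource)                 # metabolism responds

print(f"metabolite {m:.2f}, resource {resource:.2f}")  # drawn back to viability
```

Even in this toy, the boundary of viability is not a pre-defined rigid limit but emerges from the interaction of the metabolic dynamics with the environment, which is the point the abstract makes.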
Brief Bio: Matthew Egbert is a Research and Tutorial Fellow with the Evolutionary and Adaptive Systems Group at the Centre for Computational Neuroscience and Robotics, an interdisciplinary lab that is associated with both the informatics and biology departments at the University of Sussex. In this role, he teaches graduate and undergraduate level courses on artificial life and adaptive behaviour while carrying out research into the relationship between metabolism, adaptive behaviour and evolution. This research typically involves the development and analysis of minimalistic, dynamical computational models of biological phenomena.
The STST Model of temporal attention and working memory encoding, and its relationship to theories of conscious perception.
17 October 2012
Speaker: Professor Howard Bowman
Centre for Cognitive Neuroscience and Cognitive Systems, University of Kent at Canterbury, UK
Abstract: The Simultaneous Type/Serial Token (STST) model (Bowman & Wyble, 2007) was developed as a theory of how attention is deployed through time and how working memory representations are formed. It provides a neural explanation of perceptual phenomena, particularly those observed using Rapid Serial Visual Presentation (RSVP), for example, attentional blink, repetition blindness, temporal conjunction errors and perceptual episodes (see e.g. Wyble et al, 2011). Its activation dynamics have also been tied to the P3 event-related potential component (Craston et al, 2009), which has been argued to be an electrophysiological correlate of conscious perception.
I will discuss our recent work on relating the STST model to theories of conscious perception. This will consider an STST explanation of why conscious perception becomes more all-or-none during the attentional blink (Bowman et al, 2009), as well as the relationship between working memory encoding and conscious perception.
Finally, we will highlight applications of these RSVP-P3 effects in lie detection and brain-computer interaction.
Bowman, H. & Wyble, B. (2007). The simultaneous type, serial token model of temporal attention and working memory. Psychological Review, 114(1).
Wyble, B., Potter, M., Bowman, H. & Nieuwenstein, M. (2011). Attentional episodes in visual perception. Journal of Experimental Psychology: General, 140(3).
Craston, P., Wyble, B., Chennu, S. & Bowman, H. (2009). The attentional blink reveals serial working memory encoding: Evidence from virtual and human event-related potentials. Journal of Cognitive Neuroscience, 21(3).
Bowman, H., Craston, P., Chennu, S. & Wyble, B. (2009). The delayed consolidation hypothesis of all-or-none conscious perception during the attentional blink, applying the ST² framework. In Proceedings of the 31st Annual Conference of the Cognitive Science Society. Cognitive Science Society.
Brief Bio: Howard Bowman is Professor of Cognition & Logic and co-director of the Centre for Cognitive Neuroscience & Cognitive Systems at the University of Kent at Canterbury. He undertakes research in theoretical and applied cognitive neuroscience. He has developed theories of temporal attention, visuo-motor control, reinforcement learning and emotional interference. He is co-inventor (with Brad Wyble) of the Simultaneous Type/Serial Token model of temporal attention and working memory encoding. He has held funding from EPSRC, British Telecom, the London Mathematical Society, the DTI, the EU and Research Councils UK. He has over 100 publications.
Computers Can Do Almost Nothing – Except Cognition (Perhaps)
24 October 2012
Speaker: Professor Vincent C. Müller, Anatolia College, Thessaloniki, Greece & University of Oxford, UK
Abstract: The basic idea of classical cognitive science and classical AI is that if the brain is a computer then we could just reproduce brain function on different hardware. The assumption that this function (cognition) is computing has been much criticized; I propose to assume it is true and to see what would follow. Let us take it as definitional that computing is ‘multiply realizable’: Strictly the same computing procedure can be realized on different hardware. (This is true if computing is understood as digital algorithmic procedures, in the sense of Church and Turing.)
But in multiple realizations only the syntactic computational properties are retained from one realization to the other, while the physical and semantic properties may or may not be. So, even if the brain is indeed a computer, realizing it in different hardware might not have the desired effects because the hardware-dependent effects are not computational: just computing can’t even switch on a red light; a computer model of an apple tree will not produce apples. But perhaps cognition is different. Is cognition one of the properties that are retained across different realizations?
Brief Bio: Vincent C. Müller is Professor of Philosophy at Anatolia College, Thessaloniki and James Martin Research Fellow, Faculty of Philosophy, University of Oxford. His research focuses on the nature and future of computational systems, particularly on the prospects of artificial intelligence. He is the coordinator of the European Network for Cognitive Systems, Robotics and Interaction (2009-2014), funded by the European Union through two FP7 projects with 3.9M€ (www.eucognition.org). In 2011, he organized the first of a series of conferences on the 'Theory and Philosophy of AI'.
Müller has published a number of articles on the philosophy of computing, the philosophy of AI and cognitive science, the philosophy of language, and related areas. He is currently preparing several edited volumes on the theory of cognitive systems and artificial intelligence, as well as a monograph on the basic problems of AI. He studied philosophy with cognitive science, linguistics and history at the universities of Marburg, Hamburg, London and Oxford.
How does the science of magic contribute to our understanding of awareness?
31 October 2012
Speaker: Dr Gustav Kuhn, Department of Psychology, Goldsmiths, University of London
Abstract: Over the centuries, magicians have developed extensive knowledge about how to manipulate our conscious experience; knowledge that has been largely ignored by science. However, in recent years, steps have been taken towards utilizing this knowledge to further our understanding of human cognition and consciousness. Here I will explore areas in which the science of magic has contributed to our understanding of human consciousness and cognition. I will discuss the types of contributions that this science has made to date, and explore some future directions. For example, what can we learn from the perceptual and cognitive effects that magicians have developed? What can their techniques (e.g. misdirection) tell us about attention and awareness? How can we use magic as a tool to investigate psychological processes? How can magical effects be used to investigate belief systems? What can the experiential states generated by observing magic effects tell us about human experience? What can we learn from the magician’s expertise in motor control? I will conclude that there are numerous areas in which magic is not merely a sufficient but a necessary means of investigation.
Brief Bio: Gustav Kuhn is a Senior Lecturer in the Department of Psychology at Goldsmiths. Prior to his academic career, Gustav worked as a professional magician. In much of his research he utilizes the methods and techniques used by magicians to distort our perception in order to investigate human cognition.
Making Sense of Ourselves and Others: Narratives not Theories
14 November 2012
Speaker: Daniel D. Hutto
Abstract: Making sense of each other's reasons is a cornerstone of human social life. It involves attributing beliefs, desires and hopes - in complex ways. Our capacity to do this is unique: we do not share it with animals or very young children. It is so deeply ingrained in our daily existence that we tend only to notice it, and its critical importance, when it is damaged or absent altogether - as it is for severely autistic individuals. What is the basis of this competence? How do we come by it?
In this lecture Professor Hutto introduces the idea that this remarkable ability is essentially a skill in producing and consuming a special sort of narrative, acquired by engaging in storytelling practices. As Waterhouse’s A Tale from the Decameron (1916) reminds us beautifully, narrative practices have been at the heart of human society throughout our history. Dan defends the stronger claim that they might be absolutely central for stimulating important aspects of our social understanding, and notes that, if true, this excludes the prospect that this crucial ability is built in to members of our species. Knowing the answer matters, fundamentally, when it comes to deciding which therapies are the most promising and appropriate for treating certain mental health disorders and which sorts of educational opportunities should be provided for younger children. Equally, it matters when thinking about whether and how we, as adults, might improve abilities to understand ourselves and others.
Bio: Daniel D Hutto was born and schooled in New York but finished his undergraduate degree as a study abroad student in St Andrews, Scotland where his maternal roots lie. He returned to New York to teach fourth grade in the Bronx for a year in order to fund his MPhil in Logic and Metaphysics, after which he carried on his doctoral work in York. He is currently the Professor of Philosophical Psychology and Research Leader in Philosophy at the University of Hertfordshire. He has authored and edited many books, including Narrative and Understanding Persons (2007), Folk Psychological Narratives (2008) and Narrative and Folk Psychology (2009) and is co-author of Radicalizing Enactivism: Basic Minds without Content (forthcoming with MIT Press in 2013).
He is currently a chief co-investigator for the Australian Research Council ‘Embodied Virtues and Expertise’ project (2010-2013), the Marie Curie Action ‘Towards an Embodied Science of Intersubjectivity’ initial training network (2011-2015) and the 'Agency, Normativity and Identity' project (2012-2015) funded by the Spanish Ministry of Innovation and Research. He regularly speaks at conferences and expert meetings for clinical psychiatrists, educationalists, narratologists, neuroscientists and psychologists.
Cognitive capacity limits in anxiety and depression: Can they be increased?
21 November 2012
Speaker: Prof Nazanin Derakhshan
Department of Psychological Sciences, Birkbeck, University of London, and Research Associate, St John's College, University of Oxford
Abstract: While accumulating evidence has documented that deficits in attention control processes play a key role in emotional vulnerability to disorders such as anxiety and depression, the underlying cognitive and neural mechanisms of such deficits are less well understood. Crucially, it is unclear how deficits in attentional control are reflected in behavioural task performance. For example, according to the much-cited Attentional Control Theory (Eysenck, Derakshan et al., 2007; Derakshan & Eysenck, 2009), anxious individuals are expected to engage in compensatory cognitive effort to overcome processing inefficiency, but research has yet to determine under what circumstances compensatory effort should emerge and how it should affect task performance.
I will discuss recent developments that have enhanced our understanding of the neural mechanisms behind attention control deficits in emotional disorders and their effect on task performance. I will end the talk by discussing findings from a recent intervention that showed how engaging working memory functions through an adaptive cognitive training regime can result in enhanced working memory capacity and attention control in individuals suffering from depression, with training effects transferable to other cognitive tasks. The contributing role of these findings towards developing interventions that target the engagement of top down mechanisms to improve attentional control and subsequent cognitive task performance is discussed.
Bio: Nazanin Derakshan is a Professor of Psychology at Birkbeck, University of London, and a former Royal Society Dorothy Hodgkin Fellow in Experimental Psychopathology. She has conducted extensive research into the effects of anxiety and depression on cognitive performance. Her recent work focuses on understanding the cognitive and neural mechanisms of attentional control processes in emotional disorders and how these could explain cognitive inefficiency in such disorders. She has published a number of key papers in this area including highly cited theoretical articles. Her current work tries to understand how cognitive biases in attention, as well as deficits in attention regulation and inhibition, in anxiety and depression, can be improved by engaging and targeting top-down mechanisms. She is currently a Research Associate at St John's College Research Centre at the University of Oxford.
Ready to experience: Binocular function is turned on earlier in preterm infants
28 November 2012
Speaker: Ilona Kovács
Department of Cognitive Science, Budapest University of Technology and Economics, Hungary
Abstract: While there is a great deal of knowledge regarding the phylo- and ontogenetic plasticity of the neocortex, the precise nature of environmental impact on the newborn human brain is still one of the most controversial issues of neuroscience. The leading model-system of experience-dependent brain development is binocular vision, also called stereopsis. Stereopsis provides accurate depth perception by aligning the two eyes’ views in some of the rodents, and in most carnivores, primates and humans. The binocular system is unique among other cognitive capacities because it is alike across a large number of species, therefore, a remarkable collection of molecular, cellular, network, and functional data is available to advance the understanding of human development. This system is also unique in terms of the well-defined timeline of developmental events which persistently brings it into the limelight of studies on cortical plasticity.
To address the origin of early plasticity of the binocular system in humans, we studied preterm human neonates as compared to full-term infants. We asked whether early additional postnatal experience, during which preterm infants have approximately two months of extra environmental stimulation and self-generated movement, leads to a change in the developmental timing of binocular function. It is remarkable that the extra stimulation time leads to a clear advantage in the cortical detection of binocular correlation. In spite of the immaturity of the visual pathways, the visual cortex is ready to accept environmental stimulation right after birth. The results suggest that the developmental processes preceding the onset of binocular function are not pre-programmed, and that the mechanisms turning on stereopsis are extremely experience-dependent in humans. This finding opens up a number of further queries with respect to human-specific cortical plasticity, and calls for comparative developmental studies across mammalian species.
Bio: Ilona Kovacs is Professor of Psychology, head of the PhD School in Psychology and vice-dean at the Faculty of Natural Sciences, Budapest University of Technology and Economics. She studied for a degree in psychology at Eotvos University, Budapest, and then spent more than 10 years at Rutgers University in the US. Her main interest is human vision, including the developmental and clinical aspects of it.
Motion, Sound and Interaction
5 December 2012
Speaker: Frédéric Bevilacqua
Abstract: I will present an overview of the research conducted in the Real Time Musical Interactions team at IRCAM (Paris). We have developed various methods and systems that allow for interaction between gesture, motion and digital media. This research has been influenced by sustained collaborations with musicians/composers and dancers/choreographers. For example, the study of musicians' gestures allowed us to formalize key concepts about continuous gesture control, movement segmentation and co-articulation. This guided us in designing various real-time gesture analysis systems using machine learning techniques, such as the “gesture follower” that enables gesture synchronization with sound synthesis.
Concrete applications concerning augmented musical instruments, new music interfaces and music games will be described. Finally, recent research on sensori-motor learning will be presented, which opens novel perspectives for the design of musical interfaces and medical applications.
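To give a flavour of the gesture-following idea, here is a minimal sketch using a simple online dynamic-programming alignment. It is an editorial illustration only (IRCAM's gesture follower is built on probabilistic models): as live samples arrive, it estimates the performer's position within a recorded template so that sound can be synchronised to it. The template and the "live" input are toy signals.

```python
# A minimal sketch of gesture following via online alignment (an illustration,
# not IRCAM's implementation): estimate, sample by sample, how far along a
# recorded template the live performer is.
import math

template = [math.sin(0.2 * i) for i in range(50)]  # recorded reference gesture

INF = float("inf")
cost = [0.0] + [INF] * (len(template) - 1)  # alignment starts at the template's beginning

def follow(sample):
    """Consume one live sample; return the best-guess template position."""
    global cost
    new = []
    for j in range(len(template)):
        # remain at j, or advance by 1 or 2 steps (tolerates tempo variation)
        prev = min(cost[j - k] for k in (0, 1, 2) if j - k >= 0)
        new.append(prev + abs(sample - template[j]))
    cost = new
    return min(range(len(template)), key=lambda j: cost[j])

# Live performance: the same gesture executed about 1.5x faster
live = [math.sin(0.3 * i) for i in range(34)]
for s in live:
    position = follow(s)
print(f"estimated template position after the last sample: {position}/49")
```

In a real system the estimated position would drive a sound-synthesis engine, so the audio follows the performer's tempo rather than the other way round.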
Bio: Frédéric Bevilacqua is the head of the Real Time Musical Interactions team at IRCAM - Institute for Music/Acoustic Research and Coordination in Paris. His research interests concern gestural interactive systems and interfaces for music expression. He has studied music at the Berklee College of Music in Boston and has participated in several music and media arts projects. From 1999 to 2003 he conducted research at the Beckman Laser Institute, University of California Irvine. He joined IRCAM in October 2003 as a researcher on gesture analysis for music and performing arts.
Computational Creativity in a post-Turing Test World
18 January 2012
Abstract: In Computational Creativity research, we study how to engineer software that can take on some of the creative responsibility in arts and science projects. In recent years, we have undertaken a practical approach to addressing questions arising in Computational Creativity. In particular, we have built software that can perform mathematical discovery; software that automates cognitive and physical aspects of the painting process; software which helps in game design; and most recently, a corpus-based poetry generator. We have applied our research to projects in automated mathematics, video game design, graphic design and the visual arts. This broad spectrum of applications has enabled us to take a holistic view, and develop various philosophical notions, resulting in a set of guiding principles for the development of autonomously creative software, and a fledgling formalisation called Computational Creativity Theory, which will be the subject of a major EPSRC-funded project that has just started.
In the talk, I will describe our practical applications, the guiding principles and the formalisations. I will then focus on one of the thorniest issues in Computational Creativity, namely how to assess the creativity of the software we write. We have argued that Turing-style comparison tests are wholly inappropriate in Computational Creativity research, as they encourage pastiche and naivety. We will discuss this issue with reference to The Painting Fool system - which we hope will one day be taken seriously as a creative artist in its own right. Like any other artist, The Painting Fool should be horrified if people confuse its creations with those of someone else - whether human or machine. So… should we really apply Turing-style tests to this aspiring creative talent?
Biography: Dr. Simon Colton is an AI researcher and Reader in Computational Creativity at the Department of Computing of Imperial College, London. He leads the Computational Creativity research group of around 10 people and is an EPSRC Leadership Fellow, employed as a full time researcher on the "Computational Creativity Theory" project. In the past four years, he has been the principal investigator on six EPSRC/TSB funded projects. He has published more than 130 papers on various aspects of AI research, and his work has been recognised by national and international awards, as well as being covered in the print, TV and radio media.
Computational Creativity: Swarms and Paul Jump out of the Box
25 January 2012
Mohammad briefly introduces a novel hybrid swarm intelligence algorithm, followed by a discussion of the 'computational creativity' of the swarm. The discussion is based on the performance of the swarm in a cooperative attempt to make a drawing. We raise the question of whether swarm intelligence algorithms (inspired by social systems in nature) are capable of leading to a different way of producing 'artworks', and whether the swarms demonstrate computational creativity in a non-representational way.
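As a rough illustration of a drawing swarm (not the hybrid algorithm the talk presents), the sketch below has agents explore a virtual canvas, share the location of the darkest region found so far, and deposit marks in dark regions, so that a drawing emerges from collective behaviour. The ring-shaped "target image" and all parameters are assumptions.

```python
# A rough illustration, not the talk's hybrid algorithm: agents explore a
# canvas, share their best find, and deposit pen marks in dark regions, so a
# drawing emerges collectively. Target image and parameters are assumptions.
import random

def darkness(x, y):
    """Toy target image: a dark ring of radius 30 centred on the canvas."""
    r = ((x - 50) ** 2 + (y - 50) ** 2) ** 0.5
    return max(0.0, 1.0 - abs(r - 30) / 5.0)

agents = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(30)]
marks = []
for _ in range(200):
    bx, by = max(agents, key=lambda a: darkness(a[0], a[1]))  # shared information
    for a in agents:
        if random.random() < 0.5:                   # explore at random...
            a[0] += random.uniform(-3, 3)
            a[1] += random.uniform(-3, 3)
        else:                                       # ...or move towards a successful peer
            a[0] += 0.2 * (bx - a[0])
            a[1] += 0.2 * (by - a[1])
        if darkness(a[0], a[1]) > 0.5:
            marks.append((a[0], a[1]))              # deposit a pen mark

print(f"{len(marks)} marks deposited around the dark ring")
```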
Biography: Mohammad Majid al-Rifaie obtained a BSc in Computing and Information Systems from the University of London (Goldsmiths College, External Programme) in 2005. His background is in computing, craftsmanship and journalism, and his artistic interests focus on the inter-connections between artificial intelligence, swarm intelligence, robotics and digital art. Postgraduate study took him to a PhD touching upon Artificial Intelligence, Swarm Intelligence, Cognitive Science and Robotics at Goldsmiths, University of London. Mohammad's thesis focuses on the significance of information sharing in population-based algorithms (e.g. Swarm Intelligence). Mr. al-Rifaie's current research interests, in addition to the role of information sharing, lie in understanding the impact of freedom and autonomy in computational creativity.
You Are What You Hear: How Music and Territory Make Us Who We Are
1 February 2012
Abstract: Have you ever wondered why we evolved to have music? And if we need it, what does it do to us? Dr Harry Witchel reveals the answers with the most up-to-date science, relating it to humorous anecdotes from the history of pop culture, to unveil why music makes us feel so good — or why the wrong music makes us feel so bad. Dr. Witchel, who researches music, pleasure and the brain, provides a wealth of evidence pointing to one answer: like birds, gibbons, and other musical animals, we use music to establish and reinforce social territory. In this way music can influence what you think, what you decide to buy, and even how smart you are.
Biography: Dr. Harry Witchel is a Senior Lecturer at the Brighton and Sussex Medical School in the UK. In 2004 he received the national honour of being awarded The Charles Darwin Award Lecture by the British Science Association. He has consulted and run experiments for Honda, Tesco, Nike and Nokia, and his writings have appeared in the Financial Times and the Times Higher Education Supplement. His regular appearances internationally as a commentator on radio and television include shows ranging from Big Brother to the BBC and the Discovery Channel. His new book on music, You Are What You Hear, was published in January 2011.
Swarm-bots and Swarmanoid: Two Experiments in Embodied Swarm Intelligence
14 March 2012
Abstract: Swarm intelligence is the discipline that deals with natural and artificial systems composed of many individuals that coordinate using decentralized control and self-organization. In particular, it focuses on the collective behaviors that result from the local interactions of the individuals with each other and with their environment. The characterizing property of a swarm intelligence system is its ability to act in a coordinated way without the presence of a coordinator or of an external controller. Swarm robotics could be defined as the application of swarm intelligence principles to the control of groups of robots. In this talk I will discuss results of Swarm-bots and Swarmanoid, two experiments in swarm robotics.
A swarm-bot is an artifact composed of a swarm of assembled s-bots. The s-bots are mobile robots capable of connecting to, and disconnecting from, other s-bots. In the swarm-bot form, the s-bots are attached to each other and, when needed, become a single robotic system that can move and change its shape. S-bots have relatively simple sensors and motors and limited computational capabilities. A swarm-bot can solve problems that cannot be solved by s-bots alone. In the talk, I will briefly describe the s-bots' hardware and the methodology we followed to develop algorithms for their control. Then I will focus on the capabilities of the swarm-bot robotic system by showing video recordings of some of the many experiments we performed to study coordinated movement, path formation, self-assembly, collective transport, shape formation, and other collective behaviors. I will conclude by presenting recent results of the Swarmanoid experiment, an extension of swarm-bot to heterogeneous swarms acting in 3-dimensional environments.
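The flavour of coordination without a coordinator is easy to demonstrate. The sketch below uses a Vicsek-style heading-alignment rule as an editorial illustration (not the actual s-bot controller): each simulated robot repeatedly adopts the average heading of a few randomly chosen peers, and a common direction of motion emerges with no leader or external controller.

```python
# A minimal sketch of leaderless coordination (a Vicsek-style alignment rule,
# not the s-bot controller): repeated local averaging of headings produces a
# shared direction of motion with no central control.
import math
import random

headings = [random.uniform(-math.pi, math.pi) for _ in range(20)]
for _ in range(100):
    updated = []
    for h in headings:
        neighbours = random.sample(headings, 5) + [h]  # strictly local information
        sx = sum(math.cos(t) for t in neighbours)
        sy = sum(math.sin(t) for t in neighbours)
        updated.append(math.atan2(sy, sx) + random.gauss(0.0, 0.05))  # a little noise
    headings = updated

# Order parameter R: 1.0 means all robots share one heading, 0.0 means disorder
R = math.hypot(sum(math.cos(h) for h in headings),
               sum(math.sin(h) for h in headings)) / len(headings)
print(f"alignment order parameter R = {R:.2f}")  # typically close to 1.0
```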
Biography: Marco Dorigo received his Ph.D. degree in electronic engineering in 1992 from the Politecnico di Milano, Milan, Italy, and the title of Agrégé de l’Enseignement Supérieur from the Université Libre de Bruxelles, Brussels, Belgium, in 1995. From 1992 to 1993, he was a Research Fellow at the International Computer Science Institute, Berkeley, California. In 1993, he was a NATO-CNR Fellow, and from 1994 to 1996, a Marie Curie Fellow. Since 1996, he has been a tenured Researcher of the FNRS, the Belgian National Funds for Scientific Research, and a Research Director of IRIDIA, the artificial intelligence laboratory of the Université Libre de Bruxelles.
He is the inventor of the ant colony optimization metaheuristic. His current research interests include swarm intelligence, swarm robotics, and metaheuristics for discrete optimization. He is the Editor-in-Chief of Swarm Intelligence, and an Associate Editor or member of the Editorial Boards of many journals on computational intelligence and adaptive systems. Dr. Dorigo is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and of the European Coordinating Committee for Artificial Intelligence (ECCAI). He was awarded the Italian Prize for Artificial Intelligence in 1996, the Marie Curie Excellence Award in 2003, the Dr. A. De Leeuw-Damry-Bourlart award in applied sciences in 2005, and the Cajastur International Prize for Soft Computing in 2007. He is the recipient of an ERC Advanced Grant (2010).
The Cost of Intelligence as a Cybernetic Problem
19th October 2011, 4pm
John Cummins, DEVA Research
Abstract: As early humans increased in intelligence they may have run into an evolutionary bottleneck. Increased intelligence would probably carry a cost, namely an increase in psychological stress. Chronic stress has an adverse effect on major biological systems and organs. This increased ‘allostatic load’ would impact on longevity, health and reproduction. The talk will suggest how the problem may have been at least partly resolved by evolution. A speculative model (still very much a work in progress) will be described, proposing the existence of a high-level, late-stage comparator in the brain that became subject to an unusual adaptation. The proposed adaptation involved a flexible cognitive bias affecting the accuracy of assessments of control. Somewhat paradoxically, this might have facilitated an increase in the flexibility of high-level problem-solving and other intelligence. The adaptation will be considered in the context of Fodor’s concept of modularity. Some questions and implications for artificial intelligence will also be considered.
Biography: John Cummins obtained an LLB in 1977 from Manchester University and qualified as a Solicitor in 1983. After working as a commercial lawyer in the City of London for some years, he left law in 1995 to start DEVA Research, a commercial undertaking in the field of early-stage life sciences research. DEVA has won two DTI ‘SMART’ awards for innovation in anti-infectives. The present talk arises from a personal interest over the last ten years in the evolution of the stress system in our higher primate ancestors and ourselves.
Rhythm as entrainment: A dynamical, post-cognitivist case study
2nd November, 4pm
Dr. Fred Cummins, University College Dublin
Abstract: Classical Cognitive Science holds dear a model of the autonomous individual that serves some purposes, but fails in many cases. Here I discuss two ways of viewing rhythm and synchronisation: the classical model, which leans heavily on the notion of prediction by a cognitive system, and a dynamical account, which emphasises embodiment, movement, and coupling among individuals. Only the latter can account for the very tight coupling observed when pairs of speakers read a text in synchrony. The dynamical account also suggests that the uniquely human ability to move in time with music may provide a window onto a basis for skill-sharing, and may thus constitute an important innovation in moving from ape to human.
Biography: Dr. Fred Cummins obtained a B.A. in Computer Science, Linguistics and German from Trinity College Dublin in 1991. Postgraduate study took him to the Cognitive Science program at Indiana University, where he obtained an M.A. in Linguistics in 1996 and a PhD with a joint major in Cognitive Science and Linguistics in 1997 at Bloomington. His thesis was an experimental study of English speech rhythm. He subsequently completed a one-year postdoctoral post at the Department of Linguistics at Northwestern University, Evanston, IL, and another at the Dalle Molle Institute for Artificial Intelligence in Lugano, Switzerland. In 1999, Dr. Cummins became a lecturer at University College Dublin. From 2000 to 2004, he ran a research group in the now defunct Media Lab Europe. His principal professional interests lie in helping to develop a cognitive science that does justice to the reality of subjective experience. In this context he has particular interests in rhythm, speech, language, enaction, ecological psychology, and the metaphysical basis of experience. Dr. Cummins is an unreformed anti-representationalist and thinks in terms of 'dynamic systems theory'.
Computational Creativity: Swarms and Paul Jump out of the Box
30 November 2011, 4-5pm
*Cancelled; to be rescheduled in the new year.
Mohammad Majid al-Rifaie & Patrick Tresset
In this talk Patrick will introduce the AIkon-II project and Paul the robot, a robotic installation that sketches people's faces.
In his talk, Mohammad briefly introduces a novel hybrid swarm intelligence algorithm, followed by a discussion of the 'computational creativity' of the swarm. The discussion is based on the performance of the swarm in a cooperative attempt to make a drawing. We raise the question of whether swarm intelligence algorithms (inspired by social systems in nature) are capable of leading to a different way of producing 'artworks', and whether the swarms demonstrate computational creativity in a non-representational way.
Biography: Mohammad Majid al-Rifaie obtained a BSc in Computing and Information Systems from the University of London, Goldsmiths College, External Programme in 2005. His background is in computing, craftsmanship and journalism, and his artistic interests focus on the inter-connections between artificial intelligence, swarm intelligence, robotics and digital art. Postgraduate study took him to a PhD touching upon Artificial Intelligence, Swarm Intelligence, Cognitive Science and Robotics at Goldsmiths, University of London. Mohammad's thesis focuses on the significance of information sharing in population-based algorithms (e.g. Swarm Intelligence). Mr. al-Rifaie's current research interests, in addition to the role of information sharing, lie in understanding the impact of freedom and autonomy in computational creativity.
Biography: Patrick Tresset, a French artist/scientist currently based in London, uses what he calls “clumsy robotics” to create autonomous cybernetic entities that are playful projections of the artist. He co-directs the Aikon-II project with Frederic Fol Leymarie at Goldsmiths College, University of London.
The Aikon-II project investigates the observational sketching activity through computational modeling and robotics. The project also provides a rich ground for an artist to examine issues in creativity, and to explore robotic systems as a potential augmentation of one's creative capacity. The work also seeks to engage with the public in softening the artificial divide between the "two cultures" of contemporary art and science.
Auditory Selective Attention in Real-Room Reverberation
7 December, 4pm
Dr. Simon Makin
Abstract: Typical listening situations consist of both multiple sound sources and numerous reflecting surfaces, so each sound is accompanied by a multitude of delayed, attenuated copies. This “reverberation” reduces speech intelligibility by masking later-arriving portions of the direct sound and degrades the information conveyed by a sound’s temporal envelope. Further, the frequency response of any acoustic environment is typically not flat, creating spectral distortion. But reverberation is not always detrimental. It creates sensations of spaciousness, important in areas like architectural acoustics and music technology. Also, in the case of complex sounds such as speech, early reflections perceptually “fuse” with the direct sound, which can lead to an increase in the effective signal-to-noise ratio (SNR). Moreover, work in the Reading Auditory Lab has provided evidence that the distorting effects of reverberation are ameliorated by perceptual mechanisms which effect constancy.
Because the ratio of reflected to direct sound energy increases with distance between source and listener, the distortion of a sound’s temporal envelope increases with distance. Spectral distortion also varies, not only between different environments, but also between different pairs of source-listener positions within the same space. Any detectable difference between competing speech messages will help a listener to track one of them, so it seems reasonable to ask whether these position-specific distortions could serve to aid auditory selective attention when faced with the problem of multiple talkers in a reverberant space. However, the classic inter-aural cues arising from position differences which enable listeners to localise sounds are known to be highly vulnerable to reverberation. So does reverberation help or hinder auditory selective attention? Talker characteristics are the other main source of differences between speech messages, and cues arising from these differences are more robust in reverberation. The series of experiments I will describe in this talk therefore place talker differences in conflict with position differences, while listening in reverberation measured from real rooms, in order to study the effects of realistic reverberation on auditory selective attention.
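To make the premise concrete, here is a minimal Python sketch, assuming reverberation can be crudely approximated as a sum of delayed, attenuated copies of the direct sound; the sample rate and reflection values are invented for illustration.

```python
# A minimal sketch: reverberation as delayed, attenuated copies of the
# direct sound (a crude image-source-style approximation).
import numpy as np

fs = 16000                                  # sample rate in Hz (assumed)
direct = np.random.randn(fs // 2)           # stand-in for a dry signal

# Hypothetical reflections: (delay in seconds, attenuation factor)
reflections = [(0.005, 0.6), (0.011, 0.45), (0.023, 0.3), (0.047, 0.15)]

wet = np.copy(direct)
for delay, gain in reflections:
    d = int(delay * fs)
    wet[d:] += gain * direct[:len(direct) - d]

# The direct-to-reverberant energy ratio falls as reflected energy grows,
# which is why temporal-envelope distortion increases with source distance.
drr = 10 * np.log10(np.sum(direct**2) / np.sum((wet - direct)**2))
print(f"Direct-to-reverberant ratio: {drr:.1f} dB")
```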
Biography: Simon got his start in auditory psychology in Anthony Watkins’ lab at Reading, working on a project investigating perceptual constancy for spectral envelope distortion. He received his master's in speech and hearing sciences from UCL, then completed a PhD in auditory modelling at Sheffield University, supervised by Guy Brown, one of the pioneers of Computational Auditory Scene Analysis (CASA). His PhD work involved using both perceptual experiments and computer modelling to study the role of pitch in concurrent vowel segregation. After he returned to Watkins’ lab at Reading, a collaborative project between Reading and Sheffield was born. This aims to use the results of perceptual experiments investigating constancy in reverberation when temporal envelopes vary to inform the development of a computer model for use as a front-end in reverberation-robust artificial listening devices. Most recently, he has also been studying the effects of reverberation on auditory selective attention.
Towards an affective neuro-physio-phenomenology
Wednesday 26 January, 4pm
Dr. Giovanna Colombetti
Department of Sociology & Philosophy at the University of Exeter
Abstract: The neuroscientific study of emotion experience has been neglected compared to other aspects of consciousness. Affective neuroscience is still largely dominated by a behaviouristic paradigm that focuses on the link between emotional stimuli and neural, bodily and/or behavioural responses. To rectify this situation, I propose to augment affective neuroscience with the neurophenomenological method originally delineated by Varela (1996) to combine first-, second- and third-person methods in the study of consciousness. I argue that this integration will enrich affective neuroscience as well as neurophenomenology itself, given that the latter has not yet been applied to emotion experience, and has focused exclusively on the brain in spite of its association with the “enactive” view of the mind. Integrating neurophenomenology with affective neuroscience will extend the neurophenomenological method to the rest of the organism (hence the proposed label “affective neuro-physio-phenomenology”), enabling us better to understand how lived experience relates to physical processes.
Biography: Dr. Colombetti is Senior Lecturer in Philosophy in the Department of Sociology & Philosophy at the University of Exeter. Her main research interests include embodied and enactive approaches in philosophy of mind and cognitive science, and philosophical and scientific theories of emotion. She is currently working on a 5-year project funded by the ERC (European Research Council) to bridge these two fields of inquiry. Among other projects, she has co-edited, with Evan Thompson, a special issue of the Journal of Consciousness Studies on "Emotion Experience". Dr. Colombetti is currently working on a manuscript provisionally titled "The Feeling Body: Emoting the Embodied Mind".
Do we need consciousness for control?
Wednesday 2 February, 4pm
Magda Osman, Lecturer, Biological and Experimental Psychology Centre, School of Biological and Chemical Sciences, Queen Mary University of London
Abstract: Dynamic control environments (e.g., car driving, medical decision making, playing the stock market, operating nuclear power plants) involve goal-directed decision making. That is, the decision maker chooses actions that generate outcomes that build on previous decisions in order to work towards achieving a final desirable state of the environment. The problem inherent in dynamic control environments is that the decision maker must learn to isolate the occasions in which their actions change the observed events (direct effects – slowing the spread of disease through drug intervention) from those occasions in which the changes in the environment are autonomous (indirect effects – the variable rate of spread of the disease). When faced with such complex decision-making environments, psychological studies often report a dissociation between people's ability to improve their control of the task environment and their failure to provide accurate verbal descriptions of their task knowledge. I will present evidence from a series of laboratory studies showing that people do have conscious access to their decision-making behaviors and, crucially, that meta-cognitive processes actually guide control behaviors. I argue that, despite its popularity in the cognitive sciences, there is little evidence to support the claim that control is based on implicit learning.
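As a toy illustration of the direct-versus-autonomous distinction (not Osman's actual paradigm), the following Python sketch mixes an operator's direct effect on a state with an autonomous drift the operator does not control; all parameters are invented.

```python
# Toy sketch of a dynamic control task: the state changes autonomously each
# step, and the operator must learn how much of the change is a direct
# effect of their own intervention.
import random

def run_trial(steps=20, goal=50.0, effect=2.0, drift=1.5, noise=1.0):
    state, history = 30.0, []
    for _ in range(steps):
        action = (goal - state) / effect             # naive proportional control
        autonomous = drift + random.gauss(0, noise)  # indirect, uncontrolled change
        state += effect * action + autonomous        # direct + indirect effects
        history.append(state)
    return history

# The operator persistently overshoots the goal by roughly the drift term,
# unless the autonomous component is learned and compensated for.
print([round(s, 1) for s in run_trial()])
```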
Biography: Dr Osman currently holds the position of Lecturer in Experimental Cognitive Psychology in the Biological and Experimental Psychology Centre, School of Biological and Chemical Sciences, at Queen Mary University of London. She completed her PhD at Brunel University (2001), and was a Senior Research Fellow (2001-2007) at University College London – and currently maintains an honorary position there. Her main research interests are perceptual-motor learning, decision making, reasoning and problem solving. To date, her research is concerned with the underlying mechanisms that support learning and decision making in complex dynamic environments in which people must track the effects of their actions in order to control the changing events around them. In two recent reviews (Osman, 2010a, Psych Bull; Osman, 2010b, Controlling Uncertainty, Wiley-Blackwell) the critical issues related to complex dynamic decision making (e.g. agency, causality, prediction and control) are examined in a variety of disciplines (HCI, Machine Learning, Psychology, Engineering, and Neuroscience).
The mind in-between: Can social interaction constitute social cognition?
Wednesday 16 February, 4pm
Prof. Ezequiel di Paolo
Research Professor at Ikerbasque, the Basque Foundation for Science
Abstract: Recent empirical work in social cognition, both in psychology and neuroscience, has gradually started to focus on situations involving various degrees of social interaction. Such situations are notably difficult to manage in controlled settings. This is one reason behind the prevailing attention to individual cognitive mechanisms for social understanding. However, the "experimental quarantine" (Daniel Richardson's phrase) is being lifted and the focus of empirical studies is increasingly concerned with individuals in interactive situations (e.g., joint action). In this talk, I argue that this move must be followed by a lifting of the "conceptual quarantine", which is still in effect and puts the weight of social cognitive performance solely on individual mechanisms (e.g., mirror neurons). This perspective is traceable to the methodological individualism prevalent in cognitive science in general. It is yet another reason that accounts for the widespread conception of social cognition as a detached observation of social situations and only exceptionally as a form of participation. The properties of the interaction dynamics are relegated to the role of informational input to individual mechanisms.
In order to conceive of the possibility of social interaction being itself part of the mechanisms of social cognition, it is necessary first to provide a definition of the term able to capture the intuitive notion of engagement. Such definitions are surprisingly rare in the literature. I argue that the enactive definition of social interaction achieves this objective. Following this, the possible roles that interaction could play in particular cases are evaluated according to a scale of increasing involvement by introducing distinctions between contextual factors, enabling conditions and constitutive processes. I discuss existence proofs for all of these options (thus answering the title question positively). The argument carries minimal and maximal implications. At the very least, if the interaction process is admitted to play in some cases a role beyond the contextual, this implies that individual mechanisms (e.g., contingency detection modules, mirror neurons) must be re-conceptualised as mechanisms-in-interaction, and their functional role re-assessed. I discuss evidence that this shift is slowly taking place for the case of mirror neurons. Maximally, if interaction is admitted to sometimes constitute social cognition this opens the door for a broadening of the spectrum of explanations and calls for a program aimed at assessing the contributions of individual and social mechanisms not only for social cognition, but for cognition in general. This talk is based on a recent publication: "De Jaegher, H, Di Paolo, E. A. and Gallagher, S. (2010) Can social interaction constitute social cognition?, Trends in Cognitive Sciences, 14(10), 441 - 447. http://dx.doi.org/10.1016/j.tics.2010.06.009"
Biography: Professor di Paolo is currently Research Professor at Ikerbasque, the Basque Foundation for Science, Bilbao, Spain. Ezequiel studied Physics and Mathematics at the Universidad de Buenos Aires and did an MSc in Nuclear Engineering at the Instituto Balseiro (National Atomic Energy Agency and University of Cuyo). He was awarded a DPhil (PhD) at COGS, University of Sussex, within the Evolutionary and Adaptive Systems group, under the supervision of Prof Phil Husbands. Ezequiel has been a research fellow at the German National Research Center for Information Technology, GMD, in Sankt Augustin, within the Autonomous Intelligent Systems (AiS) institute. Ezequiel remains closely connected with COGS research at Sussex, having worked there as a lecturer from 2000, then as Senior Lecturer and Reader. He is also affiliated with the Centre for Computational Neuroscience and Robotics and the Centre for Research on Cognitive Science.
Now you see it, now you don’t? A computational method for automatic stimulus generation for change blindness and visual pop-out tasks
Wednesday 2 March, 4pm
Prof. Peter McOwan
School of Electronic Engineering and Computer Science at Queen Mary, University of London
Abstract: Change blindness, where observers have difficulty in perceiving changes between sequentially presented images, and spatial pop-out, where regions of target textures need to be identified, are useful tools to help explore human visual awareness. In this talk I will present results on work that blends a computational model for image saliency with evolutionary optimization techniques to allow the automatic custom generation of experimental stimuli. The results show that this computational approach is able to predict observer performance in both spatial pop-out and change blindness tasks.
Biography: Peter is currently Professor of Computer Science and Director of Outreach in the School of Electronic Engineering and Computer Science at Queen Mary, University of London. His research interests are in visual perception, mathematical models for visual processing, cognitive science and biologically inspired hardware and software. He has authored more than 90 papers in these areas. He recently served on the Program Committee for ACII2009, CVPR 2009 and IEEE Artificial Life and is a member of the editorial board of the Journal on Multimodal User Interfaces. Current research projects include LIREC, an EU FP7 IP, developing long term synthetic companions, an EPSRC programme grant CHI+MED investigating design to reduce human errors in medical software and an EPSRC PPE CS4fn, an outreach project to enthuse schools about computer science research. He was also elected a National Teaching Fellow by the Higher Education Academy in 2008.
Robot control using living neuronal cells: progress and challenges
Wednesday 9 March, 4pm
Dr. Victor Becerra
Department of Cybernetics, University of Reading
Abstract: Typically, mobile robots are controlled by means of an embedded computer. Recent EPSRC-funded research has been carried out at the University of Reading in which dissociated biological neurons have been cultured, electrically interfaced, and employed to send commands to a mobile robot, and to receive stimulation derived from the sensors mounted on the robot. In this way, it can be argued that the interfaced neural culture is taking at least part of the role of the controller of the robot. The principal aim of this research project was, by using experiments such as the one described above, to investigate the computational and learning capacity of dissociated neuronal cultures. This seminar provides an overview of the problem area, introduces the breadth of ongoing research, and states a number of open questions that may be answered by future research.
Biography: Victor Becerra is a Reader in Cybernetics. He has been an academic at Reading since January 2000. He was previously a Research Fellow in the Control Engineering Research Centre at City University, London, between 1994 and 1999. He obtained his PhD in Control Engineering from City University, London, in 1994, for his work in the development of optimal control algorithms. He obtained his BEng in Electrical Engineering from Simon Bolivar University, Venezuela, in 1990.
How people look at faces differently
Tuesday 15 March, 4pm
NAB LG01
Katsumi Watanabe, Associate Professor of Cognitive Science, Research Centre for Advanced Science and Technology, University of Tokyo, Japan
Abstract: Facial processing is considered to be one of the fundamental visual processes necessary for successful social interaction. It has therefore been assumed that face processing is largely universal among humans. However, recent studies have accumulated that challenge the idea of strictly universal facial processing. In this talk, I will present two on-going studies on real- and artificial-face processing from our laboratory. One study concerns eye movements during face observation in Japanese deaf people. We found differential fixation patterns between deaf and hearing people. The other study examines how people perceive and evaluate ambiguous faces of statues depicting Buddha (the Thousand Armed Kannon at the Hall of the Lotus King, a.k.a. Sanjûsangendô, Kyoto, Japan). This study found several differences between Japanese and American observers in emotion and affective evaluations of the faces of Buddha statues. These results suggest that face processing and related mechanisms are not homogeneous but can be influenced by experience.
Biography: Katsumi Watanabe is Associate Professor of Cognitive Science at the University of Tokyo. He received his PhD in Computation and Neural Systems from the California Institute of Technology in 2001, for his work on cross-modal interaction in humans. He was a research fellow at the National Institutes of Health (USA) and a researcher at the National Institute of Advanced Industrial Science and Technology (Japan). His research interests include: scientific investigations of explicit and implicit processes, interdisciplinary approaches to cognitive science, and real-life applications of cognitive science.
Automatic guidance of attention from working memory
Wednesday 23rd March, 4pm
BPLT
David Soto, Lecturer, Department of Medicine, Imperial College London
Abstract: I will talk about recent research showing interactions between the process of keeping information 'online' in working memory, and the attention processes that select relevant information for action. I will show how human visual attention in health and disease can be strongly influenced by whether or not the stimuli in the scene match the current contents of working memory. Attentional guidance from the contents of working memory occurs automatically, even when it is detrimental to performance; new behavioral data suggest that working memory guidance can arise even when the working memory content is not consciously seen. I will also present data from ongoing functional brain imaging projects delineating the distinct neural mechanisms supporting guidance of attention from working memory and from implicit memory.
Biography: Born in A Coruna (Spain), licenciado in Psychology, and PhD in Experimental Psychology from the University of Santiago de Compostela. He was a post-doctoral research fellow at the Behavioural Brain Sciences Centre in Birmingham, UK, a visiting fellow at Harvard Medical School, and then a research fellow of the British Academy at the Centre for Neuroscience at Imperial College London. He is now a Lecturer at Imperial funded by a New Investigator Grant from the Medical Research Council. His main research interests revolve around the interplay between memory and attention in health and disease.
Inverse mapping the neuronal correlates of face categorizations
Wednesday 30 March, 4pm
BPLT
Dr Marie Smith, Lecturer, Department of Psychological Sciences, Birkbeck College, University of London
Dynamically tracking the processing of local and global visual information in the brain.
Abstract: One of the fundamental goals in cognitive neuroscience is to relate modulations in brain activity to perceptual and cognitive functions. Of critical importance is identifying the specific information being processed and how this information is distributed and transferred throughout the different brain regions involved. In this talk, I will present a reverse correlation methodology that makes it possible to directly study information processing in the brain, and report the application of these techniques to the study of face perception and the processing of local and global visual information in the brain.
Biography: After a degree in Maths and Physics, I completed a PhD in physics at the University of Glasgow, Scotland in 2003 on the modeling of electromagnetic radiation emitted by living tissues (e.g. the brain). I then moved to a post-doctoral position with Professor Philippe Schyns in the department of Psychology, University of Glasgow looking at novel ways of interpreting brain-imaging data in terms of specific visual information processing content. From there I moved to a position at the MRC Cognition and Brain Sciences Unit in Cambridge where I continued exploring these topics, while also looking at the effects of sleep deprivation on recollection and familiarity in collaboration with Dr Richard Henson. In January 2010 I joined the Department of Psychological Sciences at Birkbeck College as a lecturer.
Consciousness and Connectivity
20 October 2010
Professor Murray Shanahan
Computing, Imperial College, London
Abstract: How might the brain be organised so as to realise the globally integrated states that are hypothesised to be the hallmark of the conscious condition? In this talk, I will argue that the answer resides in the pattern of long-range neural connections that constitute the brain's communications infrastructure. These connections enable information and influence from around the brain to funnel into a connective core, from where it can be broadcast back out again. Thanks to the structure of this connective core, which acts as a global neuronal workspace, a serial procession of thoughts is distilled from the activity of massively many parallel processes, and unity arises out of multiplicity. The talk summarises the central chapter of my book Embodiment and the Inner Life.
Biography: Murray Shanahan is Professor of Cognitive Robotics at Imperial College London. He is primarily interested in cognitive architecture, both as it is found in nature and as it might be realised artificially. Because he is committed to the view that cognition and embodiment are intimately related, he has a strong interest in robotics (robots providing a vehicle for testing theories of cognition). Murray is also interested in consciousness and sees consciousness and cognition as closely linked.
Measuring consciousness: from behaviour to neurophysiology
27 October 2010
Dr. Anil Seth
Co-Director Sackler Centre for Consciousness Science, School of Informatics, University of Sussex
Abstract: How can we measure whether a particular sensory, motor, or cognitive event is consciously experienced or remains unconscious? Such measurements provide the essential data on which a science of consciousness depends, yet there is no clear consensus on how such measurements should be made. Much of what we know derives from subjective (introspective) verbal report, but on some theories such reports confound mechanisms of metacognitive access with mechanisms of consciousness and are also susceptible to biases. In response, there has been a growing emphasis on neurophysiological measures as well as on behavioral measures that do not rely on introspection. But for these 'objective' measures it can be hard to guarantee that they are measuring consciousness per se. I will review definitional, methodological, and conceptual issues surrounding the problem of measuring consciousness and describe specific examples based on both behavior and on neurophysiology. In the former case I will focus on 'post-decision wagering', and in the latter, on measures of 'information integration' and 'causal density' in neural dynamics.
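For the 'causal density' measure mentioned above, a minimal sketch is possible: average the pairwise Granger-causal influences across a multivariate time series. The Python below uses only lag-1 regressions and reports influence magnitudes rather than significance tests, so it is a simplification of the published measure, offered purely for orientation.

```python
# A minimal sketch of 'causal density': the mean of pairwise Granger-causal
# influences in a multivariate time series. Lag-1 only, and magnitudes
# (log variance ratios) stand in for proper significance testing.
import numpy as np

def granger_lag1(x, y):
    """Log ratio of residual variances: does x's past help predict y?"""
    Y = y[1:]
    full = np.column_stack([y[:-1], x[:-1], np.ones(len(Y))])
    restricted = np.column_stack([y[:-1], np.ones(len(Y))])
    res_f = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    res_r = Y - restricted @ np.linalg.lstsq(restricted, Y, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

def causal_density(data):
    """Mean pairwise influence over all ordered pairs of variables."""
    n = data.shape[1]
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    return np.mean([granger_lag1(data[:, i], data[:, j]) for i, j in pairs])

rng = np.random.default_rng(0)
data = rng.standard_normal((500, 4))
data[1:, 1] += 0.5 * data[:-1, 0]        # variable 0 drives variable 1
print(f"causal density ~ {causal_density(data):.3f}")
```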
Biography: Anil Seth is currently a Reader in the School of Informatics at the University of Sussex, co-director of the Sackler Centre for Consciousness Science, and an EPSRC Leadership Fellow in computational neuroscience. Research in his group integrates mathematical, theoretical, and experimental approaches to unravelling the neural mechanisms underlying consciousness, in humans and other animals, and in health and in disease. A second and complementary interest lies in statistical approaches to causal inference in complex network dynamics. Anil studied natural sciences at Cambridge, artificial intelligence at Sussex, and spent five years as a Postdoctoral and Associate fellow at the Neurosciences Institute in San Diego before returning to Sussex in 2006.
Metaphor is Both Simpler and More Quirky than You Thought: Lessons from an AI Assault
3 November 2010
Professor John Barnden
Computer Science, University of Birmingham
Abstract: Metaphor is a central aspect not only of literary language but also of mundane types of discourse such as ordinary conversation, newspaper articles, explanatory documents for the public, and popular novels. It is also central in forms of communication other than language, such as gesture, pictures, diagrams, music, ... Thus, AI systems capable of interacting naturally with people, and capable of understanding communication between people, must ultimately be able to understand and produce metaphor. However, despite the high salience of metaphor in fields such as Psychology, Philosophy and Linguistics, AI has given relatively little attention to the topic. In the talk I will outline an approach to parts of the problem of metaphor understanding that I have been developing. This approach is partially realized in an AI system called ATT-Meta. The approach is focussed largely on how to understand metaphorical language that rests on familiar metaphorical conceptions of what is being talked about but builds in an open-ended and possibly creative way on those conceptions. I will discuss how this type of metaphor shows that it is misguided to rely too firmly on the idea that metaphor rests on knowing or finding a detailed analogy between the two subject matters related by a metaphorical utterance. I will also argue that sometimes the task of understanding a partially metaphorical piece of discourse can involve episodes of mentally translating what is literally presented into metaphorical terms, not just trying to translate the metaphorical into the literal.
Biography: John Barnden has been Professor of Artificial Intelligence at the School of Computer Science at the University of Birmingham since 1997. Previously he worked at the computer science departments of Indiana University in Bloomington, Indiana, USA and New Mexico State University in Las Cruces, New Mexico, USA, following a post-doctoral project at Reading University, England. He was educated as a mathematician at Trinity College, Cambridge and as a computer scientist there and at Oxford University, where he obtained his D.Phil. in 1976. He has interests in figurative language generally, in various aspects of reasoning (notably reasoning about mental states), and related areas of philosophy and psychology. He would like to have time to do research in areas such as diagrammatic cognition, emotion and consciousness. He was until recently chair of AISB (Society for the Study of Artificial Intelligence and Simulation of Behaviour) and is now the vice-chair. He was a founding board member of RaAM, the International Association for Researching and Applying Metaphor. More soberly he is on the Research Committee of the recently-instituted BCS Academy of Computing.
One step forward, two steps back: explaining the slow progress in understanding the origins of individual differences in mathematics
17 November 2010
Dr Yulia Kovas
Department of Psychology, Goldsmiths, University of London
Abstract: Recent twin research has revealed a strong genetic basis to mathematics. Molecular genetic research has begun to identify DNA polymorphisms that contribute to variation in mathematical ability. However, our research suggests that the mechanisms of this contribution are extremely complex, which explains why the progress in this area has been slow. The complexity is further increased due to potential cultural effects on different aspects of mathematical cognition. Here I present two lines of investigation. First, I discuss the results from the UK longitudinal, population-based Twins' Early Development Study demonstrating complex gene-environment mechanisms. Second, I discuss the cross-cultural work into numerical cognition. In terms of genetics, our research suggests that many DNA polymorphisms contribute to mathematical ability, and each of them has only a small and probabilistic contribution to the person's position on the 'mathematical ability continuum'.
Although many of the same genetic effects continue to be important for mathematics across development, new genetic effects also come online at each age. In addition, many of the DNA polymorphisms that contribute to variation in mathematical ability at a particular age also contribute to variation in other learning abilities at the same age, but less so at other ages. Our research also shows that the effects of genes on mathematical ability may not be the same in different environments. For example, genetic risk of poor mathematical performance seems to be mediated by the way children experience and perceive their learning environment – so that the effects of the risk genes are suppressed when the child's classroom experiences are positive. Our cross-cultural research suggests multiple sources of cultural contributions to numerical and mathematical variation. Understanding these complexities is of great importance for future research and for ultimate progress in understanding the origins of mathematical achievement and underachievement.
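A hypothetical Python sketch can make the quantitative picture concrete: many polymorphisms of small effect sum to a position on an ability continuum, and a gene-by-environment term suppresses expressed genetic risk when classroom experience is positive. The effect sizes and the moderation rule below are invented purely for illustration.

```python
# Illustrative sketch only: many small-effect polymorphisms summing to a
# position on a 'mathematical ability continuum', with a hypothetical
# gene-by-environment term dampening genetic risk in positive classrooms.
import numpy as np

rng = np.random.default_rng(1)
n_children, n_snps = 1000, 200

genotypes = rng.binomial(2, 0.5, size=(n_children, n_snps))  # 0/1/2 alleles
effects = rng.normal(0, 0.05, size=n_snps)    # each SNP contributes little
genetic_risk = genotypes @ effects

classroom = rng.uniform(0, 1, size=n_children)   # 1 = very positive experience
expressed = genetic_risk * (1 - 0.7 * classroom) # hypothetical moderation
ability = expressed + rng.normal(0, 1, size=n_children)

# The gene-ability correlation is weaker where classroom experience is good.
for name, mask in [("negative", classroom < 0.33), ("positive", classroom > 0.67)]:
    r = np.corrcoef(genetic_risk[mask], ability[mask])[0, 1]
    print(f"{name} classroom experience: gene-ability correlation {r:.2f}")
```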
Biography: Yulia Kovas received her Ph.D. in 2007 from the SGDP Centre, Institute of Psychiatry. She received a degree in Literature and Linguistics as well as teaching qualifications from the University of St Petersburg, Russia in 1996 and taught children of all ages for 6 years. She received a B.Sc in Psychology from Birkbeck College, University of London in 2003 and an MSc in Social, Genetic, and Developmental Psychiatry from the SGDP Centre, King's College. Her current interests include genetic and environmental etiology of individual differences in mathematical ability and disability and the etiology of covariation and comorbidity between different learning abilities and disabilities. Dr Kovas is the head of the InLab – an international, interdisciplinary research lab dedicated to numerical ability and other STEM fields. She is leading the genetically-sensitive mathematics research in the Twins Early Development Study at the SGDP Centre, King's College, London and is involved in a number of cross-cultural studies dedicated to understanding sources of variation in numerical ability and achievement.
Linguistic sense-making: from Maturana to biosemiotics
24 November 2010
Dr. Stephen Cowley
Psychology, Hertfordshire University
Abstract: In dialogue, language spreads through brain, body and the world. What we ordinarily do eludes both models of autonomous 'language systems' and accounts of how we manipulate material linguistic symbols. Rather, while having a local aspect, language is also non-localizable: it is at once measurable and traceable to a community's history. Accordingly, its phylogenetic, ontogenetic and microgenetic manifestations are most suitably traced to languaging. In recent years, there has been a boom in theoretical and empirical work that addresses how people language. New importance has been given to pico-scale resonances (lasting tens of milliseconds). Languaging (1) is biocognitive; (2) depends on particularities; and (3) prompts situation-transcending 'thoughts'. How is this to be explained? In this paper, I contrast De Jaegher and Di Paolo's (2007) participatory sense-making with approaches based in the principles of biosemiotics. It is stressed that languaging (1) traces its ontology to relations, not the observable; (2) can use semantic biology to explain how autopoiesis uses natural artefacts (including DNA); and (3) allows life - and language - to treat autonomous agents (including humans) as resources used in expanding into a changing possibility space. It is shown that these principles can be used to clarify how linguistic activity integrates virtual, dynamical and material features whose origins and functions draw on quite different histories. While participatory sense-making is a valuable model for enactivist simulations, in itself it is far from sufficient to ground linguistic sense-making.
Biography: Stephen Cowley is a Senior Lecturer in Developmental Psychology at the University of Hertfordshire, UK. While his PhD was in Linguistics, since 2000 he has lectured on Cognitive Science and Psychology. He founded and co-ordinates the Distributed Language Group. This international group of scholars aims to replace code models of language with naturalistic approaches to the directed, dialogical activity that gives human intelligence a collective dimension. In empirical work, Stephen has pursued this around how we resonate with and resist other people's voices, mother-infant interactions, social robotics, and how decisions come to be made during medical simulation.
Perception, Causation and the Scientific Study of Human and Machine Consciousness
1 December 2010
Dr David Gamez
Electrical Engineering, Imperial College London
Abstract: The talk will start by showing that the virtual reality model of perception is the only coherent way of explaining how phenomenal experiences can appear outside of the body. The virtual reality framework also accounts for hallucinations, dreams, and out of body experiences, and it leads to a clear distinction between the phenomenal world of our experiences and the physical world described by science. This distinction will be used to dissolve the hard problem of consciousness and to demonstrate how a science of consciousness is possible. Such a science will be based on mathematical and algorithmic theories of consciousness that can make precise predictions about human conscious states. These predictions will be validated by comparison with first person reports, and the talk will highlight some of the problems with phenomenal-physical causation that are likely to affect the reporting of conscious states. The talk will conclude with a discussion of consciousness in animals and artificial systems.
Biography: David Gamez studied for a BA in natural sciences and philosophy at Trinity College, Cambridge, and completed a PhD in Continental philosophy at the University of Essex. After a couple of years working on agent-based artificial intelligence he completed a second PhD on machine consciousness as part of an EPSRC-funded project to build a conscious robot. He is currently at Imperial College, London, where he is working on new techniques for the simulation and analysis of spiking neural networks. Gamez is the author of What We Can Never Know - a book exploring the limits of philosophy and science through studies of perception, time, madness and knowledge - and the co-editor of What Philosophy Is - a collection of essays on the nature of philosophy. He is currently working on a book that will provide a systematic framework for the scientific study of human and machine consciousness.
Chinese Whispers and Virtual Arrowheads: Experimental Studies of Human Cultural Evolution
8 December 2010
Dr Alex Mesoudi
Biological and Experimental Psychology Group, Queen Mary University of London.
Abstract: Over the last few decades, a growing body of theory has begun to analyse human culture - the body of beliefs, skills, knowledge, customs, attitudes and norms that is transmitted from individual to individual via social learning - as a Darwinian evolutionary process. Just as the biological evolution of species can be characterised as a Darwinian process of variation, selection and inheritance, so too culture exhibits these basic Darwinian properties. I will present the results of a series of experiments that have simulated cultural evolution in the lab using methods from social psychology. One set of studies using the "transmission chain method" has identified a bias in cultural evolution for information concerning social interactions over non-social interactions, as predicted by the "social brain" theory of human intelligence. Another set of studies has simulated the cultural evolution of prehistoric arrowhead designs, testing hypotheses that different patterns of arrowhead variation are caused by different ways in which arrowhead designs were transmitted between prehistoric hunter-gatherers. Generally, psychology experiments offer a valuable tool for studying human cultural evolution, while at the same time a cultural evolutionary framework can provide added validity to psychology experiments by linking them to patterns and trends in the ethnographic and archaeological records.
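As a toy illustration of the transmission chain method (not the actual experimental materials), the Python sketch below passes a 'story' down successive generations, assuming social propositions are retained more reliably than non-social ones; the retention probabilities are invented.

```python
# A toy 'transmission chain' (Chinese whispers): each generation passes on
# each proposition with a type-dependent probability. The social-brain
# account predicts social material survives longer; rates here are invented.
import random

def transmit(story, p_recall):
    """One generation retells the story, dropping items probabilistically."""
    return [item for item in story if random.random() < p_recall[item[0]]]

p_recall = {"social": 0.95, "nonsocial": 0.80}    # hypothetical retention rates
story = [("social", i) for i in range(10)] + [("nonsocial", i) for i in range(10)]

for generation in range(1, 9):
    story = transmit(story, p_recall)
    counts = {k: sum(1 for t, _ in story if t == k) for k in p_recall}
    print(f"generation {generation}: {counts}")
```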
Biography: I am currently Lecturer in the Biological and Experimental Psychology Group, Queen Mary University of London. I did my PhD in the School of Psychology at the University of St Andrews, and have held postdoctoral research posts at the University of Missouri-Columbia, University of British Columbia and University of Cambridge. My research interests lie in the experimental study of human cultural transmission and theoretical studies of human cultural evolution.
The Psychology and Neurobiology of Musical Virtuosity
15 December 2010
Professor Justin London
Music, Carleton College, Northfield USA
Abstract: We are all amazed and enchanted to hear the performance of a musical virtuoso, whether it is Jimi Hendrix or Itzhak Perlman. But what makes a virtuoso a virtuoso - or, to put it another way, what are the cognitive constraints and sensorimotor affordances for musical virtuosity? Is virtuosity simply a matter of playing very, very fast? Why is virtuosity a solo art? In this lecture the interaction between virtuosity and innate human limits on rhythm and timing is discussed, including the outer limits of musical speed (illustrated with examples from the "World's Fastest Drummer Competition"), interpersonal coordination, and musical expression. Other topics addressed will include the "10,000 hour" rule, Fitts' Law, and the Hick-Hyman Law in relation to skilled musical behavior. The talk concludes with aesthetic considerations of both virtuosity and anti-virtuosity, the latter as exemplified in "outsider" music.
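Fitts' Law, one of the constraints mentioned above, has a simple closed form: movement time grows with the index of difficulty, MT = a + b log2(2D/W), where D is the distance to the target and W its width. The Python sketch below uses invented constants purely for illustration; real values of a and b are fitted per performer and task.

```python
# Fitts' Law: MT = a + b * log2(2D / W). Constants a and b are invented
# here for illustration only.
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) for a target of given width at a distance."""
    return a + b * math.log2(2 * distance / width)

# A drummer's stroke: each halving of the target width costs a fixed
# time increment, one reason sheer speed trades off against precision.
for width in (4.0, 2.0, 1.0):
    print(f"D=30cm, W={width}cm -> MT = {fitts_mt(30, width):.3f} s")
```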
Biography: Justin London is Professor of Music at Carleton College in Northfield, MN, USA, where he teaches courses in Music Theory, The Philosophy of Music, Music Perception and Cognition, and American Popular Music. Trained as a classical guitarist, he holds the Ph.D. in Music History and Theory from the University of Pennsylvania where he studied with Leonard Meyer. He has written articles and reviews on a wide range of subjects, from humor in Haydn to the perception of complex meters. His book Hearing in Time (Oxford University Press, 2004) is a cross-cultural exploration of the perception and cognition of musical meter. In 2005-2006 he was a visiting scholar at the Centre for Music and Science of Cambridge University under the auspices of a UK Fulbright Foundation grant. He has given many talks and symposia, including the Mannes Institute for Advanced Studies in Music Theory (New York, 2005), the International Orpheus Academy for Music & Theory (Ghent, Belgium, 2007), and the Interdisciplinary College (IK) in cognitive science (Günne, Germany, 2009 & 2010). He served as President of the Society for Music Theory in 2007-2009.
Computation of emotion in man and machine
20 January, 2010
Prof. Peter Robinson, Professor of Computer Technology, University of Cambridge Computer Laboratory, UK
The importance of emotional expression as part of human communication has been understood since the seventeenth century, and has been explored scientifically since Charles Darwin and others in the nineteenth century. Advances in computer technology now allow machines to recognise and express emotions, paving the way for improved human-computer and human-human communications. This talk presents some recent advances in theories of emotion and affect, their embodiment in computational systems, the implications for general communications, and broader applications.
Recent advances in Psychology have greatly improved our understanding of the role of affect in communication, perception, decision-making, attention and memory. At the same time, advances in technology mean that it is becoming possible for machines to sense, analyse and express emotions. We can now consider how these advances relate to each other and how they can be brought together to influence future research in perception, attention, learning, memory, communication, decision-making and other applications.
The computation of emotions includes both expression and recognition, using channels such as facial expressions, non-verbal aspects of speech, posture, gestures and general behaviour. The combination of new results in psychology with new techniques of computation on new technologies will enable new applications in commerce, education, entertainment, security, therapy and everyday life. However, there are important issues of privacy and personal expression that must also be considered.
Brief Bio: Peter Robinson is Professor of Computer Technology in the Computer Laboratory at the University of Cambridge, where he leads the Rainbow Research Group working on computer graphics and interaction. His research concerns problems at the boundary between people and computers. This involves investigating new technologies to enhance communication between computers and their users, and new applications to exploit these technologies. The main focus for this is human-computer interaction, where he has been leading work for some years on the use of video and paper as part of the user interface. The idea is to develop augmented environments in which everyday objects acquire computational properties through user interfaces based on video projection and digital cameras. Recent work has included desk-size projected displays and inference of users' mental states from facial expressions, speech, posture and gestures.
Professor Robinson is a Fellow of Gonville & Caius College where he previously studied for a first degree in Mathematics and a PhD in Computer Science under Neil Wiseman. He is a Chartered Engineer and a Fellow of the British Computer Society.
What pops out in pop-out?
A dimension-weighting account of visual search for salient pop-out targets.
27 January, 2010
Professor Hermann Müller, Professor of Experimental/Cognitive Psychology,
Universities of Munich, Germany, and London (Birkbeck College)
Visual search for salient singleton targets, such as a red object amongst green (distractor) objects, is surprisingly efficient - phenomenally, such targets appear to 'pop out' of the search display in a seemingly automatic, purely bottom-up driven fashion. However, research in my laboratory conducted over the last decade has shown that pop-out target detection is subject to dimension-specific processing, or competitive 'weighting', limitations, such that if our visual system is tuned to detecting, say, color-defined targets, its capacity for detecting motion-defined targets is reduced. Also, while these dynamic weighting processes are largely bottom-up driven, they may be modulated by dimensional top-down expectancies. Dimensional weighting in the brain is implemented within a fronto-posterior network of brain areas, with (left) fronto-polar mechanisms initiating the (re-)adjustment of dimensional weight settings, which, via temporal and parietal areas, modulates the processing efficiency in dimension-specific visual processing areas such as V4 (for color) and hMT+ (for motion). In the lecture, I will present the psychophysical and neuro-scientific evidence for this account and discuss its implications for theories of visual selective attention as well as applications, e.g., in human-robot interaction.
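A minimal Python sketch of the dimension-weighting idea (a simplification, not Müller's actual model): overall saliency is a weighted sum of dimension-specific feature maps, and detecting a target in one dimension shifts limited weight toward that dimension. The maps, weights and learning rate are invented for illustration.

```python
# A minimal sketch of dimension weighting: saliency is a weighted sum of
# dimension maps, and a detected target shifts weight toward its dimension.
import numpy as np

def detect(feature_maps, weights):
    """Weighted sum of dimension maps; return the location of the peak."""
    total = sum(weights[k] * feature_maps[k] for k in weights)
    return np.unravel_index(np.argmax(total), total.shape)

def reweight(weights, winning_dim, lr=0.3):
    """Shift weight toward the last target's dimension; weights are
    normalised so total capacity is fixed (the competitive limitation)."""
    weights[winning_dim] += lr
    s = sum(weights.values())
    return {k: v / s for k, v in weights.items()}

rng = np.random.default_rng(2)
maps = {"colour": rng.random((10, 10)), "motion": rng.random((10, 10))}
maps["colour"][3, 7] = 3.0                 # colour-defined pop-out target

weights = {"colour": 0.5, "motion": 0.5}
print("target found at", detect(maps, weights))
weights = reweight(weights, "colour")
print("weights after a colour target:", weights)  # motion now disadvantaged
```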
Brief Bio: Hermann Müller is currently Professor of Experimental/Cognitive Psychology at the Universities of Munich, Germany, and London (Birkbeck College). He studied Psychology at the Universities of Wuerzburg, Germany (MSc), and Durham (PhD). After a number of post-doc years (with Professors P.M.A. Rabbitt and G.W. Humphreys), he became a Lecturer/Senior Lecturer/Reader at Birkbeck College, followed by a Professorship at the University of Leipzig (1997-2000). In Munich, he heads a large research group dedicated to the study of selective attention in vision, memory and action. Having worked on all aspects of visual selection (space-based, feature-/dimension-based, object-based), over the last decade or so he has become particularly interested in visual search, an activity we engage in every day.
Modelling Jazz Virtuosity
3 February, 2010
Dr. François Pachet, Sony CSL-Paris, 6 rue Amyot, 75005 Paris, France
In this talk I focus on the particular problem of generating virtuoso Bebop melodies. The problem of modeling Jazz improvisation has received a lot of attention recently, thanks to progress in machine learning and statistical modeling, and to the increase in the computational power of machines. The Continuator (Pachet, 2003) was the first real-time interactive system to allow users to create musical dialogs using style learning techniques. The Continuator is based on a model of musical sequences using Markov chains, a technique that has been shown to be well adapted to capturing stylistic musical patterns, notably in the pitch domain. The Continuator had great success in free-form improvisational settings, in which users freely explore a musical language created on the fly, without additional musical constraints, and it was used with Jazz musicians as well as with children (Addessi & Pachet, 2005). However, the Continuator, like most systems using Markovian approaches, is difficult, if not impossible, to control. This limitation is intrinsic to the greedy, left-to-right nature of Markovian music generation algorithms. Consequently, it has so far been difficult to use these systems in highly constrained musical contexts such as Bebop.
I propose here a computational model of virtuosity based on a novel, combinatorial view of Markov sequence generation. This model solves the "control" problem inherent in Markov chain generation, and also provides a very fine degree of control to the user. I illustrate this work with a controllable Bebop improvisation generator. Bebop was chosen as it is a particularly "constrained" style, notably harmonically. I will show how this technique can generate improvisations that satisfy three types of constraints (a toy sketch of the basic generation scheme follows the list below):
- harmonic constraints derived from the rules of Bebop;
- "side-slips", which extend the boundaries of Markovian generation by producing locally dissonant but semantically equivalent musical material that smoothly comes back to the authorized tonalities; and
- non-Markovian constraints deduced from the user's gestures.
I will try to convince the audience that 1) these generated phrases are of the same nature as what real virtuosos are able to generate, and 2) the ability to control these phrases is a highly enjoyable process.
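As promised above, here is a toy Python sketch of the basic scheme, not Pachet's actual system: a first-order Markov chain over pitches with a greedy harmonic filter. The transition table and the 'authorized' pitch set are invented; the dead-end case in the code is exactly the control problem that motivates the combinatorial approach.

```python
# A toy sketch, not Pachet's system: a first-order Markov chain over MIDI
# pitches, with a greedy filter that only accepts continuations belonging
# to a hypothetical 'authorized tonality'.
import random

transitions = {                      # invented pitch-transition table
    60: [62, 64, 67], 62: [60, 64, 65], 64: [62, 65, 67],
    65: [64, 67, 69], 67: [64, 65, 69], 69: [65, 67, 60],
}
allowed = {60, 62, 64, 65, 67, 69}   # hypothetical authorized pitch set

def generate(start=60, length=16):
    phrase = [start]
    for _ in range(length - 1):
        options = [n for n in transitions[phrase[-1]] if n in allowed]
        if not options:              # greedy, left-to-right generation can
            break                    # dead-end: the 'control problem'
        phrase.append(random.choice(options))
    return phrase

print(generate())
```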
Brief Bio: François Pachet received a Ph.D. in Artificial Intelligence from the University of Paris 6. He is now a senior researcher at Sony Computer Science Laboratories in Paris, where he conducts research on new forms of musical experience.
What Mrs Thatcher taught me about face processing
10 February, 2010
Professor Nick Donnelly, University of Southampton
The Thatcher illusion is thought to demonstrate the processing of perceptual configural features in faces. The basis for this inference is the immediacy of the phenomenological experience of grotesqueness that emerges when the eyes and mouth are inverted in otherwise upright faces. I will report on three experiments that explore how Thatcher faces are discriminated from typical faces. The results of these experiments are inconsistent with perceptual configural processing of Thatcher (and typical) faces. I will go on to argue that none of the commonly reported behavioural tests of configural face processing actually provides supporting evidence for the perceptual processing of configurations. I will finish by considering the implications for studies of both the development of face processing and face processing in atypical populations.
Brief Bio: Nick Donnelly is currently head of the School of Psychology at the University of Southampton. He studied for his PhD at the University of Wales, Swansea, graduating in 1989. Since then he has worked at Birkbeck College, the University of Birmingham, and the University of Kent at Canterbury before moving to Southampton in 1999. His research focuses on issues of configurality in visual processing and visual search.
Body Movement as a Modality for Affective Human-Computer Interaction
24 February, 2010
Dr. Nadia Berthouze, UCLIC, University College London, UK
Brief Bio: Since 2006 Nadia Berthouze has been a lecturer in the UCL Interaction Centre (UCLIC) at the University of London, a centre for Human-Computer Interaction. After her PhD (1995) in Computer Science and Bio-medicine at the University of Milan (Italy), she spent 5 years first as a postdoctoral fellow and then as a COE fellow at the Electrotechnical Laboratory (Tsukuba, Japan), where she investigated HCI aspects of multimedia information interpretation with a focus on the interpretation of affective content. In 2000, she was appointed as a lecturer in the Computer Software Department of the University of Aizu in Japan, where she extended her interest in emotion expression to the study of non-verbal affective communication. The premise of her research is that affect, emotion, and subjective experience should be factored into the design of interactive technology. At the centre of her research is the creation of interactive systems that exploit body movement as a medium to induce, recognize and measure the quality of experience of humans. She is investigating the various factors involved in the way body movement is used to express and experience emotions, including cross-cultural differences and task context. She was awarded a 2-year International Marie Curie Reintegration Grant (AffectME) to investigate these issues in the clinical domain and in the gaming industry. In the clinical domain, she is investigating how to design technology that supports self-directed rehabilitation in chronic musculoskeletal pain. In the area of computer games, she is investigating how an increase in task-related body movement imposed, or allowed, by the game controller affects the player's game experience.
Top-down and bottom-up processes in visual search
3 March, 2010
Dr Michael Proulx, Queen Mary, University of London
How does one visually search for a target? Understanding the relative roles of top-down and bottom-up guidance is crucial for models of visual search. Here an attentional-capture method was used to address the role of top-down and bottom-up processes in visual search for features and conjunctions. I will discuss two features, brightness and size, as examples of what can capture attention due to the use of bottom-up guidance. I will then turn to why and how attention is captured by these features as a function of the perceptual load or difficulty of the task, which predicts the reliance on bottom-up mechanisms to guide attention. Next I will present evidence that bottom-up mechanisms are even used in visual search for a conjunction of features. Finally I will also describe recent work considering more complex 'features' that may guide bottom-up mechanisms in visual search, such as size created by the Müller-Lyer illusion.
Brief Bio: Michael Proulx first studied psychology at Arizona State University, where he conducted research on categorisation with Donald Homa. He then received an MA and PhD at Johns Hopkins University, where he began research on visual attention and perception under Howard Egeth and Steve Yantis. In Germany he then worked as a postdoc in Düsseldorf and expanded his research in a crossmodal direction, studying sensory substitution devices for blind persons in particular. He is now Lecturer in Cognitive Psychology at Queen Mary University of London.
Affective agents and interactive narrative
10 March, 2010, 3pm
Prof. Ruth Aylett, Heriot-Watt University
Interactive narrative research struggles with the conflict between a pre-authored plot and responsiveness to the actions of a participating user. In this talk we examine the 'narrative paradox', and discuss the concept of 'emergent narrative' and the requirements for intelligent affective characters to which it gives rise.
Brief bio: Ruth Aylett is a Professor of Computer Science at Heriot-Watt University, in the research group VIS&GE (Vision, Interactive Systems and Graphical Environments). She has been researching affective agent architectures and interactive narrative for more than 10 years and was the coordinator of the EU project eCIRCUS (www.e-circus.org), which has investigated some of these issues through the educational virtual drama systems FearNot! and ORIENT.
Seeing is a verb: Neurological observations on visual awareness
4.15pm
Professor Bob Rafal, MD
Professor of Clinical Neuroscience and Neuropsychology, School of Psychology, University of Bangor
When an action is made, an 'efference copy' of the motor signal is sent by recurrent collaterals from each level of the motor system to the preceding level from which it received the signal (Sommer & Wurtz, 2008). Here we are considering a very special kind of efference copy - the corollary discharge - recorded in sensory cortex that will be stimulated as a result of movement. One visual area receiving corollary discharge, the intraparietal cortex, has neurons that remap their receptive fields before and after eye movements, and that exhibit particularly interesting properties. First, neurons there remap the entire visual field, regardless of the direction of eye movement or the stimulus location. Secondly, they only remap stimuli that are behaviourally salient, i.e. that demand a high priority for action. I'll summarise observations in neurological patients, and from transcranial magnetic stimulation, showing that inactivation of this area disrupts saccadic updating of the salience map, as well as observations in patients with Balint's syndrome showing that the failure to update the salience map causes the visual scene to disappear. Finally, I'll discuss research that will test the hypothesis that 'seeing' arises from parietal corollary discharge - a prediction of the sensory consequences of action, and the testing of those predictions using the world as an 'external memory'.
Brief Bio: Bob Rafal did his first degree in Biology at the University of Delaware and received his MD from Jefferson Medical College in Philadelphia. He trained in neurology at the University of Oregon, and has taught at Brown University and the University of California, Davis. He has taught at Bangor University since 1999, and is Professor of Clinical Neuroscience and Neuropsychology in the School of Psychology and Consultant Neurologist for the North Wales Brain Injury Service.
Modulation of Emotion: A Computational and Real-time Functional MRI Approach
17 March 2010
Dr. Su Li
Medical Research Council Cognition and Brain Sciences Unit
In the first half, I will focus on computational modelling of the Attentional Blink (AB) effect, which suggests that there is a 400-500 ms window in which a second target (T2) is vulnerable to being missed following a first target (T1). A variant of this task (the key distractor AB task) explores emotional modulation of temporal attention. This experiment replaces T1 with an emotional distractor, the salience of which determines to what extent a following target is processed. I will present a model of the key distractor AB task based around the interaction of three subsystems: implicational, propositional and body-state. The implicational subsystem extracts a generic form of meaning, the propositional subsystem extracts referentially specific meaning, and the body-state subsystem reflects somatic responses. The model explains the key distractor AB effects in terms of the movement of attentional focus amongst the subsystems. Emotional effects are modelled through the interaction of all three subsystems. In addition, the model can be used to predict (P300-like) EEG components in the context of brain-computer interfaces (BCIs). Thus, individual variability can be built into the model for fast prototyping of BCIs.
In the second half, I will discuss the potential of using the model to inform neuroimaging studies. In this respect, we used an emotional AB task as pre- and post-tests of the behavioural changes introduced by neurofeedback paradigms, which allow participants to self-regulate using real-time functional magnetic resonance imaging (rt-fMRI). Our study of healthy volunteers suggests that right anterior insula (RAI) activation is amenable to self-regulation using suitable cognitive strategies and the Blood Oxygen Level Dependent (BOLD) signal as neural feedback. The RAI is a region that is implicated in affective processing and akin to the body-state subsystem in our model. The model therefore predicts that emotional interference from the key distractor will be enhanced after up-regulating the RAI. Furthermore, the RAI is found to be hypoactive in some psychiatric conditions, so such rt-fMRI not only provides a novel BCI but also has clinical potential as a non-pharmacological therapy.
Brief Bio: Li Su is an investigator scientist at the MRC Cognition and Brain Sciences Unit, and a research fellow at the Institute of Psychiatry, King's College London. He is also a senior member of Wolfson College, Cambridge. He has a degree in computer science from Beijing University of Posts and Telecommunications, and an MSc and PhD in computer science from the Centre for Cognitive Science and Cognitive Systems at the University of Kent. His research interests include affective processing in human attention, human-computer interaction and cognitive neuropsychiatry. He develops and applies methods such as real-time functional neuroimaging, multivariate pattern analysis for MEG/EEG, and computational modelling in systems cognitive neuroscience.
Ways of Mattering: Embodiment and Cognitive Extension
24 March 2010
Professor Michael Wheeler
Department of Philosophy, University of Stirling
According to the thesis of embodied cognition, bodily acts and environmental manipulations are often central aspects of an intelligent agent’s problem-solving strategies. According to the thesis of extended cognition, there are actual (in this world) cases of intelligent action in which thinking and thoughts are distributed over brain, body and world, in such a way that the external (beyond-the-skin) factors concerned are rightly accorded cognitive status. In this talk I shall interrogate the transition from embodied cognition to cognitive extension, via some reflections on the character of embodiment. Having described some empirical research from cognitive science which illuminates the embodied cognition hypothesis, I shall suggest that once one has accepted the resulting picture of intelligent action, there remains a choice to be made over precisely how to conceptualize the role of the body in the action-generation process. One way of understanding embodiment opens the door to extended cognition, the other shuts that door. Having resolved this choice in the manner that favours extended cognition, I shall argue that it is precisely by thinking through embodiment in this way that the extended cognition hypothesis may be defended against some recent and seemingly powerful criticisms.
Brief Bio: Michael Wheeler is Professor of Philosophy at the University of Stirling. He was previously a lecturer in the Department of Philosophy at the University of Dundee and before that worked at Christ Church, Oxford. Michael's primary research interests are in philosophy of science (especially cognitive science, psychology, biology, artificial intelligence and artificial life) and philosophy of mind. Michael also works on Descartes, on Heidegger, and on environmental philosophy. Although Michael's style of argument is firmly analytic, he remains keen to explore philosophy at the interface between the analytic and continental traditions.
Interaction and direct perception
14 January 2009, 2pm
Goldsmiths Cinema, Richard Hoggart Building, Goldsmiths
Dr Hanne De Jaegher
Marie Curie Fellow (University of Heidelberg) and COGS, University of Sussex
The process of social interaction is the first thing we need to understand if we want to get a grip on social cognition. I substantiate this proposal by discussing Shaun Gallagher's idea that direct perception forms an important aspect of understanding each other (Gallagher, S. (2008). Direct perception in the intersubjective context. Consciousness and Cognition, 17(2), 535-543). I show that the idea of direct perception is in danger of being appropriated by the very cognitivist accounts criticised by Gallagher (theory theory and simulation theory). Then I argue that the experiential directness of perception in social situations can be understood only in the context of the role of the interaction process in social cognition. Using the notion of participatory sense-making, I show that direct perception, rather than being a perception enriched by mainly individual capacities, can best be understood as an interactional phenomenon.
Hanne De Jaegher is a postdoctoral fellow on the Marie Curie Research Training Network DISCOS: Disorders and Coherence of the Embodied Self. She works in the Department of Psychiatry at the University of Heidelberg and is also a visiting research fellow at the Centre for Computational Neuroscience and Robotics (CCNR) at the University of Sussex. Her research focuses on how people understand each other. In particular, she investigates the implications of taking the process of interacting as the basis of social understanding.
Implicit Processes in Attention, Action, and Decision Making
14 January 2009, 4pm
Goldsmiths Cinema, Richard Hoggart Building, Goldsmiths
Prof. Katsumi Watanabe & Petter Johansson
Research Centre for Advanced Science and Technology, the University of Tokyo
Perception, action, and decision making are products of complex interactions between explicit and implicit processes. In this talk, we will present some of our recent work, where we employed several methods to examine implicit processes. One example is a methodology called choice blindness. The basic idea is to manipulate the outcome of people's choices without them noticing, and then measure how they respond to the alterations made. Not only do people seldom detect the change, they are also willing to give long and elaborate explanations for choices they in fact did not make. Others include implicit learning of attention guidance and implicit conformity behaviour. Through these empirical examples, we will illustrate considerable influences of implicit, unconscious processes on human behaviours.
Katsumi Watanabe is Associate Professor of Cognitive Science at the University of Tokyo. His research interests include: scientific investigations on explicit and implicit processes, interdisciplinary approaches to cognitive science, and real-life applications of cognitive science.
Petter Johansson is a researcher at the University of Tokyo, Japan and Lund University. His current interest centres on self-knowledge, how introspection relates to higher order as well as implicit processing.
Sensational material, touching design. Technologies, textures, pleasures
21 January 2009
Dr. Mark Paterson
University of Exeter, Department of Geography, School of Geography, Archaeology and Earth Resources
How do we speak these days of the sensory and affective qualities of experiencing an object or a built environment in a postphenomenological world? Further, is there a language that attempts to articulate such qualities in the design process? What are the techniques and technologies of evoking particular felt qualities, or managing certain sensibilities? And importantly, how can we answer these questions without recourse to a standardised phenomenological terminology that returns us continually to humanistic territory? In terms of the engineering of material-spatial experience, what terminology or lexicon ‘fits’, has purchase? After briefly revisiting the phenomenological architectural literature, I seek to explore a more abstract architectural sensorium, initially through Massumi’s ‘biograms’, and discuss some of the limitations in the literature for speaking about the somatic (bodily) senses within spatial encounters, including kinaesthesia, proprioception, and the vestibular sense.
Furthermore, new methodological approaches are emerging that foreground embodiment through attention to these corporeal performances. Crang highlights the dearth of truly “haptic knowledges” (2003:499), of learning through the immediacy of bodily responses and situations. Insofar as methodological approaches engage with the senses they remain largely ocularcentric or visually-based (e.g. Rose 2000). Similarly, while Imrie (2003) has explored conceptions of the human body within the architectural design stage, few have studied how embodied responses are conceptualised and anticipated by users or practitioners alike. Attempting to rethink the confluence of material design, corporeality, affects and sensations in the experience of a building entails developing haptic knowledges in literally ‘concrete’ contexts. Theoretical and empirical axes will be drawn together by means of a case study of a building in downtown Sydney.
Mark Paterson is Lecturer in Human Geography at the University of Exeter. Between 2002 and 2006 he was Lecturer in Philosophy at the University of the West of England (UWE). In 2002 he completed his Ph.D. in Human Geography at the University of Bristol entitled ‘Haptic Spaces’. After writing Consumption and Everyday Life (Routledge, 2005), he wrote The Senses of Touch: Haptics, Affects and Technologies (Berg, 2007) in Sydney, Australia, with a grant from the Arts and Humanities Research Council (AHRC). He has published journal articles in philosophy and social science journals, and received grants to look at robot skin and the haptic modelling of prehistoric textiles. Currently he is writing Seeing with the Hands: A Philosophical History of Blindness for Reaktion.
From the Dynamic Core to a Small World: the role of functional connectivity and cortical oscillations in the emergence of consciousness
28 January 2009
Prof Adrian Burgess (Subject Leader, Psychology, School of Life & Health Sciences, Aston University, UK).
The search for a neural correlate of consciousness has focused for many years on the role of cortical oscillatory activity and it has been claimed that conscious perception is associated with changes in both local oscillatory activity (e.g. gamma oscillations ~40Hz) and connectivity between different brain areas (i.e. functional connectivity). However, as both gamma activity and functional connectivity can be seen in the absence of consciousness, neither can be considered to be neural correlates of the process. To overcome this problem, it has been proposed that it is not the presence or absence of functional connectivity in any given frequency range but the pattern (i.e. the topology) of the connections that is critical.
For example, Tononi & Edelman’s (1998) Dynamic Core Hypothesis proposes that for consciousness to occur there must be a specific pattern of information exchange within the brain which they call Neural Complexity. The great strengths of the Dynamic Core Hypothesis are that i) Neural Complexity is explicitly mathematically defined and ii) the hypothesis makes testable predictions. In this talk I shall report a series of experiments designed to test the Dynamic Core Hypothesis and discuss the theoretical and practical limitations of the approach. I shall go on to describe new topological approaches to functional connectivity in the brain derived from graph theory (e.g. Small World Networks) that might overcome these limitations and report some preliminary results that suggest these may have a useful role to play in the search for neural correlates of consciousness.
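Since the abstract notes that Neural Complexity is explicitly mathematically defined, it may help to state the standard formulation due to Tononi, Sporns and Edelman; the display below is given for orientation and is a textbook statement rather than the exact quantity computed in the experiments reported here. For a system X of n units, with X_j^k the j-th subset of size k and H(.) denoting entropy:

```latex
% Neural Complexity (Tononi, Sporns & Edelman): the average entropy of
% subsets of each size k, compared against a proportional (k/n) share of
% the whole system's entropy. It is high when the system is at once
% differentiated (independent subsets) and integrated (shared information).
C_N(X) \;=\; \sum_{k=1}^{n} \left[\, \bigl\langle H\bigl(X_j^k\bigr) \bigr\rangle_j \;-\; \frac{k}{n}\, H(X) \,\right]
```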
Adrian Burgess is Subject Leader in Psychology at Aston University and a former President of the British Psychophysiology Society and the British Association for Cognitive Neuroscience. He gained a degree in Experimental Psychology from Oxford University and went on to qualify as a Clinical Psychologist at the University of Surrey. After a brief period in clinical practice he returned to academia and obtained a PhD from Charing Cross & Westminster Medical School (University of London), where he worked as a lecturer, and then served as Senior Lecturer at Imperial College. He obtained a chair in Psychology at Swansea University in 2005 and moved to Aston in 2008.
Colour Categories in Infancy and Early Childhood
4 February 2009
Dr. Anna Franklin,
Surrey Baby Lab, University of Surrey, UK.
The origin and nature of colour categories in language and cognition has been the concern of researchers from a range of disciplines such as psychology, anthropology, cognitive science, linguistics and philosophy for many decades. One major issue is whether the colour spectrum is arbitrarily carved up into categories, or whether there are universal constraints on where these categories form. In support of the argument that there are constraints on how language categorises colour, there is converging evidence for categorical responding to colour in pre-linguistic infants. During this talk I will present both behavioural and neuro-physiological evidence for categorical responding to colour in infancy. I will also present a series of studies that have investigated how colour categories are lateralised in the human brain across development. This research finds a right-to-left hemisphere switch in categorical perception of colour that occurs around the time of colour term acquisition. Implications for the debate about the interaction between perceptual and linguistic colour categories are discussed. The findings are also related to the wider debate about the interaction of language and cognition across development.
Dr. Anna Franklin is a Lecturer in the Psychology Department at the University of Surrey. She set up Surrey Baby Lab during her PhD to investigate whether infants categorise colour. Since then her research has investigated the development of colour perception and cognition in infancy and early childhood using behavioural, eye-tracking and neurophysiological techniques. A recent research project has also investigated colour perception in children with Autism Spectrum Disorders. Her research has been funded by a variety of sources including the ESRC.
Visual Information and Conscious Perception
11 February 2009
Prof. Philippe G. Schyns
Centre for Cognitive Neuroimaging, University of Glasgow
When people consciously perceive visual stimuli, they consciously perceive aspects of the outside world. When scientists consider the problem of conscious perception, they must therefore understand the outside-world visual information that is consciously perceived. Yet the visual information underlying conscious experience of a stimulus has remained a challenging philosophical and empirical problem. Using the Bubbles technique, we uncovered this information content in observers who consciously perceived each interpretation of the ambiguous Dali painting "Slave Market with Disappearing Bust of Voltaire." For each individual observer, we isolated the stimulus features underlying their overt judgments of the input as "the nuns" and "Voltaire" (i.e. the two possible perceptions of the ambiguous painting). Every 2 ms between stimulus onset and overt response, we derived the sensitivity of the observer's oscillatory activity (in the theta, alpha and beta bandwidths) to these features. Then, in each bandwidth, we estimated the moments (between stimulus onset and perceptual judgment) when perception-specific features were maximally integrated, corresponding to perceptual moments. We show that centro-parietal beta oscillations support the perceptual moments underlying the conscious perception of the nuns, whereas theta oscillations support the perception of Voltaire. For both perceptions, we reveal the specific information content of these perceptual moments.
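For readers unfamiliar with Bubbles, its core logic is reverse correlation: reveal random parts of a stimulus through Gaussian apertures and ask which revealed regions predict a given percept. The sketch below is a toy illustration of that logic with a simulated observer; the stimulus, the "diagnostic" region and all parameters are invented for the example, not taken from the study described above.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64            # stimulus size (hypothetical)
n_trials = 2000
sigma = 5.0           # width of each Gaussian aperture ("bubble")

yy, xx = np.mgrid[0:H, 0:W]   # coordinate grid for building apertures

def bubble_mask(n_bubbles=10):
    """Random mask: a sum of Gaussian apertures, clipped to [0, 1]."""
    mask = np.zeros((H, W))
    for _ in range(n_bubbles):
        cy, cx = rng.uniform(0, H), rng.uniform(0, W)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0, 1)

# Hypothetical "diagnostic" region: the part of the image that drives percept A.
diagnostic = np.zeros((H, W))
diagnostic[20:40, 20:40] = 1

sums = np.zeros((H, W))        # masks accumulated on "percept A" trials
masks_total = np.zeros((H, W))
n_hits = 0
for _ in range(n_trials):
    m = bubble_mask()
    # Toy observer: reports percept A in proportion to how much of the
    # diagnostic region the mask reveals.
    if rng.random() < (m * diagnostic).sum() / diagnostic.sum():
        sums += m
        n_hits += 1
    masks_total += m

# Classification image: where revealed information predicted percept A.
ci = sums / max(n_hits, 1) - masks_total / n_trials
print(ci[20:40, 20:40].mean(), ci.mean())  # diagnostic region should stand out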
Prof. Philippe G. Schyns is Director of the Centre for Cognitive Neuroimaging. He researches visual cognition from computational, behavioural and brain-imaging perspectives. He is also a Fellow of the Royal Society of Edinburgh and Associate Editor of Psychological Science.
Ethics after the Fourth Revolution
25 February 2009
Prof. Luciano Floridi
Research Chair in Philosophy of Information Department of Philosophy, University of Hertfordshire
Information and Communication Technologies (ICTs) have profoundly altered many aspects of life, including the nature of education, communication, human relations, entertainment, work, health care, business, industrial production, and conflicts. Therefore, they have had a profound and widespread impact on our moral lives and hence on contemporary ethical debates. Privacy, ownership, freedom of speech, trust, responsibility, technological determinism, the digital divide, and online pornography are only some of the pressing issues that characterise the ethical discourse in information societies. They are the subject of information ethics, a new ethical theory that investigates the transformations brought about by ICTs and their implications for the future of human life and society, for the evolution of moral values and rights, and for the evaluation of agents' behaviours. In this lecture, I outline the nature and scope of information ethics and show how it should be considered a new way of approaching the moral discourse.
Luciano Floridi is Professor of Philosophy at the University of Hertfordshire, where he holds the Research Chair in Philosophy of Information, and Fellow of St Cross College, University of Oxford. He is the founder of the Oxford Information Ethics Research Group, and best known for his work on the philosophy of information and information ethics. He is currently President of the International Association for Computing and Philosophy (www.ia-cap.org) and Gauss Professor of the Academy of Sciences in Göttingen. His forthcoming books are: The Philosophy of Information (Oxford University Press); Information, a volume for the Very Short Introductions series (Oxford University Press); and the Handbook of Information and Computer Ethics (Cambridge University Press).
Culture and numerical cognition
4 March 2009
Prof. Brian Butterworth
Institute of Cognitive Neuroscience, University College London
A controversial issue in the theory of numerical cognition is the role of one aspect of culture, language, in the development of basic numerical concepts. Reports of Amazonian cultures whose languages contain few or no counting words suggest that this vocabulary is necessary for the development of concepts of exact numbers above 3. That is, to have a concept of exactly five, you need a word for five. We report evidence from two Australian indigenous cultures that suggests, on the contrary, that there is no difference between the concepts of children in these cultures and those from a purely English-speaking background. I will attempt to put these results in a broader theoretical and neuroscientific context.
Brian Butterworth is Professor of Cognitive Neuropsychology in the Institute of Cognitive Neuroscience at University College London. He taught at Cambridge for 8 years, and has held visiting appointments at Padua, Trieste, MIT and the Max Planck Institute at Nijmegen. He is currently Professorial Fellow at the University of Melbourne. He was elected Fellow of the British Psychological Society in 1993 and of the British Academy in 2002. He has been the coordinator of two European networks researching the neural basis of mathematical abilities, Neuromath: Mathematics and the Brain (2000-2003) and Numbra: Numeracy and Brain Development (2004-2007). He has published many articles and books on mathematical cognition and on topics in the neuropsychology of language. He collaborated with the legendary graphic designer Storm Thorgerson on an installation for the Millennium Dome in London on the ontogeny and phylogeny of language (Babble to Babel, 2000), and they are now working on a new brain-related project. He is currently working with Prof Bob Reeve at the University of Melbourne on the numerical abilities of indigenous children in Australia, and with colleagues around the world on the neuropsychology and genetics of mathematical abilities. A long-term project is to persuade educators and governments to recognize dyscalculia as a serious handicap that needs specialized help.
Intelligent Media: Interactive Moving-Image Narratives
11 March 2009
Dr. Marian Ursu
Department of Computing, Goldsmiths
"From the caves of Lescaux, to the next Harry Potter, man has been a storytelling animal” says McCrum in an article in the Observer in 2002 and continues, challenging: “Narrative is part of our DNA”. He is not too far from the view shared by scientists. Psychologists and neuroscientists have recently become fascinated by the human predilection for storytelling, by the fact that our brain seems to be wired to enjoy stories. Anthropologists, historians and linguists have similarly been fascinated by storytelling, one of the few traits of humankind that is truly universal across culture throughout all known history. This talk, however, takes a different but related perspective: it reflects the growing interest of computer scientists, ranging from multimedia to artificial intelligence researchers, and narratologists alike in developing new forms of communication and creative expression in the current context of digital media, more and more driven by interactivity.
This talk is about the creation of a new form of storytelling: interactive moving-image narratives, which adapt, whilst they are being told, in response to explicit or inferred interactions from the viewers, thus transforming passive audiences into active participants in the storytelling process, allowing them to influence the narrations they receive, to reshape them, and to establish new forms of social communication. The focal point will be the results of a joint European research endeavour which devised and developed generic (production-independent) technology for the creation and delivery of interactive moving-image narratives - dubbed ShapeShifting Media Technology - and validated the technology with a number of interactive moving-image productions in traditional genres such as drama, documentary and news, realised in collaboration with national broadcasters such as the BBC, YLE (Finland) and SVT (Sweden). A brief demo of the technology, accompanied by snippets of the productions, will illustrate the talk.
Dr. Marian Ursu is a Senior Lecturer in the Department of Computing at Goldsmiths and leads the Narrative and Interactive Media research group. He and his team have pioneered research in computational representations for interactive and narrative media. Amongst other things, he is one of the main architects of the ShapeShifting Media Technology and the creator of the Narrative Structure Language on which it is founded. Apart from further developing and exploiting ShapeShifting Media, his current research concerns the development of artificial intelligence techniques for image-based communication technologies which aim to remove barriers of space and time between groups of people while maintaining the naturalness of the communication.
Cannabis – Drug of abuse or medicinal use?
8 October, 4pm
Ben Whalley
Lecturer Clinical Pharmacy, University of Reading
Recent changes to the legal status of cannabis and the licensing of cannabis-based medicines have further raised public awareness of the drug. However, a great deal of the information advocating medicinal use or highlighting cannabis abuse appears contradictory.
In recent years, it has been found that our bodies contain cannabinoid receptors: proteins to which the active components in cannabis ('cannabinoids') bind in order to exert their effect. Moreover, our bodies also produce their own cannabinoids (the endocannabinoids), which perform important modulatory and control functions within the central nervous system. The apparent ubiquity of the endocannabinoid system has led to its implication in a plethora of higher CNS functions (appetite, reward, cognition, learning and memory) and pathophysiological states (e.g. epilepsy, pain, schizophrenia, depression).
The above should also be considered alongside the fact that herbal cannabis contains a large number of different cannabinoids (>60) in addition to many other pharmacologically active non-cannabinoid components (>400). The amounts of such components also vary considerably with strain, storage and consumption route, making it unsurprising that mixed messages about risks, benefits and use appear within the scientific community and popular media. Moreover, cannabinoid and non-cannabinoid components often exert antagonistic and synergistic effects upon one another, further complicating our understanding of the mechanisms underlying overall effects upon the CNS.
This presentation will discuss some of the science underlying these mixed messages in the context of research currently being conducted at the University of Reading that aims to assess the therapeutic potential of individual cannabinoids to act as anti-epileptic agents.
Dr Ben Whalley is currently a Lecturer in Clinical Pharmacy at the School of Chemistry, Food Biosciences & Pharmacy (Pharmacy), University of Reading (2005-present). Following a B. Pharm degree from the University of London, he practised as a pharmacist for a number of years before undertaking a PhD (University of London) examining developmental changes underlying seizure susceptibility in vitro. His current research interests include cannabinoid pharmacology (especially cannabinoid modulation of hyperexcitability states), in vivo and in vitro models of epileptiform activity/epilepsy and the development of multi-site electrophysiological methods for use in such contexts.
A Critical Overview of Evolutionary Algorithms for Music
22 October, 4pm
Dr. Colin Johnson
Senior Lecturer, Department of Computing, University of Kent
Evolutionary algorithms, computer algorithms inspired by biological theories of evolution, have been applied in many ways to music technology and composition. In this talk I will discuss three aspects of this. Firstly, I will survey the origins of this area, focusing particularly on the "prehistory" of the subject, looking at trends in music that led to the application of these techniques. Secondly, I will discuss some recent work at Kent where we have applied these ideas to the synthesis, recognition, and analysis of timbre in music. Finally, I shall critically examine the relevance of the concept of "fitness" as applied in this area, to provoke a debate on whether the concepts of fitness and creative exploration are compatible.
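As a concrete illustration of the "fitness" concept the talk critiques, here is a minimal evolutionary loop in Python. The 16-partial "timbre" target, the population size and the mutation rate are all invented for the sketch and are not drawn from the Kent work; the point is only that everything hinges on an explicit, fixed fitness function, which is exactly what becomes problematic for open-ended creative exploration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "timbre" target: a fixed spectral envelope over 16 partials.
target = np.abs(np.sin(np.linspace(0, np.pi, 16)))

def fitness(genome):
    """Higher is better: closeness of a candidate envelope to the target."""
    return -np.linalg.norm(genome - target)

pop = rng.random((50, 16))                    # population of candidate envelopes
for generation in range(200):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-25:]]   # truncation selection: keep fitter half
    offspring = parents[rng.integers(0, 25, size=50)]   # clone parents at random
    pop = np.clip(offspring + rng.normal(0, 0.05, (50, 16)), 0, 1)  # mutate, bound

best = pop[np.argmax([fitness(g) for g in pop])]
print(fitness(best))   # approaches 0 as the population converges on the target
```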
Colin Johnson is Senior Lecturer in Computer Science at the University of Kent, and Deputy Director of the Kent Centre for BioMedical Informatics. His research interests include the interaction of computing and the natural sciences, and technology for music and media.
Bored, tired? What’s left? The curious relationship between alertness and awareness of left space.
29 October, 4pm
Tom Manly
Medical Research Council Cognition and Brain Sciences Unit
Unilateral spatial neglect is a dramatic and surprisingly common consequence of stroke in which people have difficulty noticing, acting on or even thinking about information from one side of space. Research has linked the persistence of this debilitating condition with problems faced by patients in maintaining an alert, ready-to-respond state. Here, I outline work that began with this clinical condition but which led us to examine the relationships between alertness and spatial bias in other groups and in the healthy population – with somewhat surprising results!
Tom Manly is a clinical psychologist and researcher with the Medical Research Council Cognition and Brain Sciences Unit in Cambridge. In addition to peer reviewed publications and book chapters on neuropsychology, attention and executive function he is also the author of the Test of Everyday Attention for Children (TEA-Ch). In 2007 he was awarded the Elizabeth Warrington Prize by the British Neuropsychological Society and in 2008 was awarded the Spearman Medal for an outstanding contribution to psychological literature by the British Psychological Society.
Music and Autism
12 November, 4pm
Dr Pam Heaton
Reader in Psychology, Goldsmiths, University of London
Autism is a neurodevelopmental disorder characterised by difficulties in social and communicative domains. However, a striking feature of the disorder is that many individuals possess unusually good abilities within the domains of music and art. My talk will provide an overview of the experimental literature on music and autism and research findings will be discussed within the context of the disabilities associated with this disorder.
Dr Pamela Heaton is a Reader in Neurodevelopmental disorders in the Psychology Department at Goldsmiths College. Her primary interest is in perception and cognition in Autism Spectrum Disorders and her doctoral work into musical cognition in autism was awarded the British Psychological Society prize for Outstanding Doctoral Research contributions to Psychology in 2004. Her most recent work has investigated absolute pitch across music and language domains and perception of prosody in autism.
Reflexive Monism and the psychophysical universe
19 November, 4pm
Prof. Max Velmans
Emeritus (Psychology, Goldsmiths)
Classical dualist ways of viewing the relation of consciousness to the brain split human nature in ways that make it difficult to put it back together again. However, materialist reductionism conflicts with the evidence of everyday conscious experience. Neither approach provides a satisfactory understanding of the causal interactions between consciousness and brain. Reflexive monism provides a non-dualist, non-reductionist alternative, treating consciousness and brain as two intimately related aspects of psychophysical mind. The human mind is, in turn, embodied and embedded in a wider psychophysical universe - a view that has intriguing convergences with the views both of Gustav Fechner, the founder of psychological science, and of Wolfgang Pauli, one of the founders of quantum mechanics.
Max Velmans is currently Emeritus Professor at Goldsmiths, and Visiting Professor of Consciousness Studies at the University of Plymouth. He has over 90 publications on consciousness, including Understanding Consciousness (2000), which was shortlisted for the British Psychological Society book of the year award in 2001 and 2002. Other publications include The Science of Consciousness: Psychological, Neuropsychological and Clinical Reviews (1996), Investigating Phenomenal Consciousness: New Methodologies and Maps (2000), How Could Conscious Experiences Affect Brains? (2003), and the Blackwell Companion to Consciousness (2007). He was a co-founder and, from 2004 to 2006, Chair of the Consciousness and Experiential Psychology Section of the British Psychological Society.
The comparative cognition of spatial organisation: working memory and vision
26 November, 4pm
Dr Carlo De Lillo
Senior Lecturer, School of Psychology, University of Leicester
Two converging lines of research will be presented which focus on the comparative cognition of spatial organisation in working memory and vision. Results indicate that a tendency to spontaneously use spatial constraints for the organisation of search is a predictor of interspecies differences in working memory. Systematic investigations of serial recall show that human working memory benefits from the encoding of series of items segregated by serial-spatial structures and that these benefits are mediated by frontal and executive functions. Comparative analyses of visual organisation tasks also point to qualitative differences between humans and other species which do not seem to be easily accountable for by peripheral and other low-level cognitive functions. On the basis of these results, it will be suggested that the assessment of the ability to use structure to minimise the demands of cognitive tasks may provide more insights concerning the emergence of human cognitive sophistication than comparisons based on measures of mental capacity and speed of processing.
Dr. Carlo De Lillo is a senior lecturer in psychology at the University of Leicester. He obtained his first degree in Experimental Psychology at the University of Rome and his PhD at the University of Edinburgh in 1994. He has research interests in the cognitive bases of serial order in behaviour, search strategies, spatial memory and perceptual grouping from comparative, developmental and neuropsychological perspectives.
Collecting or classifying?
3 December, 4pm
Prof. Francis Rousseaux and Prof. Alain Bonardi
IRCAM-CNRS, Paris and Université de Paris 8
Modern Information Science deals with tasks which include classifying, searching and browsing large numbers of digital objects. The problem today is that our computerized tools are poorly adapted to our needs as they are often too formal: we illustrate this matter with the example of multimedia collections. We then propose a software tool, ReCollection, for dealing with digital collections in a less formal and more sustainable manner. Finally, we explain how our software design is strongly backed up by both artistic and psychological knowledge concerning the ancient human activity of collecting, which we will see can be described as a metaphor for categorization in which two irreducible cognitive modes are at play: aspectual similarity and spatio-temporal proximity.
Francis Rousseaux is Professor at the Université de Reims. He also coordinates a European IST project at IRCAM in Paris on behalf of the CNRS. His recent research focuses on the topics of Computing and Decision Making, Music and New Technologies, and Epistemology and Computational Location. His recent publications include: "ReCollection: a Disposal/Formal Requirement-Based Tool to Support Sustainable Collection Making," "Towards a Collection-Based Knowledge Representation: Application to Geo-political Risks and Crisis Management," and "Taking Lessons from Cognitive Psychologists to Design our Content Browsing Tools."
Alain Bonardi is both a researcher and an artist. He has been exploring the performing arts, mainly opera but also theatre, from the perspective of computer science, especially artificial intelligence. He is also a composer. A senior lecturer at Paris 8 University, Alain holds an engineering diploma from the École Polytechnique and the École Nationale Supérieure des Télécommunications de Paris, and a PhD in musicology from Paris 4 University. He has been on CNRS secondment at IRCAM since September 2006. His research at IRCAM deals on the one hand with performance classification by artificial intelligence techniques (Real-Time Team) and on the other with the description of signal-processing patches (European project CASPAR).
Evolutionary Robotics: Philosophy of Mind using a Screwdriver
Wednesday 23 January
Inman Harvey
Senior Lecturer Informatics, University of Sussex, UK.
The design of autonomous robots has an intimate relationship with the study of autonomous animals and humans -- robots provide a convenient puppet show for illustrating current myths about cognition. Like it or not, any approach to the design of autonomous robots is underpinned by some philosophical position in the designer. Whereas a philosophical position normally has to survive in debate, in a project of building situated robots one's philosophical position affects design decisions and is then tested in the real world -- doing philosophy of mind with a screwdriver. I shall discuss, with examples, whether and how Evolutionary Robotics might lead to creating robots that really want to do things -- as opposed to 'merely' going through the motions; and lead up to the question(s) of robot consciousness.
Inman Harvey is a founder-member of the EASy (Evolutionary and Adaptive Systems) group at Sussex, which is the largest group of researchers in the world into aspects of Artificial Life. He helped to lay the foundations for the Evolutionary approach to Robotics in the early 1990s. Current interests include Gaia Theory, Autopoiesis, homeostasis, Dynamical Systems approaches to understanding cognition, and active control of (semi-)autonomous gliders and kites for energy extraction.
Subjective measures of unconscious knowledge
Wednesday 30 January
Zoltan Dienes
Reader in Experimental Psychology, University of Sussex, UK
I will argue, based on higher-order thought theory, that subjective measures are the best way of determining the conscious status of knowledge. Using the knowledge gained in implicit learning paradigms as an example, I will show how confidence ratings can be used to measure the amounts of conscious and unconscious knowledge expressed in judgments, and how verbal ratings rather than wagering do a better job in assessing the relevant higher-order thoughts. I will show how subjective measures can be used to assess the conscious status of the structural knowledge leading to judgments, and how Jacoby's methods show only the conscious status of judgment knowledge and not structural knowledge. Finally I will argue that the interesting divide in nature (in the case of implicit learning) is probably between the conscious and unconscious status of structural knowledge and not judgment knowledge.
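One common way such confidence ratings are used is the guessing criterion: if accuracy is above chance on trials where participants claim to be purely guessing, the knowledge expressed is taken to be unconscious. The sketch below illustrates that logic on simulated data; the trial counts and accuracy figures are invented for the example, not taken from any study discussed in the talk.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(2)
n = 400

# Hypothetical trial-by-trial confidence ratings and correctness.
confidence = rng.choice(["guess", "sure"], size=n)
# Toy generative assumption: accuracy is above chance even on "guess" trials.
p_correct = np.where(confidence == "guess", 0.58, 0.75)
correct = rng.random(n) < p_correct

guess_trials = correct[confidence == "guess"]
# Guessing criterion: above-chance accuracy while claiming to guess
# indicates knowledge the participant is not conscious of having.
test = binomtest(int(guess_trials.sum()), len(guess_trials),
                 p=0.5, alternative="greater")
print(f"'guess' accuracy = {guess_trials.mean():.2f}, p = {test.pvalue:.4f}")
```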
Zoltan Dienes studied natural sciences at Cambridge and experimental psychology at Macquarie and Oxford Universities, and has been at the University of Sussex since 1990. He has been a Reader since 1997, and his main research interest is implicit learning. He co-authored (with Dianne Berry) a book on implicit learning in 1993 and has a book on scientific and statistical inference coming out next year.
The relation of neural oscillations to behavioural performance
Wednesday 20 February
Joachim Gross
Professor of Psychology, University of Glasgow
Multiple repetitions of the same experimental trial are typically associated with fluctuations in behavioural performance in the individual subject. The neural mechanisms underlying this variability remain largely unknown. A number of recent studies identified links between neural oscillations (as measured with MEG/EEG) in different frequency bands and behavioural performance in tasks investigating conscious perception and speeded reaction. I will present new analysis techniques for the investigation of neural oscillations and their interactions in humans and new experimental evidence supporting a functional role of neural oscillations for behavioural performance.
Joachim Gross obtained his PhD at the Institute of Medicine (IME) at the Research Centre Jülich, Germany, and the MPI for Cognitive Neuroscience in Leipzig. His PhD was on linear and nonlinear transformations of neuroelectromagnetic signals. He was a postdoc and research group leader in the Department of Neurology, Düsseldorf, before he was appointed Professor of Psychology at Glasgow University and a member of the steering group for establishing the new Centre for Cognitive Neuroimaging (CCNi). His main research interest is the non-invasive investigation of neural oscillations in humans, including the development of appropriate analysis methods and experimental paradigms for studying their functional role.
The neuroscience of 'tricks of the light'
Wednesday 27 February
Tom Troscianko
Professor of Psychology, University of Bristol, UK
The big problem for visual systems is to signal the properties of relevant objects in the field of view, and to “ignore” the variable nature of the light falling on the objects. There is evidence for the optimisation of colour vision for achieving these kinds of aims, for specified important tasks, such as foraging for food. However, it is an open (and interesting) question whether, and when, aspects of illumination such as shadows are ignored in human vision. I will describe a series of experiments and computational studies which address this issue, and conclude that the issue is much more rich than was previously assumed.
Tom Troscianko originally studied Physics, and became a research scientist at Kodak Ltd, studying colour vision and photography. He did a PhD in the Department of Optometry and Visual Science at The City University, London, after which (in 1978) he came to Bristol University to work with Richard Gregory. He became interested in the way in which vision encodes the properties of the world around us, and spent periods studying this from a clinical perspective (at Tübingen University Eye Hospital) and in computer science (at the IBM UK Scientific Centre, Winchester). He moved to the Psychology Department at Bristol University in 1988, where he carried out a variety of projects on vision and complex scenes. In 2000 he became Professor of Psychology at the School of Cognitive and Computing Sciences at the University of Sussex. In 2002 he returned to Bristol University as Professor of Psychology and founded the Cognition and Information Technology Research Centre (COGNIT), which promotes interdisciplinary research spanning the cognitive, computing, and biological sciences. He currently holds grants from EPSRC, BBSRC, and industry, to investigate projects as diverse as the ecology of vision, the use of CCTV cameras in our cities, the safety of railway signals, and the construction of a self-aware robot. He is Executive Editor of the journal Perception.
The Neuropsychology of Semantic memory
Wednesday 5 March
Elizabeth Warrington
Professor of Neuropsychology, Institute of Neurology, University College London, UK
The concept of semantic memory can be used to encompass that body of knowledge held in common by members of a cultural or linguistic group. Semantic memory processes, stores and retrieves information about the meaning of words, concepts and facts. The impairment of semantic memory can be the first and only sign of cognitive impairment in patients who have progressive degenerative conditions. These semantic memory deficits can, at least in the early stages of illness, be remarkably circumscribed.
Current debate is centred on two main issues: category specificity and modality specificity. Many accounts of category specificity have focused on the double dissociation between knowledge of living things and man-made objects. Evidence that category-specific phenomena may be both more fine-grained and broader in range will be reviewed. The significance of semantic memory impairments confined to either the verbal or visual domain will be discussed. It will be suggested that we have evolved separable databases for our visual and verbal knowledge of the world.
I have been associated with the Institute of Neurology and the National Hospital for Neurology and Neurosurgery since 1954 when I obtained a research assistant position. In 1960 I took over responsibility for the clinical neuropsychological service to the hospital. I was appointed to a personal chair in Clinical Neuropsychology in 1982. When I retired from the Hospital service in 1996, I joined the Dementia Research Centre in an honorary capacity. During the last 50 years I have been fortunate in having excellent opportunities to further my research interests in varied cognitive domains including memory, language and perception.
Provenance: an Open Approach to support Workflow Inter-Operability
Wednesday 12 March
Luc Moreau
Professor of Computer Science, University of Southampton, UK
Over the last few years, e-Science and e-Business have emphasized the need to expose existing and new procedures as services, so that they can be composed in sophisticated functionality for end-users. In particular, workflows have emerged as a paradigm for representing and managing complex distributed scientific computations. To some extent, with workflow technology, e-scientists are today provided with the means to express and run their experiments.
However, while workflow technology is a crucial breakthrough, it is only one of the tools required to support the scientific methodology. As important to domain scientists (and very often ignored by computer scientists!) is the ability to describe past experiments, to reproduce and verify them, and to understand differences between executions. The problem is further compounded by the fact that workflow systems will inevitably be heterogeneous, and multiple workflow technologies are bound to co-exist (e.g. Taverna, Triana, Pegasus, Swift, Kepler).
Provenance (also known as lineage, pedigree or audit trail) is crucial to allow scientists to implement their scientific methodology fully in silico. The provenance of a data product is defined as the process that led to that data product. While provenance technology has traditionally been embedded in execution environments (workflow systems, operating systems, specific applications), we have taken a radically different view by seeing a provenance management system as a distinct, first-class component of any computational environment in which past executions should be inspectable. Applications of our approach include not only e-science but also business, where past processes have to be audited.
By taking this view, and separating provenance from workflow, we were able to identify the essence of provenance and to propose an architecture for provenance management systems, which allow past processes to be described, even when multiple execution technologies are involved. In this talk, I present the principles of provenance, its architectural design, its implementation, and integration with several workflow technologies. We have successfully deployed the approach in multiple application domains, including astronomy, aerospace engineering, and medicine.
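To make the definition concrete, a minimal sketch of provenance as "the process that led to a data product" might record, for each product, the process that produced it and the products that process consumed, so a past execution can be walked back regardless of which workflow engine ran it. The class and function names below are purely illustrative and are not the architecture presented in the talk.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessRecord:
    actor: str                          # which service or workflow step ran
    inputs: list = field(default_factory=list)   # data products it consumed

@dataclass
class DataProduct:
    name: str
    produced_by: "ProcessRecord | None" = None   # None for raw inputs

def provenance(product, depth=0):
    """Walk back through the process records that led to a data product."""
    if product.produced_by is None:
        print("  " * depth + f"{product.name} (raw input)")
        return
    step = product.produced_by
    print("  " * depth + f"{product.name} <- {step.actor}")
    for parent in step.inputs:
        provenance(parent, depth + 1)

# Hypothetical two-step astronomy pipeline, walked back from its final product.
raw = DataProduct("telescope_image.fits")
calibrated = DataProduct("calibrated.fits", ProcessRecord("calibrate", [raw]))
catalogue = DataProduct("catalogue.csv", ProcessRecord("extract_sources", [calibrated]))
provenance(catalogue)
```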
Professor Moreau is Professor of Computer Science in the Intelligence, Agents, Multimedia (IAM) group, School of Electronics and Computer Science, at the University of Southampton. His research is concerned with large-scale open distributed systems not subject to centralised control; examples include the Internet, the World Wide Web, the Grid and pervasive computing environments.
Robot Ethics: Fantasy or Necessity?
Wednesday 19 March
Steve Torrance
Considered as tools, robots don't really raise special ethical issues. But when we consider them as potential persons, we may have to face up to some more radical ways of understanding robot ethics. First, we need to consider our potential responsibilities towards the robots themselves. Second, we need to think about the potential responsibilities of robots towards us (and no doubt towards each other).
Some would say that to talk of robots as moral agents in their own right is to engage in fantasy. On this view non-organic artificial agents will never have the kinds of autonomy or consciousness that would be necessary to qualify them as members of the moral community – either as targets of moral concern from humans, or as responsible moral actors in their own right.
But there is a contrasting view. The likely multiplication of autonomous robots – in industrial production, on battlefields, in public places, in homes, and so on – means that within a short while they may be making decisions and occupying roles which would certainly have deep moral import if humans were taking such decisions and roles. Perhaps, on this view, there is a need to develop, not just external controls on robot actions, but internal moral self-direction in the robots themselves. If this were so, then building ethical responsibility into robots will be not just a desideratum but a necessity. This could require radical rethinking of social relations.
In this talk I'll present the two sides of this picture and try to unravel some of the conceptual complexities in this area.
Steve Torrance is Emeritus Professor in Cognitive Science at Middlesex University, and a visiting Senior Research Fellow at the University of Sussex. He teaches part-time at Sussex and at Goldsmiths. He has interests in computational and enactive approaches to cognitive science and consciousness, and he has a particular interest in the conceptual and ethical foundations of artificial personhood. He has recently edited journal issues on machine consciousness, ethics and artificial agents, and enactive experience.
Spring Workshop close: Special lecture
Wednesday 23 April
Professor Juri Kropotov
Professor Juri Kropotov (Director of the Neurobiology of Action Programming Laboratory, Institute of the Human Brain, Russian Academy of Sciences, and Professor II, Institute of Psychology, Norwegian University of Science and Technology) will give a special Whitehead lecture and Spring Workshop to close the 2007/8 Whitehead lecture series.
Professor Kropotov will give two lectures on Wednesday 23 April, one for a more specialist audience on "ERPs and their independent components" at 2 pm, and a second for a cross-disciplinary audience on "Normative EEG databases and the assessment and rehabilitation of brain dysfunction" at the usual time of 4 pm. Both lectures will take place in the Ben Pimlott Lecture Theatre, Goldsmiths.
Everyone is very welcome to attend the lectures and a drinks reception afterwards.
The lectures are a taster of some of the highlights of Professor Kropotov's Workshop. If you are interested in attending the workshop, there are still a few places available, and you should contact Tony Steffert (t.steffert) to arrange registration as soon as possible (there is a special rate for Goldsmiths' staff, and an even more favourable rate for Goldsmiths' PhD students who can make a case that the workshop is relevant to their studies).
An Outsider's View of the Self and Certainty
Wednesday 10 October, 4pm
David Malone
Director, Because you think TV
In this talk David will offer a series of questions that have been the basis for nine films over the last ten years, culminating with Dangerous Knowledge, shown on BBC television this year. Like the films, each question led to the next: beginning with wondering how to describe the relationship between Consciousness and the Self, and ending with wondering whether the modern Self's obsession with Proof and Certainty is neither healthy nor, perhaps, inevitable. Along the way the talk will touch upon the phenomenon of artists and scientists who hear voices, robots who believe in God, Greg Chaitin's views on creativity and computation, and Wolfram's work on cellular automata. David's job as a documentary film maker is to find a way of posing or framing a question that draws together views which at first might seem disparate and unexpected. What comes out of this process is rarely an answer, but hopefully deeper, richer questions.
David Malone's academic background is in hominid evolution. He began his film-making career in the BBC's Science Department in 1986. During his time there he established the record for bringing Tomorrow's World the closest it ever came to not making it to transmission. He made films on subjects ranging from the Flow of Time to the legacy of Darwinism in modern thought. More recently he has made several series of films that have looked at questions of Consciousness, the Self and the Soul, as well as arguments surrounding the work of Kurt Gödel, whether Computation can ever be Conscious, how the mind models other people, the limits of Certainty and the source of Creativity. His thinking has been influenced by, amongst others, Roger Penrose and Greg Chaitin, Louis Sass and Iain McGilchrist.
Integrating EEG and fMRI on a trial-by-trial level: Mission impossible?
Wednesday 17 October, 4pm
Stefan Debener
Senior Clinical Scientist at the MRC Institute of Hearing Research, Southampton
Little is yet known about the relation between the scalp-recorded event-related EEG and the fMRI BOLD response. This holds true in particular for brain activation related to higher order cognitive processing. Whereas previous research focused on the event-related potential (ERP), an alternative approach will be presented integrating EEG and fMRI on a trial-by-trial basis. The basic idea is to apply independent component analysis (ICA) to disentangle otherwise overlapping EEG activations. An example will be presented showing that ICA-filtered single-trial EEG amplitudes not only predicted the subjects' reaction times, but also systematically correlated with the fMRI BOLD response. The potential of simultaneous EEG-fMRI studies will be discussed with regard to an event-related brain dynamics view of cognitive processes.
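A minimal sketch of the trial-by-trial logic, using FastICA from scikit-learn on simulated epoched EEG: unmix the data into components, take one component's amplitude on each trial, and correlate it with behaviour. All data, dimensions and the analysis window here are hypothetical; the study described above used its own ICA pipeline and also correlated these single-trial amplitudes with the fMRI BOLD response, which this sketch omits.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 200, 32, 100

# Simulated epoched EEG (trials x channels x time) and reaction times in ms.
eeg = rng.normal(size=(n_trials, n_channels, n_times))
rt = rng.normal(500, 80, size=n_trials)

# Fit ICA with channels as features, concatenating trials and time points.
X = eeg.transpose(0, 2, 1).reshape(-1, n_channels)
ica = FastICA(n_components=15, random_state=0, max_iter=1000)
ica.fit(X)

# Unmix every trial, then take one component's mean absolute amplitude
# in a post-stimulus window of interest (samples 40-60, arbitrary here).
demeaned = eeg - ica.mean_[None, :, None]
sources = np.einsum('kc,tcs->tks', ica.components_, demeaned)
amps = np.abs(sources[:, 0, 40:60]).mean(axis=1)

# Trial-by-trial association between ICA-filtered EEG amplitude and behaviour.
r, p = pearsonr(amps, rt)
print(f"component 0: r = {r:.2f}, p = {p:.3f}")
```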
Stefan Debener is a psychologist by training and received his PhD in 2001 from the University of Dresden, Germany. He is currently NHS Senior Clinical Scientist at the MRC Institute of Hearing Research, Southampton, and holds honorary Readerships in the Schools of Medicine and Psychology, University of Southampton. He aims at bridging the gap between computational neuroscience and cognitive psychophysiology. His major contributions are in the field of advanced EEG analysis and multi-modal brain imaging, which includes the direct integration of EEG, fMRI and behaviour. He is also interested in multi-sensory processing, temporal attention, and cortical plasticity before and after cochlear implantation.
Scientific Art or Artistic Science?
Wednesday 24 October, 4pm
Nicholas Wade
Professor of Visual Psychology, University of Dundee
The study of natural phenomena can be pursued with regard to their representation (art) or their interpretation (science), and in previous centuries the same people often engaged in both endeavours. One consequence of the subsequent division, and the attendant specialisation, is that histories of art and science are surveyed by those steeped in one tradition or the other. This has resulted in the neglect of areas of common enterprise, like vision. Visual artists and visual scientists are often concerned with examining the same phenomena, but the methods they adopt differ radically. Scientists try to discover new facts regarding old phenomena. New phenomena are rarely discovered, but scientists do determine different conditions under which old ones operate (perhaps using some novel apparatus for generating stimuli). Artists are concerned with arranging phenomena in a manner that has not been seen before, or perhaps with increasing the spectators’ awareness of the phenomena.
Often this involves complicating the effects rather than simplifying them. Thus, scientists rarefy and isolate phenomena to control them in the laboratory, whereas artists embrace complexity and manipulate phenomena intuitively. The differences in method have resulted in divergent vocabularies for describing similar visual effects, and the two approaches can appear more disparate than their phenomenal commonality would suggest. Not only have artists provided more engaging examples of visual spatial phenomena, but they have also enhanced their range in ways that are scientifically novel. The opposite argument applies to motion perception, where scientists developed techniques that were eagerly adopted in the arts. The interactions between art and both spatial and motion vision were influenced by instruments invented in the early nineteenth century for manipulating the representation of space and time – the stereoscope and the stroboscopic disc. Art and science can provide complementary approaches to the study of vision.
Nicholas Wade's research interests concern the representation of space and motion in human vision, the history of research on visual phenomena, and the relationship between visual science and visual art. He has written several books on these topics, among which are: The Art and Science of Visual Illusions (1982), Brewster and Wheatstone on Vision (1983), Visual Allusions: Pictures of Perception (1990), Psychologists in Word and Image (1995), A Natural History of Vision (1998), Destined for Distinguished Oblivion: The Scientific Vision of William Charles Wells (1757-1817) (2002), Perception and Illusion: Historical Perspectives (2005), The Moving Tablet of the Eye: The Origins of Modern Eye Movement Research (with Ben Tatler, 2005), and Circles: Science, Sense and Symbol (2007).
Cybernetic investigation of structure and function in the brain
Wednesday 31 October, 4pm
Slawek Nasuto
Reader in Cybernetics, University of Reading
There is a growing body of empirical evidence that information processing in the brain depends not only on the dynamics of individual neurons but also on their structure and the structure of their networks. Moreover, fully understanding the brain's operation may require integrating dynamic processes at different spatial and temporal scales. Our group has been investigating information processing in the brain along research directions consistent with this view. The talk will present our attempts at utilising such principles in the construction of artefacts interfacing with the nervous system.
Dr Slawomir J Nasuto is a Reader in Cybernetics in the School of Systems Engineering, University of Reading. Before joining Reading in 2000 he was a member of the Computational Neuroanatomy group at the Krasnow Institute for Advanced Studies, George Mason University, USA. His long-term interest is in finding out how cognitive processes emerge from the dynamics of brain activity. His recent research projects include the use of long-range synchronisation to investigate memory processes with electroencephalogram (EEG), EEG-based Brain-Computer Interfaces, classification of single motor unit action potentials from surface electromyographs (EMG), automated reconstruction of neuronal structure from optical microscopy stacks, and the construction of an animat (a robot controlled by a culture of biological neurons) together with investigation of its computational capacity. He also continues research into the analysis, modelling and application of swarm intelligence and distributed intelligent systems.
Cancelled: Evolutionary Robotics: Philosophy of Mind using a Screwdriver
Wednesday 14 November, 4pm
Inman Harvey
Senior Lecturer in Informatics, University of Sussex, UK.
Machine Learning and Games
Wednesday 21 November, 4pm
Simon Lucas
Reader in Computing, University of Essex
One of the key problems in AI is how an agent can best learn in a largely unsupervised manner via interactions with its environment. Games provide an excellent way to test approaches to this problem. They provide ready made environments of variable complexity, offering dynamic and unpredictable challenges. They enable the emergence of open-ended intelligent behaviour and provide natural metrics to measure the success of that behaviour.
Two main ways to train agents given no prior expert knowledge are temporal difference learning and evolution (or co-evolution). We'll study ways in which these methods can train agents for games such as Othello and Ms Pac-Man. The results show that each method has important strengths and weaknesses, and understanding these leads to the development of new hybrid algorithms such as EvoTDL, where evolution is used to evolve a population of TD learners. Examples will also be given of where seemingly innocuous changes to the learning environment have profound effects on the performance of each algorithm. The choice of architecture (e.g. type of neural network) is also critical.
The main conclusion is that these are powerful methods capable of learning interesting agent behaviours, but there is still something of a black art in how best to apply them, and there is a great deal of scope for designing new learning algorithms. The talk will also include live demonstrations.
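As a concrete illustration of the temporal difference idea mentioned above, here is a minimal sketch of TD(0) value learning on a toy chain "game"; the states, rewards and parameters are invented for exposition and are not the Othello or Ms Pac-Man setups of the talk:

```python
# Toy TD(0) value learning on a chain: states 0..5, random left/right moves,
# and only reaching the terminal state on the right pays a reward. The core
# update rule is  V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
import random

N_STATES, ALPHA, GAMMA = 6, 0.1, 0.95
V = [0.0] * (N_STATES + 1)                 # value estimate per state

for episode in range(5000):
    s = 0
    while s < N_STATES:                    # s == N_STATES is terminal
        s_next = max(0, s + random.choice([-1, 1]))
        r = 1.0 if s_next == N_STATES else 0.0
        target = r + (GAMMA * V[s_next] if s_next < N_STATES else 0.0)
        V[s] += ALPHA * (target - V[s])    # TD(0) update
        s = s_next

print([round(v, 2) for v in V[:N_STATES]])  # values rise toward the goal
```

In a hybrid of the EvoTDL kind described above, an evolutionary algorithm would search over a population of such learners (or of their function approximators) rather than a single value table.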
Dr. Simon M. Lucas (SMIEEE) received the BSc degree in computer systems engineering from the University of Kent, UK, in 1986 and the PhD degree from the University of Southampton, UK, in 1991, having worked for a year in between as a research engineer for GEC Avionics. After a one-year postdoctoral research fellowship (funded by British Telecom) he was appointed to a lectureship at the University of Essex in 1992 and is currently a reader in computer science there.
His main research interests are evolutionary computation, games, and pattern recognition, and he has published widely in these fields with over 120 refereed papers, mostly in leading international conferences and journals. He was chair of IAPR Technical Committee 5 on Benchmarking and Software (2002-2006) and is the inventor of the scanning n-tuple classifier, a fast and accurate OCR method. He was appointed inaugural chair of the IEEE CIS Games Technical Committee in July 2006, has been competitions chair for many international conferences, and co-chaired the first IEEE Symposium on Computational Intelligence and Games in 2005. He was program chair for IEEE CEC 2006 and program co-chair for IEEE CIG 2007, and will be program co-chair for PPSN 2008. He is an associate editor of IEEE Transactions on Evolutionary Computation and the Journal of Memetic Computing. He was an invited keynote speaker at IEEE CEC 2007.
Second Order Cybernetics: an historical introduction
Wednesday 28 November, 4pm
Bernard Scott
Senior Lecturer in Electronically-Enhanced Learning, Cranfield University, Defence Academy, Shrivenham
In 1974, Heinz von Foerster articulated the distinction between a first order and a second order cybernetics, as, respectively, the cybernetics of observed systems and the cybernetics of observing systems. Von Foerster’s distinction, together with his own work on the epistemology of the observer, has been enormously influential on the work of a later generation of cyberneticians. It has provided an architecture for the discipline of cybernetics, one that, in true cybernetic spirit, provides order where previously there was variety and disorder. It has provided a foundation for the research programme that is second order cybernetics. However, as von Foerster himself makes clear, the distinction he articulated was immanent right from the outset in the thinking of the early cyberneticians, before, even, the name of their discipline had been coined. In this paper, I give a brief account of the developments in cybernetics that led to von Foerster’s making his distinction. As is the way of such narratives, it is but one perspective on a complex series of events. Not only is my account a personal perspective, it also includes some recollections of events that I observed and participated in at first hand.
Dr Bernard Scott is Head of the Flexible Learning Support Centre, Cranfield University, Defence College of Management and Technology, Defence Academy, Shrivenham, Wiltshire, UK. Previous appointments have been with: the University of the Highlands and Islands Millennium Institute, De Montfort University, the Open University and Liverpool John Moores University. Between 1967 and 1978, he worked with Gordon Pask at System Research Ltd, developing conversation theory and computer-based systems for teaching, course assembly and knowledge elicitation. Dr Scott’s research interests include: theories of learning and teaching; course design and organisational change; foundational issues in systems theory and cybernetics. He has published extensively on these topics. Dr Scott is a Fellow of the UK Cybernetics Society and an Associate Fellow of the British Psychological Society. Dr Scott is President of Research Committee 51 (on Sociocybernetics) of the International Sociological Association.
The interplay between evolution and development: the case of pointing gestures
Wednesday 5 December, 4pm
Juan Gomez
Lecturer, School of Psychology, University of St Andrews
In this talk I address the issue of the interaction between evolutionary and developmental processes, using the case of pointing gestures as an illustration. Pointing is probably a universal communicative behaviour among humans that can be used in very complex ways. It emerges relatively early in ontogeny and has been claimed to be a precursor to some of the most complex cognitive achievements of humans (language and Theory of Mind). In apes, in contrast, manual pointing is not a natural behaviour that can be observed in the wild. However, when reared in captivity, apes seem to spontaneously develop whole-hand, or occasionally even index-finger, pointing gestures that are both similar to and different from the pointing gestures of human infants. I will discuss different models of what this case can tell us about how development and evolution interact in creating behavioural and cognitive adaptations.
Dr. Juan-Carlos Gómez obtained his PhD in 1992 from the Department of Developmental Psychology at the Universidad Autónoma de Madrid, with a study on the development of intentional communication in young captive gorillas. In 1995 he was a postdoc at the MRC Cognitive Development Unit, London, with Prof. A. Karmiloff-Smith. In 1996 he became a Lecturer at the School of Psychology, University of St Andrews, where he is currently working as Reader in Psychology. His research interests include the comparative study of early communication and theory-of-mind skills in non-human primates and in typically and atypically developing children. He is a founding member of the Social Learning and Cognitive Evolution Centre, University of St Andrews, and director of project REFCOM, on the origins of referential communication, funded by the FP6 programme of the European Union. He is the author of Apes, Monkeys, Children and the Growth of Mind (Harvard University Press), and Associate Editor of Developmental Science.
Head, heart and hand: Exploring the psychology of art
Wednesday 12 December, 4pm
Chris McManus
Professor of Psychology and Medical Education at University College London
Despite the arts and aesthetics dominating so much of everyday life, there has been surprisingly little effort in psychology to understand what is going on. There have been 'big' theories, but I will suggest that these often fail, and are almost embarrassing in their attempts to explain the rich variety of the arts with relatively limited explanatory tools. Instead I will take the line that psychology needs at present to dig down into a detailed understanding of relatively limited but nevertheless illustrative phenomena, and I will describe some such cases.
Chris McManus is a Fellow of the Academy of Medical Sciences. He qualified originally as a doctor, intercalating a degree in psychology. His PhD was on the genetics of handedness and cerebral lateralisation, a topic in which he still has a great interest: he edits the journal Laterality and wrote the popular book Right Hand, Left Hand, which won the Aventis Prize in 2003. His interest in experimental aesthetics began with his undergraduate project; he has published a series of experimental papers over many years and is currently working on a book on the psychology of the arts.
Autonomy and automaticity
Wednesday 17 January, 4pm
Tillmann Vierkant
Dept. Philosophy, University of Edinburgh
The idea of human autonomy has recently come under severe pressure from the cognitive sciences. There are by now many experimental results that seem to show that the conscious self does not control the behaviour of the body. In this talk I will examine this challenge and some solutions to it that have been proposed, especially by philosophers. I will argue that philosophers have succeeded in pointing out many weaknesses in the challenge, but I will try to show as well that some parts of the challenge remain untouched by the most important counter arguments and present indeed a serious challenge for practical philosophy.
Dr. Tillmann Vierkant joined the Department of Philosophy at Edinburgh University to lecture on Philosophy of Mind. His PhD, awarded in 2002, investigated philosophical concepts of self in contemporary cognitive science. This research was conducted under the supervision of Professor Wilhelm Vossenkuhl and Professor Wolfgang Prinz at the Max Planck Institute for Human Cognitive and Brain Sciences. He then undertook a three-year post-doctoral research project on the consequences of contemporary cognitive science for practical philosophy as part of the interdisciplinary research project Voluntary Action: Nature and Culture of Willed Actions. This project was led by Prof. Prinz, Prof. Goschke, Prof. Maasen and Prof. Vossenkuhl. Apart from his research, Dr. Vierkant coordinated the project, especially the cooperation with the philosophical board. His current primary research interests include: the relationship between narrative and implicit cognitive processing; theories of volition and freedom of the will informed by contemporary cognitive science; and the importance of phenomenal consciousness for practical philosophy.
Is action based on perception rather than knowledge? The evidence from presence in virtual environments
Wednesday 31 January, 4pm
Mel Slater
Centre de Realitat Virtual (CRV), Edificio U, Universitat Politècnica de Catalunya, Spain.
This talk will present a number of studies of the responses of people to situations and events in immersive virtual environments. The evidence from these studies suggests that people tend to respond to these events as if they are real, in spite of knowing for sure that they are not. Results from studies concerned with the use of virtual environments in psychotherapy will be presented, and also a virtual simulation of the Stanley Milgram obedience experiment. Immersive virtual environments may therefore provide a research tool for social and psychological scientists, and also for policy makers, for investigating problems under laboratory-style conditions that would otherwise not be possible due to practical or ethical constraints.
Prof. Mel Slater is an ICREA Research Professor and works at the Virtual Reality Centre of Barcelona, Universitat Politècnica de Catalunya, Spain. He is also Professor of Virtual Environments at University College London in the Department of Computer Science. His major research interest since the early 1990s has been computer graphics and virtual environments, and he founded the Virtual Environments and Computer Graphics group at UCL in 1995, after moving there from Queen Mary, where he had been Head of the Department of Computer Science from 1993 to 1995. He was an EPSRC Senior Research Fellow from 1999 to 2004 at UCL. He led the EU FET project Presencia from 2002 to 2006, and currently leads the EU FET Integrated Project PRESENCCIA.
Upgrading humans: technical realities and new morals
Wednesday 7 February, 4pm
Kevin Warwick
Department of Cybernetics, University of Reading
In this presentation a look will be taken at how the use of implant technology is rapidly diminishing the effects of certain neural illnesses and distinctly increasing the range of abilities of those affected. An indication will be given of a number of problem areas in which such technology has already had a profound effect, a key element being the need for a clear interface linking the human brain directly with a computer. However, in order to assess the possible opportunities, both human and animal studies from around the world will be reported on.
The main thrust of this lecture will be an overview of Professor Warwick's research, which has led to him receiving a neural implant that linked his nervous system bi-directionally with the internet. With this in place, neural signals were transmitted to various technological devices to directly control them, in some cases via the internet, and feedback to the brain was obtained from sources such as the fingertips of a robot hand, ultrasonic (extra) sensory input and neural signals taken directly from another human’s nervous system. A view will be taken of the prospects for the future, both in the short term as a therapeutic device and in the long term as a form of enhancement, including the realistic potential, in the near future, for thought communication, thereby opening up tremendous commercial potential. Clearly though, an individual whose brain is part human, part machine can have abilities that far surpass those of someone who remains with a human brain alone. Will such an individual exhibit different moral and ethical values to those of a human? If so, what effects might this have on society?
Prof. Kevin Warwick is Professor of Cybernetics at the University of Reading. He carries out research in control theory, robotics, biomedical engineering and artificial intelligence. Kevin has been awarded higher doctorates (DScs) both by Imperial College and by the Czech Academy of Sciences, Prague. He was presented with The Future of Health Technology Award from MIT (USA), was made an Honorary Member of the Academy of Sciences, St Petersburg, was awarded the University of Malta medal from the Edward de Bono Institute, and in 2004 received the IEE Achievement Medal. In 2000 Kevin presented the Royal Institution Christmas Lectures, entitled 'The Rise of The Robots'. He has also presented these lectures in Japan, China and Korea.
The mind beyond the skin: intermental thought in the novel
Wednesday 21 February, 4pm
Alan Palmer
Department of Linguistics and English Language, Lancaster University, UK.
After referring briefly to the 'cognitive turn' that has been taking place in narratology since the 1990s, I will discuss the study of fictional minds. Readers enter the storyworlds of novels and then follow the logic of the events that occur in them primarily by attempting to reconstruct the fictional minds of the characters in those storyworlds. These reconstructions by readers of the minds of characters are central to our understanding of how novels work, because fictional narrative is, in essence, the description of fictional mental functioning. The lecture will then consider intermental thought in the novel. Such thinking is joint, group, shared or collective, as opposed to intramental, or individual or private thought. It is also known as socially distributed, situated or extended cognition, and also as intersubjectivity. Intermental thought is a crucially important component of fictional narrative because much of the mental functioning that occurs in novels is done by large organizations, small groups, work colleagues, friends, families, couples and other intermental units. It could plausibly be argued that a good deal of the subject matter of novels is the formation, development and breakdown of these intermental systems. However, this topic is completely absent from traditional narrative theory. To illustrate, I will discuss the presentation of intermental thought in the opening few pages of George Eliot's Middlemarch. I will go much further than simply suggesting that the town of Middlemarch provides a social context within which individual characters operate, and will argue that the town literally and not just metaphorically has a mind of its own. I call it the Middlemarch Mind.
Alan Palmer is an independent scholar living in south east London. His book Fictional Minds (University of Nebraska Press, 2004) was a co-winner of the MLA Prize for Independent Scholars and also a co-winner of the Perkins Prize (awarded by the Society for the Study of Narrative Literature). He was a judge for the 2006 Perkins Prize. He has contributed essays to the journals Narrative, Style and Semiotica, as well as chapters to a number of edited collections including Narrative Theory and the Cognitive Sciences (ed. David Herman). Alan Palmer is an honorary research fellow in the Department of Linguistics and English Language at Lancaster University and his chief areas of interest are narratology, cognitive poetics and cognitive approaches to literature, the cognitive sciences and the study of consciousness, the nineteenth century novel, modernism and the history of country and western music.
Attentional blink on the right, alien hand on the left: ERP studies on the fragile connection and competition between hemispheres
Wednesday 28 February, 4pm
Rolf Verleger
Neurophysiologie der Kognition, Germany
The presence of the hemi-neglect syndrome after lesions of the right cerebral hemisphere leads to the assumption that the right hemisphere somehow controls perception. The mechanisms of competition between the right and left hemispheres were studied here in healthy participants and in G.H., an exceptional neurological patient. In healthy participants, we used a two-stream version of the attentional blink paradigm. In this task, left-hemifield targets are drastically better identified than right-hemifield targets. ERP measurement of interhemispheric differences revealed right-hemisphere advantages in speed, in continuous engagement, in modification of activation by expectancy, and in rapid interruption of other-hemisphere activation. Similar observations can be made in G.H., who suffers from split-brain symptoms and lacks conscious control of his left hand. Among other fascinating findings, ERP measurement during perceiving and responding to centrally presented stimuli again revealed right-hemisphere advantages, both in speed of perception and in motor activation. Further, ERPs showed compensatory processing for the missing transfer of motor information between hemispheres. Thus, our brains are much more asymmetrically constructed than is usually realized.
Prof. Dr. Rolf Verleger studied psychology at the University of Konstanz, Germany, where he completed his diploma thesis in 1976 on event-related EEG potentials (ERPs) in schizophrenia. He then held research positions at the Central Institute of Mental Health in Mannheim, Germany, and at the department of psychology in Tübingen, Germany, where he received his Ph.D. Since 1988 he has held the position of neuropsychologist at the department of neurology at the University of Lübeck, Germany, seeing patients and doing research on functions and dysfunctions in the cerebral control of higher cognitive functions. In 1998 he was awarded the title of professor. He was president of the German Society of Psychophysiology from 2000 to 2005.
Multisensory contributions to 'touch': recent findings
Wednesday 7 March, 4pm
Charles Spence
Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University
The last few years have seen a growing realization amongst scientists that human perception is inherently multisensory. In particular, a rapidly growing body of research now highlights the existence of important connections between the human senses of sight, hearing, touch, smell, and taste. One consequence of the multisensory nature of our perceptual experience is that changing what a person sees can change what they feel when they touch/interact with an object/surface. Similarly, research now suggests that changing what an object or surface sounds like, even what it smells like, can also change how it will be perceived, evaluated, and ultimately used. In this talk, I hope to illustrate how our growing understanding of the rules governing multisensory perception (derived from the field of cognitive neuroscience research) demonstrates just how multisensory what we introspectively think of as tactile perception, or the sense of touch, really is. I also hope to highlight some of the challenges that one needs to face when trying to apply laboratory-based research findings to account for our real-world tactile interactions.
Prof. Spence's research is primarily directed at topics related to attention and information-processing within a multisensory setting. He is particularly interested in questions related to the role of attention in multisensory perception, and much of the work involves the investigation of multisensory illusions such as the rubber hand illusion. Prof. Spence is also interested in investigating how our understanding of multisensory perception can be used in a consumer psychology setting to improve the perception of everything from everyday objects to foods and indoor environments. Prof. Spence has been the recipient of the 10th Experimental Psychology Society prize (2002), the British Psychological Society Cognitive Section award (2002), the Paul Bertelson medal from the European Society for Cognitive Psychology, recognizing him as the 'young European Cognitive Psychologist of the year' (2003), and the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation, Germany: "in recognition of past accomplishments in research and teaching".
Technological metaphors for the soul
Wednesday 14 March, 4pm
Chris Beckett
Institute of Health and Social Care, Anglia Ruskin University
Science fiction is typically seen as being about technology, science and the future: a form of fiction-writing that allows us to perform thought-experiments about technological change and its impact on human beings. But science fiction writers aren't just doing thought-experiments about the future. Often they are using imaginary technology and bizarre worlds to explore very much the same kinds of questions as other writers and artists: questions about what it is to be human, how we relate to one another and to the world. From this angle, science fiction is not primarily about technology or the future, but uses technological and other 'scientific' devices to provide metaphors through which the writer can express intuitions and reflections about life, experience, existence, identity, authenticity, and our place in the world. What are the particular attractions of technological metaphors, and why use them rather than just writing about the world 'as it actually is'? I am a writer who did not deliberately set out to write only science fiction, but who has found science-fictional devices indispensable for saying what I want to say. In this talk I will try to explain how and why I do this, and what I feel it enables me to do.
Chris did not set out to be a science fiction writer. He doesn't feel that he is especially hooked on writing about science or the future. His stories are usually about his own life and the things he struggles to make sense of. But, for some reason, they almost always end up being science fiction. He thinks he likes the freedom it gives him to invent things and play with ideas. One thing he likes about writing fiction is finding things emerging in his own stories which he wasn't conscious of, a bit like a dream whose symbolism only slowly dawns on the dreamer. Some of his stories draw on his experience as a social worker. He now lectures on social work at Anglia Ruskin University and is the author of several texts in this area. In case anyone wonders whether this is the same Chris Beckett: it is!
Shadows of artistry on the cortical canvas of functional connectivity patterns
Wednesday 21 March, 4pm
Joydeep Bhattacharya
Goldsmiths, University of London
Across races and cultures, we love music, appreciate visual art, and produce novel ideas through creative imagination. Music, visual art and cognition are deeply interrelated, acting like convex mirrors, each reflecting and amplifying the others. Yet the simplest questions, such as “How do we perceive natural music? Does everyone listen to music in the same way? Why does someone prefer pop over classical? What are the neural correlates of the perception of visual art? How does an artist mentally compose an artwork?”, are yet to be completely answered. Brain imaging and lesion studies have been successful in localizing brain activity during higher cognitive performance. However, it is becoming increasingly established that near and distant brain areas not only become co-active but also functionally co-operative, leading to a dense brain network of functionally connected brain regions. To assess the underlying network connectivity, we recorded multichannel EEG signals during such higher cognitive tasks and analysed them using new analytical measures. In this talk, I will present the results of functional connectivity analyses underlying human expertise in music and in visual art, during the perception of music and visual art, and during creative imagery.
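One widely used connectivity measure of this general kind is the phase-locking value between pairs of channels. The sketch below is illustrative only, using simulated signals and a standard textbook formulation rather than the specific measures of the talk:

```python
# Phase-locking value (PLV) between two channels in a chosen band, a common
# EEG functional-connectivity measure. Requires numpy and scipy.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(8.0, 12.0)):
    """PLV between signals x and y, band-pass filtered to the given band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    # Mean resultant length of the phase difference: 1 = perfect locking.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Two noisy channels sharing an alpha-band (10 Hz) component.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 10 * t)
ch1 = shared + rng.normal(scale=1.0, size=t.size)
ch2 = shared + rng.normal(scale=1.0, size=t.size)
print("PLV:", plv(ch1, ch2, fs))   # close to 1 for strongly coupled channels
```

Computing such a value for every channel pair yields the dense connectivity network described in the abstract.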
Dr. Bhattacharya received his PhD from the Indian Institute of Technology. Later he was associated with the Max Planck Institute for the Physics of Complex Systems, Germany, as a DAAD Fellow, and with the California Institute of Technology, USA, as a Sloan Fellow. After working at the Austrian Academy of Sciences as a tenured Senior Scientist for several years, he moved to Goldsmiths last October. He is fascinated by ever-present brain rhythms and synchronizations and tries to understand the higher cognitive functioning of the human brain.
Is Language Built on Song?
Wednesday 11 October, 4pm
Prof Bob Turner
Wellcome Department of Imaging Neuroscience, FIL, UCL, UK
Recent experimental findings suggest that during child development musical communication precedes language, and language areas in the brain have been shown to form a subset of those concerned with music. For music, areas in both cerebral hemispheres are often engaged, while language tasks usually activate predominantly left-hemisphere regions. A recent fMRI study compares brain responses to hearing familiar songs with those to hearing spoken versions of the same songs by the same speakers. This confirms the extensive overlap between song and speech, and reveals auditory-motor template areas involved in detecting consonance. Tonal sequences of notes appear to have a special importance, both for infant music perception and for brain activation in adult hearers. It will be argued that the structures of tonal music, the prerequisite of musical harmony, which have an intrinsic connection to how we hear and produce sound, shape our brains and provide templates within our brains for structuring sound so that it can become meaningful for us in the form of language.
Robert Turner, a physicist, anthropologist and mathematician by training, worked as a Lecturer in Physics at Nottingham University from 1984-88, collaborating with the Nobel Prize-winning Sir Peter Mansfield on the development of ultra-fast echo planar MRI (EPI) and the design of MRI gradient coils. Pursuing his interest in using MRI to study brain function, he took a position as Visiting Scientist, NIH, in the USA in 1988, and pioneered Diffusion Weighted EPI, now used widely in stroke research, and Blood Oxygenation Level Dependent (BOLD) functional MRI at the high magnetic field of 4 T in humans. He returned to England in 1994 to help establish, in London, the world's first purpose-built lab for the study of human brain function with MRI. He is now a research professor in the Functional Imaging Laboratory and the High Field MR Research Laboratory, University College London, optimizing BOLD fMRI and other MRI techniques for neuroscience. His interest in music is personal as well as professional, and he has established a Music and Brain Club that meets several times per term in his laboratory in Queen Square, London.
Learning and consciousness
Wednesday 18 October, 4pm
Prof Axel Cleeremans
Université Libre de Bruxelles CP 191
One way to approach the problem of consciousness involves exploring the differences between conscious and unconscious cognition. Striking dissociations between subjective experience and behavior have now been reported in various domains from memory to learning, from perception to action. Yet, the extent to which information processing can take place without consciousness remains controversial, in part because of the substantial methodological challenges associated with demonstrating unconscious cognition, in part because of conceptual differences in the manner in which such dissociations are interpreted. In this talk, I overview recent relevant findings, and sketch a novel conceptual framework that takes it as a starting point that conscious and unconscious cognition are rooted in the same set of learning and processing mechanisms. On this view, the extent to which a representation is conscious depends in a graded manner on properties such as its stability in time or its strength.
Crucially, these properties are accrued as a result of learning, which is in turn viewed as a mandatory process that always accompanies information processing. In this light, I will report on several recent experiments in which we manipulated the temporal relationships between events and show that such manipulations influence the extent to which learning is conscious or not. A first implication of these ideas is that consciousness takes time. A second is that the main function of consciousness is to make flexible adaptive control over behavior possible. A third, much more speculative implication, is that we learn to be conscious. The conscious self, from this perspective, involves a ‘virtual other’ simulated by your brain as a result of having learned about the consequences of actions directed towards other agents over life-long interactions with them. I conclude that while learning without consciousness is definitely possible, consciousness without learning is not.
Axel Cleeremans, Ph.D. (1991), is a Professor at the Université Libre de Bruxelles and a Research Director with the National Fund for Scientific Research (Belgium). Cleeremans currently heads the Cognitive Science Research Unit at the Université Libre de Bruxelles and coordinates an advanced degree in Cognitive Science. Trained in neural network modeling at Carnegie Mellon University under the supervision of J.L. McClelland, Cleeremans' main research interests are in understanding the differences between learning with and without consciousness, and, more generally, in the mechanisms that underpin consciousness itself. Cleeremans currently acts as president of the Belgian Association for Psychological Science, and is also a member of the executive committee of the Association for the Scientific Study of Consciousness and of the board of the European Society for Cognitive Psychology.
Walking Here & There
Wednesday 25 October, 4pm
Simon Pope and Vaughan Bell
Institute of Psychiatry, Cardiff School of Art and Design and Goldsmiths College
In ‘Walking Here & There’, artist Simon Pope and psychologist Vaughan Bell investigate the interaction of place and memory in psychosis, and particularly in reduplicative paramnesia, the delusional belief that a place exists in two or more locations simultaneously. The project's immediate aims are to develop an experimental framework through which to explore this otherwise exceptional condition, and through this to investigate relationships between space, place, mobility, delusion and memory. These themes are further developed in Pope's solo-exhibition, 'Gallery Space Recall' at Chapter in Cardiff, (6th-7th November 2006). ‘Walking Here And There’ is a research & development project supported by the Wellcome Trust's SciArt Fund.
Vaughan Bell is a clinical psychologist in training and researcher studying the neuropsychology of delusions. He currently works between the Institute of Psychiatry and the South London and Maudsley NHS Trust. Simon Pope is an artist researching walking as a contemporary visual art practice. He is a Research Associate at Goldsmiths’ Digital Studios and Transmedia Brussels, a Senior Lecturer at Cardiff School of Art & Design, and former NESTA Fellow.
Exemplifying learning, memory and creativity in a prodigious musical savant
Wednesday 1 November, 4pm
Dr Adam Ockelford
Director of Education, Royal National Institute of the Blind
This presentation offers preliminary findings and analysis from work with a single case study, Derek Paravicini, who despite having severe learning difficulties (verbal IQ = 58) has achieved international recognition as a pianist specialising in early jazz. The work with Derek is part of the 'REMUS' ('Researching Exceptional Musical Skill') Project, a joint initiative of the Royal National Institute of the Blind and the Psychology Department of Goldsmiths College. In the study presented, I will report on how Derek learnt a specially composed piece over a period of two years, with his responses recorded through MIDI-based software and analysed using music-theoretical procedures. The findings offer both qualitative and quantitative insights into the workings of an exceptional musical mind, as well as having potential implications for our understanding of learning, memory and creativity more generally.
Dr Adam Ockelford is currently Director of Education at the Royal National Institute of the Blind, Visiting Research Fellow at the Institute of Education, University of London, and the University of Roehampton, and secretary of SEMPRE, the Society for Education, Music and Psychology Research. His wide-ranging research interests include the cognition of musical structure, the derivation of musical meaning and, in recent years, a focus on special musical abilities and special musical needs, culminating in a series of studies on 'musical savants' in collaboration with Professor Linda Pring at Goldsmiths College. He has published and lectured widely in all these fields.
Embodied simulation: From mirror neurones to social cognition
Wednesday 8 November, 4pm
Prof Vittorio Gallese MD
Universita Degli Studi Di Parma: Dept. Neuroscience
Our seemingly effortless capacity to conceive of the acting bodies inhabiting our social world as goal-oriented persons like us depends on the constitution of a “we-centric” shared meaningful interpersonal space. I have proposed that this shared manifold space can be characterized at the functional level as embodied simulation, a specific mechanism, likely constituting a basic functional feature by means of which our brain/body system models its interactions with the world. The mirror neuron systems and the other non-motor mirroring neural clusters in our brain represent one particular sub-personal instantiation of embodied simulation. With this mechanism we do not just “see” an action, an emotion, or a sensation.
Side by side with the sensory description of the observed social stimuli, internal representations of the body states associated with these actions, emotions, and sensations are evoked in the observer, ‘as if’ he/she would be doing a similar action or experiencing a similar emotion or sensation. Social cognition is not only explicitly reasoning about the contents of someone else’s mind. Our brains, and those of other primates, appear to have developed a basic functional mechanism, embodied simulation, which gives us an experiential insight of other minds. This proposal opens new perspectives on the study of the neural underpinnings of psychopathological states and psychotherapeutic relations, and of other aspects of intersubjectivity like aesthetic experience and ethics.
Vittorio Gallese, MD and Neurologist, is Professor of Physiology at the Dept. of Neuroscience of the University of Parma, Italy. As a cognitive neuroscientist he focuses his research interests on the relationship between the sensory-motor system and cognition, both in non-human primates and in humans, using a variety of neurophysiological and neuroimaging techniques. Among his major contributions are the discovery, together with colleagues in Parma, of mirror neurons, and the elaboration of a theoretical model of basic aspects of social cognition. He is actively developing an interdisciplinary approach to the understanding of intersubjectivity and social cognition in collaboration with psychologists, psycholinguists and philosophers. He has been George Miller Visiting Professor at the University of California at Berkeley.
Pictorial space
Wednesday 15 November, 4pm
Prof Jan Koenderink DSc
Helmholtz Institute, Utrecht University
When you look AT a picture you see a planar object covered with pigments in a certain simultaneous order; when you look INTO a picture you experience "pictorial space". Pictorial space is filled with "pictorial objects" that appear to have positions, spatial attitudes, shapes and material properties. "Pictorial shape" is a geometrical property that is a purely mental entity based on "pictorial cues" but contains a significant "beholder's share". In many cases human observers exploit the cue structure completely; the beholder's share then coincides with the ambiguity left by the cues. The beholder's share may be identified with the group of proper movements or congruences of pictorial space, thus defining its (non-Euclidean) structure. "Pictorial shapes" are invariants under these congruences.
Jan Koenderink graduated in Physics and Mathematics in 1967 at Utrecht University. He was associate professor in Experimental Psychology at the Universiteit Groningen, then in 1974 returned to the Universiteit Utrecht, where he presently holds a chair in the Department of Physics and Astronomy. He founded the Helmholtz Instituut, in which multidisciplinary work in biology, medicine, physics and computer science is coordinated. He has received an honorary degree (D.Sc.) in Medicine from the University of Leuven and is a member of the Royal Netherlands Academy of Arts and Sciences. His current interests include the mathematics and psychophysics of space and form in vision and active touch, the structure of perceptual spaces, and ecological physics, including applications in art and design.
Ostwald colour science
Thursday 16 November, 12pm
Prof Jan Koenderink DSc
Helmholtz Institute, Utrecht University
Wilhelm Ostwald's contributions to colorimetry dominated the colour science of continental Europe in the first half of the 20th century, yet have almost totally vanished from the textbooks of today. It remains virtually unknown that Ostwald's colour atlas is a conceptual entity based on formal colorimetry, as opposed to the Munsell atlas (in current use), which is based upon eye measures. The key ideas can easily be formalized and are remarkably elegant and useful if only a few minor issues are dealt with. The result is a happy synthesis between two (apparently) mutually exclusive threads: one the Newton-Maxwell-Helmholtz-Schroedinger tradition, the other the Goethe-Schopenhauer-Hering-Ostwald tradition.
Jan Koenderink graduated in Physics and Mathematics in 1967 at Utrecht University. He was associate professor in Experimental Psychology at the Universiteit Groningen, then in 1974 returned to the Universiteit Utrecht, where he presently holds a chair in the Department of Physics and Astronomy. He founded the Helmholtz Instituut, in which multidisciplinary work in biology, medicine, physics and computer science is coordinated. He has received an honorary degree (D.Sc.) in Medicine from the University of Leuven and is a member of the Royal Netherlands Academy of Arts and Sciences. His current interests include the mathematics and psychophysics of space and form in vision and active touch, the structure of perceptual spaces, and ecological physics, including applications in art and design.
Seeing through a Bayes window
Wednesday 22 November, 4pm
Prof Richard Gregory FRS
University of Bristol, UK.
How do we recognise shadows? A dark region might be a shadow, or it might be a patch of paint. Probabilities come into play, which may be described and explained with Bayesian concepts. The Reverend Thomas Bayes considered probabilities from gambling in the 18th century. Now these concepts provide tools and insights into perceptual brain function.
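As a toy illustration of the kind of inference involved (all numbers invented, purely for exposition), Bayes' rule combines a prior over "shadow vs. paint" with the likelihood of observing a dark region:

```python
# A toy version of the shadow-or-paint question as Bayesian inference.
# All probabilities are invented for illustration:
#   P(shadow | dark) = P(dark | shadow) * P(shadow) / P(dark)
p_shadow = 0.7                    # prior: dark regions are usually shadows
p_paint = 1 - p_shadow
p_dark_given_shadow = 0.9         # shadows almost always look dark
p_dark_given_paint = 0.5          # dark paint is less reliably dark-looking

p_dark = p_dark_given_shadow * p_shadow + p_dark_given_paint * p_paint
posterior = p_dark_given_shadow * p_shadow / p_dark
print(f"P(shadow | dark) = {posterior:.2f}")   # approx. 0.81
```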
Richard Gregory went to Cambridge just after the war, and stayed on as a Lecturer and a Fellow of Downing and Corpus Christi Colleges, running the Special Senses Laboratory. He then went to Edinburgh to start, with two colleagues, the first department of Artificial Intelligence in Europe. After this, he became Professor of Neuropsychology and Director of the Brain and Perception Laboratory in the Medical School in the University of Bristol. He has written about 20 books on perception and philosophy of science and has done a large variety of research, much of it on illusions. He is a Fellow of the Royal Society and a CBE.
The Human in the Loop - Challenges of Human-Robot Interaction Experiments
Wednesday 29 November, 4pm
Prof Kerstin Dautenhahn
University of Hertfordshire
This talk will address the challenges of experiments involving robots and people. The field of Human-Robot Interaction (HRI) is a growing multi- and interdisciplinary domain that requires a synthesis of concepts, models and methods from a variety of disciplines including psychology, robotics, ethology, social sciences and computer science. After introducing basic concepts used in the field I will survey two projects that emphasize the “human in the loop” of interaction experiments. First, within the Cogniron project (www.cogniron.org) the team at the University of Hertfordshire studies a cognitive robot companion. Such a companion should a) be able to carry out useful tasks in a home scenario, and b) perform these tasks in a manner that is socially acceptable to people.
Results from recent studies in the “Robot House”, a domestic environment where people can interact with robots in a more naturalistic setting, will be presented. Next, I will describe our research goals and results in the Aurora project (www.aurora-project.com), which investigates the potential use of robots as therapeutic toys for children with autism. I will describe several studies that emphasize how the robot may encourage social interaction skills, imitation and joint attention in children with autism. Importantly, the primary goal of this project is to help children with autism make contact with other children and adults, i.e. the robot serves as a social mediator.
Prof. Dr. Kerstin Dautenhahn received her Ph.D. degree from the Biological Cybernetics Department of the University of Bielefeld, Bielefeld, Germany, in 1993. She is Professor of Artificial Intelligence in the School of Computer Science and coordinator of the Adaptive Systems Research Group at the University of Hertfordshire in England. She has published more than 100 research articles on social robotics, robot learning, human-robot interaction and assistive technology. Prof. Dautenhahn has edited several books and frequently organises international research workshops and conferences. For example, she hosted the AISB’05 convention and was General Chair of IEEE RO-MAN 2006. She is involved in several European projects and is Editor-in-Chief of the journal Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems.
Morphic resonance and memory
Wednesday 11 January, 4pm
Dr. Rupert Sheldrake
Institute of Noetic Sciences, San Francisco & University of Cambridge, UK
We usually assume that nature is governed by fixed laws. But we live in an evolutionary universe. Perhaps the laws of nature evolve: they may in fact be more like habits. Rupert Sheldrake will summarize his hypothesis of morphic resonance, according to which all species draw upon a collective memory. In the human realm this hypothesis leads to a new way of thinking about what C.G. Jung called the collective unconscious. It also leads to a radical reinterpretation of the nature of animal and human memory.
Rupert Sheldrake, Ph.D., is a biologist and author of more than 75 technical papers and six books, the most recent being The Sense of Being Stared At, and Other Aspects of the Extended Mind. He is a Fellow of the Institute of Noetic Sciences, near San Francisco, and the Perrott-Warrick Research Scholar, funded by Trinity College, Cambridge.
Why neither brains nor computers can be conscious, but cells might be
Wednesday 18 January, 4pm
Prof. Jonathan Edwards
University College London
In around 2001 Steven Sevush (1) and Jonathan Edwards (2) independently came to the conclusion that phenomenal consciousness must be a property of an individual cell, not of a group of cells. This conclusion at first seems bizarre and even terrifying. However, it may make a number of things easier to explain: the seamlessness of consciousness, the evolution of consciousness from protozoal times, the layout of the brain, and the strange reports of people with damaged brains. The idea requires there to be a phenomenon within cells which is notionally 'quantised', but rests otherwise on a purely classical biophysical analysis with no need for 'quantum computation', entanglement or suchlike. The current proposal is that all that is required is conventional cable-theory-based neurophysiology plus a piezoelectric field, possibly of the type already known to exist in cochlear hair cells. It is not suggested that single cells think, but merely that each is a separate observer and that there is no more global observer in our heads. Despite all attempts so far it has proved impossible to discover why this idea might be wrong, but suggestions are welcome.
A research paper published by UCL’s Jonathan Edwards, Professor in Connective Tissue Medicine, has been named as one of the fastest-breaking publications in the world by Thomson ISI Essential Science Indicators, which track the rate of citations generated by academic papers across all academic subjects. His research ‘Efficacy of B-Cell Targeted Therapy with Rituximab in Patients with Rheumatoid Arthritis’ was published in the ‘New England Journal of Medicine’ in 2004. The number and rate of citations that an academic receives for their papers is considered to be a measure of their influence, and hence the report shows him to be at the forefront of his discipline.
As well as his work in arthritis, Professor Edwards also conducts research into the nature of consciousness. He has a paper just published in the ‘Journal of Consciousness Studies’ which argues that conscious experience must be a property of individual cells, rather than a global property of the brain, as has been previously assumed.
The perceptual structure of color corresponds to singularities in reflection properties
Wednesday 25 January, 4pm
Dr. Kevin O'Regan
Laboratoire de Psychologie Expérimentale, Université René Descartes, Paris
Psychophysical studies suggest that different colors have different perceptual status: red and blue for example are thought of as elementary sensations whereas yellowish green is not. The dominant account for such perceptual asymmetries attributes them to specificities of the neuronal representation of colors. Alternative accounts involve cultural or linguistic arguments. What these accounts have in common is the idea that the physics of light and surfaces provide no reasons for the existence of asymmetries that could underlie the perceptual structure of colors, and this is why neuronal or cultural processes must be invoked as the essential underlying mechanisms that structure color perception.
Here, we suggest a biological approach to surface reflection properties that takes into account only the information about light that is accessible to an organism given the photopigments it possesses, and we show that asymmetries now appear in the behavior of surfaces with respect to light. These asymmetries provide a classification of surface properties that turns out to be identical to the one observed in linguistic color categorization across numerous cultures, as pinned down by cross-cultural studies. Further, we show that data from psychophysical studies of unique hues and hue cancellation are consistent with the view that the stimuli reported by observers as special are those associated with singular surface properties under a standard illuminant. The approach also predicts that unique blue and unique yellow should be aligned in chromatic space while unique red and unique green should not, a fact usually considered to result from nonlinearities in chromatic pathways.
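The core computation behind such an approach can be sketched as follows. The Gaussian pigment sensitivities, flat illuminant and example reflectances below are stand-ins chosen purely for illustration, not the actual cone fundamentals or data of the study:

```python
# Sketch: reduce a surface's reflectance spectrum to the responses of the
# eye's photopigments under an illuminant. Sensitivities and spectra here
# are illustrative stand-ins, not real colorimetric data.
import numpy as np

wl = np.arange(400, 701, 5, dtype=float)            # wavelengths in nm

def gaussian(center, width=40.0):
    """Idealized bell-shaped pigment sensitivity curve."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

cones = np.stack([gaussian(c) for c in (565, 540, 445)])   # "L, M, S"
illuminant = np.ones_like(wl)                       # idealized flat spectrum

def cone_responses(reflectance):
    """Integrate reflectance x illuminant x sensitivity for each pigment."""
    return cones @ (reflectance * illuminant)

# A surface reflecting mostly long wavelengths ("reddish") versus flat grey.
reddish = 1 / (1 + np.exp(-(wl - 600) / 15))
grey = np.full_like(wl, 0.5)
print("reddish:", np.round(cone_responses(reddish), 1))
print("grey:   ", np.round(cone_responses(grey), 1))
```

The argument in the abstract then turns on how such pigment-relative descriptions of surfaces behave across changes of illuminant: some surfaces behave singularly, and those, it is claimed, are the ones languages and observers treat as special.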
After studying theoretical physics at Sussex and Cambridge Universities, Kevin O'Regan moved to Paris in 1975 to work in experimental psychology at the Centre National de la Recherche Scientifique. Following his Ph.D. on eye movements in reading, he showed the existence of an optimal position for the eye to fixate in words. His interest in the problem of the perceived stability of the visual world led him to question established notions of the nature of visual perception, and to discover, with collaborators, the phenomenon of "change blindness". His current work involves exploring the empirical consequences of a new "sensorimotor" approach to vision and sensation in general. He is particularly interested in the problem of the nature of phenomenal consciousness, which he addresses experimentally in relation to sensory substitution, and theoretically in relation to color perception. He is interested in applying this work to robotics. Kevin O'Regan is currently director of the Laboratoire de Psychologie Expérimentale, CNRS, Université Paris 5.
Creativity in music performance: treading the line between the provocative and the outrageous
Wednesday 1 February, 4pm
Dr. Aaron Williamon
Royal College of Music, London
Today's most distinguished performing musicians are people who offer new musical possibilities to their audiences. Yet, although we may appreciate innovative performances, there seems to be a limit to our acceptance of novelty before we reject it as unmusical, inappropriate or tasteless. Bound by cultural traditions and stylistic norms, innovative performers must tread a fine line between the unique and the downright outrageous. Current discourse on creativity often conflates three quite distinct concepts: 'creativity', 'originality' and 'value'. Much empirical work in psychology since the 1950s has purported to focus on the first of these. This paper examines the under-researched concepts of originality and value and suggests a means of charting their inter-relationships.
Aaron Williamon is the Research Fellow in Psychology of Music at the RCM, where he heads the Centre for the Study of Music Performance (CSMP). He also holds a research fellowship in the Faculty of Medicine at Imperial College London. His research focuses on music cognition, expert performance and (in particular) applied psychological and health-related initiatives that enable musicians to perform at their peak. His recent book, Musical Excellence, is published by Oxford University Press. It draws together the findings of initiatives from across the arts and sciences, with the aim of offering musicians new perspectives and practical guidance for enhancing performance and managing performance-related stress. In addition, Dr Williamon is interested in how audiences perceive and evaluate music performances and, in 1998, was awarded the Hickman Prize by the Society for Education, Music and Psychology Research (SEMPRE) for his work on this topic. He has performed as a trumpeter in chamber and symphony orchestras, brass bands and brass quintets in both Europe and North America.
Cancelled: Effing the Ineffable: can machines be conscious?
Wednesday 22 February, 4pm
Steve Grand
Cyberlife, UK.
Nothing is more precious to us than our sense of self. Most of us refer to "our" bodies as if they were a possession, while "we" are something else entirely - a conscious mind that is somehow distinct from and (we hope) independent of our corporeal form. One thing we definitely don't want to be told, therefore, is that some kind of jumped-up pocket calculator has achieved a state of consciousness.
Artificial intelligence, aided and abetted by the behavioural sciences, is hence at least partly to blame for the sense of belittlement and nihilism that such an idea generates. If machines can be like us, then we are perhaps "no more than" machines ourselves, and this seems a depressing prospect. Many of us prefer instead to retreat into metaphysics, or simply dodge the issue as if it doesn't matter. But it does matter. Everything from animal welfare to abortion and euthanasia depends upon it.
The mistake, it seems to me, lies not so much in our overblown opinion of ourselves but in our pathetically limited conception of machines. Dualism and reductionistic materialism are not the only alternatives. Admitting that we are a mechanism need not be as demeaning as it sounds, and attempting to make machines that are conscious may throw a valuable and not unflattering light on the nature of being. Discuss.
Steve's first claim to fame was that he was the architect and lead programmer for the computer game Creatures, in which he did what he could to bring a new form of life into existence! He has since written a book about his ideas on life, the universe and everything (but especially intelligence), which was published in 2000 by Weidenfeld & Nicolson and shortlisted for the Aventis Prize in 2001. Steve has also written columns for the Guardian in which he has, amongst other things, further discussed his ideas concerning Cyborgs and Artificial Life.
Enhancing Function with Neurotechnology: Validation for Emotion, Cognition, Immune Function and Performing Arts
Wednesday 1 March, 4pm
Prof. John Gruzelier
Goldsmiths College, University of London
Enhancing function is set to become one of the cultural debates of the decade (Human Reengineering; The Guardian, 30/1). Compared with smart drugs, neurotechnology offers a non-invasive approach. Validation research undertaken at ICL will be reviewed along with projects beginning at Goldsmiths, with a view to inviting collaboration and new initiatives. Validation involved EEG-neurofeedback and Heart Rate Variability (HRV) biofeedback, along with psychological interventions such as self-hypnosis and energy medicine. Neurofeedback involves computer-assisted technology which allows brain rhythms to be recorded, fed back symbolically to the participant in real time, selectively enhanced and brought under voluntary control. This is relatively easy to learn and has even been demonstrated with psychotic patients and autistic children. Learned control of faster rhythms has improved attention and memory in students and ADHD children (P300, CPT, ANT, WM).
Adjacent high spectral bands were capable of opposite effects, while elevating slower theta rhythms (hypnogogia) enhanced artistry to a professionally significant degree in RCM students. Dance performance in university competitions has also benefited from theta training (and from HRV biofeedback), while mood has been elevated in schizotypally withdrawn students. Hypnogogia has historically been associated with the creative process (Koestler). It is theorised that the theta rhythm benefits the creative process by facilitating long-distance connectivity in the brain, allowing novel associations to be made between memory representations, and unconscious processes contributing to self and individuality. The theta rhythm has also been implicated in motivational circuits and in complex sensory-motor integration, dramatically seen in virtuoso music performance as well as in superior sporting achievement. New initiatives include NESTA funding for music performance in primary and secondary schools and adult education, and EU 6FP funding on Creative Presence States, to include the originating and performing arts and sports. Research will continue with children with special needs, and neurorehabilitation and ageing applications are under consideration. Technological developments include audiovisual entrainment and virtual reality adapted for hypnotic visualisation procedures, which have reliably enhanced immune function and health. Further initiatives are invited.
John Gruzelier, Professor of Psychology, joined Goldsmiths in 2006 from Imperial College London as a Professorial Research Fellow to further research on creativity in the Arts and Humanities using neurotechnology, notably EEG-neurofeedback. New grants include an EU grant on Presence, with responsibility for creative presence states in the originating and performing arts, and an award from NESTA to enhance music performance in primary and secondary school children and in adult education. He has over 250 publications spanning schizophrenia, psychosis-proneness, psychophysiological measurement, brain lateralisation, and hypnosis, with current emphasis on functional enhancement with biofeedback for peak performance together with clinical applications including ADHD, and immune enhancement with self-hypnosis and healing. In 2004 he received the Ernest R. Hilgard award of the International Hypnosis Society, and in 2001 the US Society of Clinical and Experimental Hypnosis best clinical paper award for work on the negative effects of hypnosis and stage hypnosis. He co-edited the International Journal of Psychophysiology from 1984 to 2004, and since 2001 has edited Contemporary Hypnosis. He has been President of the British Psychophysiology Society and Vice-President of the Federation of European Psychophysiological Societies, is a Governor of the International Organisation of Psychophysiology, and recently established the Society of Applied Neuroscience.
A role for art in the science of consciousness?
Wednesday 8 March, 4pm
Dr. Ron Chrisley
University of Sussex, UK
Although there are many ways in which science can assist in the production, appreciation and analysis of art, I will explore the converse possibility: the ways in which creative artistic practice and theory can aid our attempts to understand, in a scientific way, subjective experiential states. Specifically, a science of conscious experience requires a systematic phenomenology within which to identify the phenomena to be explained, yet orthodox (linguistic, literal, disembodied, abstract) means of doing so typically fail to capture the richness, affect and subjectivity of consciousness. I propose that scientists and philosophers look to the arts for creative methods and techniques of specifying experiential states. However, any such project faces a considerable challenge: Can it be done in a way that not only does justice to the richness of subjective experiential states, but that also permits rigorous, systematic reference to them? This lecture attempts to lay some of the conceptual groundwork necessary for answering these questions.
Ron Chrisley is Reader in Philosophy at the University of Sussex where he is also Director for Research in Cognitive Science.
Cancelled: The brain from a Biophysicochemist's and Systemicist's point-of-view
Wednesday 15 March, 4pm
Christian Haan
Subtopics: (A) General criteria for a brain's distinctive qualities; (B) Basic notions on systems and on systems' qualities from the viewpoint of biophysicochemistry and systems engineering; (C) The brain viewed and imagined as the organism's governor (cybernetics); (D) Quantitative data on the brain's structural and functional development and its organisational complexity under evolutionary and degenerative constraints; (E) The brain as governor, or as a trial-and-error muddle-through manager? (F) Federico García Lorca's contribution to Brain Science.
Christian Haan is a cybernetician based in Paris who has studied with Changeux, Ashby, Monod, and Pask.
The neural basis of perceptual and cognitive pleasure
Wednesday 22 March, 4pm
Prof. Irving Biederman
Our selection of which movie to see or book to read, whether to stay in a conversation at a party or freshen our drink, and where to look with our next fixation is decidedly non-random. What controls this selection when an individual is not engaged in the classical survival modes of satisfying hunger, avoiding harm, etc.? And how can this expression of interest be manifested in real time, at the rate of three visual fixations per second? The surprising discovery of a gradient of mu-opioid receptors in cortical areas associated with perception and cognition may provide the key to understanding the spontaneous selectivity of perception and thought. These receptors are sparse in the early sensory areas and dense in the association areas.
If we assume that experiences are preferred that maximize this opioid activity, then preferred inputs will tend to be those that are richly interpretable (not just complex) insofar as they would produce high activation of associative connections in areas that have the greatest density of mu-opioid receptors. Once an input is experienced, however, competitive learning would serve to reduce associative activity and hence opioid activity, resulting in habituation and boredom. Behavioral and neuroimaging tests have confirmed this account. This system serves to maximize the rate at which we acquire new but interpretable information--rendering us infovores--and leads to an understanding of the neural basis of aesthetics.
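The habituation argument in the preceding paragraph can be illustrated with a toy calculation. The sketch below is purely illustrative; the decay model and its parameters are invented for this example and are not taken from Biederman's work:

```python
# Toy illustration (not Biederman's model): preference follows an
# "opioid activity" signal proportional to an input's interpretability,
# and each repeated exposure reduces associative activity, producing
# habituation and, eventually, boredom.

def opioid_activity(interpretability: float, exposures: int,
                    habituation_rate: float = 0.3) -> float:
    """Associative (hence opioid) activity after n prior exposures."""
    return interpretability * (1.0 - habituation_rate) ** exposures

for scene, richness in [("plain wall", 0.2), ("harbour vista", 0.9)]:
    trace = [round(opioid_activity(richness, n), 3) for n in range(5)]
    print(f"{scene}: {trace}")
# A richly interpretable input starts high but declines with repetition,
# matching the habituation-and-boredom account in the abstract.
```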
Irving Biederman is the Harold W. Dornsife Professor of Neuroscience at the University of Southern California.
The Body and Soul of Schizophrenia: Contributions from Neurology and Psychiatry
Wednesday 12 October, 4pm
Dr. Jean Oury and Prof. Jacques Schotte
La Borde Clinic and the Université Catholique de Louvain (UCL).
Dr. Jean Oury is an eminent reformer of psychiatry. Now eighty-three years of age, he worked closely with Julián de Ajuriaguerra, the distinguished neurologist who pioneered work on the phantom limb and the cerebral cortex. Dr. Oury spearheaded the post-war reformation of the psychiatric institution and has developed the psychotherapy of schizophrenia for over forty years. He has always stressed the necessity of psychiatrists and psychoanalysts alike being grounded in neurology. At the clinic of La Borde, founded by Dr. Oury in 1953, more schizophrenic patients are currently being treated than in any other establishment in France. Dr. Oury's publications have been translated throughout Europe, the Americas and Asia. Professor Jacques Schotte, emeritus professor of the Université Catholique de Louvain, is a leading scholar on the histories of neurology and phenomenology. For almost fifty years, he has attracted wide acclaim for his scholarship on the psychopathology of Viktor von Weizsäcker. A key writings anthology of Oury, Schotte and Weizsäcker is currently in preparation.
Could we build a conscious robot?
Wednesday 19 October, 4pm
Prof. Owen Holland
Department of Computer Science, University of Essex, UK.
In the last few years a new discipline has begun to emerge: machine consciousness. This talk will describe the background to this movement, and will present a line of thought showing how the problem of constructing a truly autonomous robot may also constitute an approach to building a conscious machine. The basis of the theory is that an intelligent robot will need to simulate both itself and its environment in order to make good decisions about actions, and that the nature and operation of the internal self model may well support some consciousness-related phenomena.
As part of an investigation into machine consciousness, we are currently developing a robot that we hope will acquire and use a self-model similar to our own. We believe that this requires a robot that does not merely fit within a human envelope, but one that is anthropomimetic - with a skeleton, muscles, tendons, eyeballs, etc. - a robot that will have to control itself using motor programs qualitatively similar to those of humans. The early indications are that such robots are very different from conventional humanoids; the many degrees of freedom and the presence of active and passive elasticity do provide strikingly lifelike movement, but the control problems may not be tractable using conventional robotic methods.
The project is limited to the construction and study of a single robot, and there are no plans for the robot to have any encounters with others of its kind, or with humans. Without any social dimension to its existence, and without language, could such a robot ever achieve a consciousness intelligible to us?
After training as a production engineer, Owen became interested in psychology, graduating from Nottingham University in 1969 and going on to teach experimental methods at Edinburgh University Psychology Department for three years. He then moved into commerce, and then back into engineering, working as Special Projects Manager for Dellfield Digital Ltd., a telecomms start-up (1983-87), and as Senior Production Engineer for Renishaw Metrology Ltd. (1987-1990). In 1988 Owen began to take an interest in behaviour-based robotics; in 1990, this work won him a Small Firms Merit Award for Research and Technology from the Department of Trade and Industry, and he then set up a consultancy company, Artificial Life Technologies.
He worked on a variety of projects, notably the MARCUS prosthetic hand (a European Community TIDE project), before moving to the University of the West of England, Bristol (UWE) to help set up the Intelligent Autonomous Systems Engineering Laboratory in 1993. For 1993-94 Owen was a Visiting Research Fellow at the Zentrum für interdisziplinäre Forschung at the University of Bielefeld, Germany, and in 1997 was Visiting Associate in Electrical Engineering at Caltech, working in the Microsystems Laboratory. In 1998 he was appointed Reader in Electrical Engineering at UWE, and in 1999 he spent a year as Principal Research Scientist at the CyberLife Institute (now CyberLife Research) before returning to Caltech in 2000. Owen's next port of call was the legendary Starlab in Brussels, where he spent several months as Chief Scientist before joining Essex in October 2001.
Synthetic Performers with Embedded Audio Processing
Wednesday 26 October, 4pm
Prof. Barry Vercoe
Professor of Media, Arts & Sciences, MIT; Assoc Academic Head & Founding Member, MIT Media Lab.
This talk will trace the development of advanced human-computer music interaction, from the author's first developments in Paris (IRCAM) in the 80s to the world's first software-only professional audio system released in Japan in 2002. Digital processing of audio changed in 1990 when it first became real-time on desk-top machines. Human interaction previously constrained to custom hardware was suddenly possible on general-purpose machines, and the 90s saw new experiments in gestural control over complex audio effects. The pace of development outpaced Moore's Law when cross-compilers allowed rapid prototyping of audio structures on DSPs using large amounts of processor power. An interactive music performance system using hand-held devices running real-time audio software will be demonstrated. The talk will also be illustrated by other examples of music research at the MIT Media Lab, including the Audio Spotlight, applications of cognitive audio processing, compositions from the Experimental Music Studio, soundtrack from a recent Hollywood movie, and a new method of music recommendation on the Internet.
Barry Vercoe is Professor of Music and Professor of Media Arts and Sciences at MIT, and Associate Academic Head of the Program in Media Arts & Sciences. He was born and educated in New Zealand in music and in mathematics, then completed a doctorate in Music Composition at the University of Michigan. In 1968 at Princeton University he did pioneering work in the field of Digital Audio Processing, then taught briefly at Yale before joining the MIT faculty in 1971. In 1973 he established the MIT computer facility for Experimental Music -- an event now commemorated on a plaque in the Kendall Square subway station. During the '70s and early '80s he pioneered the composition of works combining computers and live instruments. Then on a Guggenheim Fellowship in Paris in 1983 he developed a Synthetic Performer -- a computer that could listen to other performers and play its own part in musical sync, even learning from rehearsals.
In 1992 he won the Computer World / Smithsonian Award in Media Arts and Entertainment, and recently gained the 2004 SEAMUS Lifetime Achievement Award. Professor Vercoe was a founding member of the MIT Media Laboratory in 1984, where he has pursued research in Music Cognition and Machine Understanding. His several Music Synthesis languages are used around the world, and a variant of his Csound and NetSound languages has recently been adopted as the core of MPEG-4 audio -- an international standard that enables efficient transmission of audio over the Internet. At the Media Lab he currently directs research in Machine Listening and Digital Audio Synthesis (Music, Mind and Machine group), and is Associate Academic Head of its graduate program in Media Arts and Sciences.
Visual Routes to Knowledge and Action
Thursday 3 November, 4pm
Melvyn A. Goodale
The University of Western Ontario, London Ontario
Visual systems first evolved not to enable animals to see, but to provide distal sensory control of their movements. Vision as 'sight' is a relative newcomer on the evolutionary landscape, but its emergence has enabled animals to carry out complex cognitive operations on representations of the world. In the more ancient visuomotor systems, there is a basic isomorphism between visual input and motor output. In representational vision, there are many cognitive 'buffers' between input and output. Thus, in this system, the relationship between what is on the retina and the behaviour of the organism cannot be understood without reference to other mental states, including those typically described as "conscious". The duplex nature of vision is reflected in the organization of the visual pathways in the primate cerebral cortex.
The dorsal 'action' stream projecting from primary visual cortex to the posterior parietal cortex provides flexible control of more ancient subcortical visuomotor modules for the control of motor acts. The ventral 'perceptual' stream projecting from the primary visual cortex to the temporal lobe provides the rich and detailed representation of the world required for cognitive operations. This might sound rather like Cartesian dualism: the existence of a conscious mind separate from a reflexive machine. But the division of labour between the two streams has nothing to do with the kind of dualism that Descartes proposed. Although the two kinds of visual processing are separate, both are embodied in the hardware of the brain. Moreover, there is a complex but seamless interaction between the ventral and the dorsal streams in the production of adaptive behavior. The selection of appropriate goal objects depends on the perceptual machinery of the ventral stream, while the execution of a goal-directed action is mediated by dedicated on-line control systems in the dorsal stream and associated motor areas.
Moreover, as I will argue, the integration of processing in the two streams goes well beyond this. The dorsal stream may allow us to reach out and grasp objects with exquisite ease, but it is trapped in the present. Evidence from the behaviour of both neurological patients and normal observers shows that, by itself, the dorsal stream can deal only with objects that are visible when the action is being programmed. The ventral stream, however, allows us to escape the present and bring to bear information from the past, including information about the function of objects, their intrinsic properties, and their location with reference to other objects in the world. Ultimately then, both streams contribute to the production of goal-directed actions.
Professor Melvyn A. Goodale, (Ph.D., F.R.S.C), currently works with the Group on Action and Perception at The University of Western Ontario, London Ontario.
Visual Illusions & Actions: A little less conscious perception, a little more action
Wednesday 16 November, 4pm
Dr. Gregory DiGirolamo
Cambridge University, UK.
Considerable debate surrounds the extent and manner in which motor control is, like perception, susceptible to visual illusions. Using the Brentano version of the Müller-Lyer illusion, we measured the accuracy of voluntary (anti-saccadic eye movements and ballistic arm movements) and reflexive (pro-saccadic eye movements) actions to the endpoints of equal-length line segments that appeared different (Exps. 1 & 3) and different-length line segments that appeared equal (Exps. 2 & 4). From these data, I will argue that the representations underlying perception and action interact and influence even the most reflexive movements, with a stronger influence for movements that are consciously controlled.
Dr. DiGirolamo started his career in cognitive neuroscience as an undergraduate working with Stephen Kosslyn (at Harvard) doing PET and fMRI of visual mental imagery. He then went on to do his Ph.D. with Mike Posner (at the University of Oregon) studying visual attention. He followed this with a brief (six-month) post-doc with Art Kramer and Gordon Logan (at the Beckman Institute at the University of Illinois) studying visual attention. Dr. DiGirolamo then landed at Cambridge, where he has been for the past five years. Dr. DiGirolamo is a University Lecturer in the Department of Experimental Psychology, and a fellow of medical sciences at Jesus College, Cambridge.
Application of the Fisher Rao metric to structure detection
Wednesday 23 November, 4pm
Prof. Steve Maybank
School of Computer Science and Information Systems, Birkbeck, University of London, UK.
Many image structures in computer vision form parameterised families. For example, the set of all lines in an image forms a two-dimensional family in which each line not containing the origin is specified uniquely by the coordinates of the point on the line nearest to the origin. In order to locate a particular image structure, measurements are made in an image and the structure most compatible with the measurements is found. The parameter space for the image structures can be given a metric which is derived from the error model for the measurements. Two structures are close together in this metric if they are hard to distinguish given a measurement. The metric is known in statistics as the Fisher-Rao metric. In most cases the Fisher-Rao metric cannot be found in closed form. However, if the noise level is low, then the Fisher-Rao metric can be approximated by the leading-order term in an asymptotic expansion of the metric. In many cases of practical interest this leading-order term can be obtained in closed form, or in terms of well-known and easily computed functions. Examples of such cases include lines, ellipses and projective transformations of the line.
The main application of this approximation to the Fisher-Rao metric is that it gives for the first time an easily computed measure of the complexity of structure detection in images. This measure is equal to the volume of the parameter space under the Fisher-Rao metric divided by the volume of a region of the parameter space corresponding to a single distinguishable structure. If this ratio of volumes is large then structure detection is difficult because there is a large number of distinguishable structures. If the ratio of volumes is small then structure detection is easy because there is only a small number of distinguishable structures.
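In standard notation, the metric and the proposed complexity measure can be written as follows. This uses only the generic Fisher-Rao definition; the low-noise asymptotic expansion discussed in the talk is not reproduced here:

```latex
% Fisher-Rao metric on a parameterised family of measurement
% densities p(x | theta):
\[
  g_{ij}(\theta) \;=\;
  \mathbb{E}\!\left[
    \frac{\partial \log p(x \mid \theta)}{\partial \theta^{i}}\,
    \frac{\partial \log p(x \mid \theta)}{\partial \theta^{j}}
  \right].
\]
% Complexity of detection: metric volume of the whole parameter space
% Theta divided by the volume occupied by one distinguishable structure:
\[
  C \;=\;
  \frac{\int_{\Theta} \sqrt{\det g(\theta)}\; d\theta}
       {V_{\text{single structure}}}.
\]
```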
Steve Maybank is Professor in the School of Computer Science and Information Systems at Birkbeck College, University of London. He is also Visiting Professor at the Institute of Automation, Chinese Academy of Sciences and Member of the Academic Committee of State Key Laboratory for Image Processing and Intelligent Control, Huazhong University of Science and Technology, Wuhan, China. Steve is Editor for Computing and Informatics, Associate Editor for Acta Automatica Sinica and Member of the Editorial Board for the International Journal of Computer Vision.
Top-down processes in visual selection
Wednesday 30 November, 4pm
Prof. Glyn Humphreys
School of Psychology, University of Birmingham, Birmingham, UK.
Traditionally, several pieces of evidence have been used to argue for the primary role of bottom-up saliency in visual selection, including search asymmetries, visual grouping effects and pop-out effects. I will present recent evidence that, in each of these instances, processing can be modulated by top-down knowledge - either the 'template' of the target or particular items held in working memory. Neuropsychological studies with patients showing extinction further show that the match between bottom-up information and information held in working memory enables a stimulus to pass into conscious awareness.
Glyn Humphreys is Professor of Cognitive Psychology and currently Head of the School of Psychology at the University of Birmingham, UK.
Algebraic Semiotics, Ontologies, and Visualisation of Data in Animations
Wednesday 7 December, 4pm
Dr. Grant Malcolm
University of Liverpool, Liverpool, UK.
Ferdinand de Saussure highlighted both the arbitrary nature of signs (why should the sound /kat/ either spell "cat" or mean 'feline'?), and the systematic way in which signs, however arbitrary, function: signs convey meaning by (arbitrary) convention, yet they obtain their meaning by contrast with other signs. This talk will explore the ways in which the structure and systematicity of signs can be exploited in developing animations of algorithms.
Goguen has proposed algebraic semiotics as a way of capturing the systematic way that signs are organised and used to build higher-level signs. We apply algebraic semiotics to the study of user-interface design, and show how relationships between signs can indicate the effectiveness of a user interface. Then we view animations of algorithms as user interfaces, and relate their sign systems to ontologies describing the concepts underlying the algorithm. Dynamic aspects of animations can then be seen as relationships between signs and entities in the ontology, which can be further illuminated by narratology and the development of conceptual spaces.
Dr. Malcolm is lecturer in Computer Science at the University of Liverpool. His research interests are Algebraic Semiotics; Biologically Motivated Computing; Ontologies and Hidden Algebra.
Architectures for human-like machines
Wednesday 19 January, 4pm
Prof. Aaron Sloman
University of Birmingham
Much discussion of the nature of human minds is based on prejudice or fear of one sort or another -- sometimes arising out of 'turf wars' between disciplines, sometimes out of dislike of certain theories of what we are, sometimes out of religious concerns, sometimes out of ignorance of what has already been learnt in various disciplines, sometimes out of over-reliance on common sense and introspection, or what seems 'obviously' true. But one thing is clear to all: minds are active, changing entities: you change as you read this abstract and you can decide whether to continue reading it or stop here. That is, minds are active machines of some kind. So I propose that we investigate, in a dispassionate way, the variety of design options for working systems capable of doing things that minds can do, whether in humans or other animals, in infants or adults, in normal or brain-damaged people, in biological or artificial minds.
We can try to understand the trade-offs between different ways in which complete systems may be assembled that can survive and possibly reproduce in a complex and changing environment (including other minds). This can lead to a new science of mind in which the rough-hewn concepts of ordinary language (including garden-gate gossip and poetry) are shown not to be wrong or useless, but merely stepping stones to a richer, deeper collection of ways of thinking about what sorts of machines we are, and might be. This will also help to shed new light on the recent (confused) fashion for thinking that emotions are 'essential' for intelligence. It should also help us to understand how the concerns of different disciplines, e.g. biology, neuroscience, psychology, linguistics, philosophy, etc., relate to different layers of virtual machines operating at several different levels of abstraction, as also happens in computing systems.
Aaron Sloman is Professor of Computer Science at the University of Birmingham. He was born in Southern Rhodesia in 1936, and went to school and university in Cape Town, where he read mathematics and physics. After graduating with a first-class degree, Aaron obtained a Rhodes Scholarship and went up to Oxford to study mathematics, but eventually found philosophy more tempting. He started teaching Philosophy at Hull University in 1962, then moved to Sussex in 1964. He later spent the years 1972-3 in Edinburgh as Senior Visiting Fellow, and was converted to "AI as the best way to do philosophy." He returned to Sussex in October 1973, and helped (with Max Clowes, Margaret Boden, Alistair Chalmers, and others) to develop a Cognitive Studies Programme in the School of Social Sciences which eventually grew into the Sussex School of Cognitive and Computing Sciences. Over the years he has dabbled in vision, the study of forms of representation, motivation and emotion, architectures for complete agents, and good ways to teach novices programming and AI.
Redefining implicit and explicit memory: the electrophysiology and functional neuroanatomy of priming, remembering, and control of retrieval
Wednesday 26 January, 4pm
Dr. Alan Richardson-Klavehn
Department of Psychology, Goldsmiths College, University of London
The cognitive neuroscience of human memory has been dominated by distinctions between forms of memory that involve different kinds of consciousness. Foremost is the distinction between explicit and implicit memory. Explicit memory involves conscious remembering of prior episodes, often via intentional retrieval of those episodes, whereas implicit memory involves influences of prior episodes on current behaviour without intentional retrieval, and sometimes without conscious remembering of those prior episodes. Many studies of implicit memory have focused on priming, the facilitated processing of stimuli as a function of prior exposure, an important mechanism by which memory facilitates perception. It has been proposed that priming and explicit memory depend on distinct neural systems. Although there is support for this view, a separation at the neural level has not yet been firmly established owing to conceptual and methodological ambiguities in most prior studies of brain activity.
Typically these have compared incidental tests (in which participants respond with the first item coming to mind) with intentional tests (in which participants try to retrieve studied items), or they have only used incidental tests. Brain activity in incidental tests can, however, reflect not only priming, but also unintentional conscious remembering of prior episodes (unintentional explicit memory), and sometimes "contamination" by intentional retrieval of prior episodes. Moreover, brain activity in intentional tests reflects not only explicit memory for specific episodes but also the general intention to retrieve prior episodes.
Addressing these ambiguities has awaited a theoretical approach that distinguishes implicit and explicit memory for specific episodes from retrieval intention, and, more specifically, unintentional implicit memory from unintentional and intentional explicit memory. The approach prescribes a novel behavioural paradigm that permits this separation, which we have implemented with electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI). Our results provide firm evidence that priming and explicit memory are neurally separable at encoding and at retrieval, both in electrophysiology and in functional neuroanatomy. They also show that retrieval intention engages neural processes and structures distinct from those involved in both priming and explicit memory. These results have important implications for theories of memory and consciousness, which often equate consciousness with control.
Alan Richardson-Klavehn received his BA from the University of Oxford and his PhD in psychology from the University of California at Los Angeles, where he worked with Robert A. Bjork. He is a Senior Lecturer in Psychology at Goldsmiths, and holds an International Leibniz Fellowship at the Centre for Advanced Imaging at the University of Magdeburg, Germany. His research has mainly focused on the relationship between consciousness and memory, which he has explored in recent years using multimodal brain-activity measurements. His work has appeared or is in press in journals including Annual Review of Psychology, Journal of Experimental Psychology, Psychological Science, Journal of Cognitive Neuroscience, Neuroimage, and Proceedings of the National Academy of Sciences USA. He has authored chapters on long-term memory in the Nature Publishing Group's Encyclopedia of Cognitive Science, and the Oxford Handbook of Memory.
Colour representation in humans and machines
Wednesday 2 February, 4pm
Prof. Steve Westland
School of Design, University of Leeds
Colour perception in humans is a three-dimensional percept based upon the responses of three classes of light-sensitive cells (or cones) in the human retina. Digital colour-image representation in machines is also three-dimensional and is based upon somewhat arbitrary red (R), green (G) and blue (B) signals. This talk will describe the relationship between colour representation in humans and machines and will discuss how colour fidelity can be maintained in common imaging devices such as cameras, display devices and printers. Current issues in colour management will be outlined, including spectral colour imaging.
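As a concrete illustration of the human/machine link described above, here is a minimal sketch mapping a device RGB triplet to approximate cone responses. It assumes the sRGB standard and the Hunt-Pointer-Estevez cone matrix, which are common choices but not necessarily the ones used in the talk:

```python
# Minimal sketch: sRGB triplet -> approximate cone (LMS) responses,
# going via CIE XYZ. Matrix values are the standard sRGB->XYZ (D65)
# and Hunt-Pointer-Estevez XYZ->LMS matrices.

def srgb_to_lms(r: float, g: float, b: float) -> tuple:
    def linearize(c):  # undo the sRGB display gamma
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = map(linearize, (r, g, b))
    # linear sRGB -> CIE XYZ, D65 white point
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # CIE XYZ -> cone responses (Hunt-Pointer-Estevez, equal-energy norm)
    l = 0.38971 * x + 0.68898 * y - 0.07868 * z
    m = -0.22981 * x + 1.18340 * y + 0.04641 * z
    s = z
    return l, m, s

print(srgb_to_lms(1.0, 0.0, 0.0))  # long-wavelength cones dominate for red
```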
Stephen Westland obtained a BSc in Colour Chemistry and a PhD in Colour Physics from the University of Leeds. He worked for four years as a colour physicist at Courtaulds Research before joining Keele University in 1990 as a post-doc to work with Professor David Foster in Computational Neuroscience. He later became a lecturer in Colour Vision from 1994 until 1999 in the Institute of Communication and Neuroscience. In 1999 he was appointed as a Reader in Colour Imaging at the Colour and Imaging Institute at Derby University. In 2003 he was appointed as Professor in Colour Science and Technology in the School of Design at the University of Leeds. His current research interests include colour measurement, colour and spatial vision, spectral imaging, and image-device characterization. Professor Westland is a member of the Midland Vision Group, the Applied Vision Association, the Colour Measurement Committee (CMC) and a member of the Colour Group (UK) Committee. Since April 1996 he has been a director of Colourware Ltd. In 1998 he was invited to join the International Editorial Board of the Journal of the Society of Dyers and Colourists.
Musical communication and meaning
Wednesday 9 February, 4pm
Prof. Geraint Wiggins
Goldsmiths College, University of London, UK
I will introduce a range of issues related to the study of human musical behaviour in a context of cognitive science, and, specifically, from the point of view of computational linguistics. I will discuss, and give examples of, different aspects of musical communication, and attempt, where appropriate, to contrast this with common practice in computational linguistics. No prior musical knowledge will be assumed; indeed, this presentation is intended to serve as an introduction to some of the issues involved in understanding musical communication and meaning. There will be no "difficult" musical examples to listen to, though a baby and a group of guinea pigs will probably feature at some point.
Until he moved to City University, London in 1999, Geraint worked at the University of Edinburgh, as a computing officer, then as a research fellow on the ESPRIT Compulog II project (on Logic Program Synthesis and Transformation in the Mathematical Reasoning Group) and, finally, following a year of consultancy work during which he was involved with the successful AME project at the Institute of Ecology and Resource Management, as a lecturer in Artificial Intelligence. His first PhD was also from Edinburgh, in Computational Linguistics. Prof. Wiggins has served on two of the UK Technology Foresight Creative Digital Media Task Groups, Community and Education, and on the Creative Digital Media Subgroup. From 2000-2003 he was chair of the SSAISB, the UK learned society for AI/Cognitive Science.
Vision, structure and the processing of British Sign Language
Wednesday 23 February, 4pm
Prof. Bencie Woll
City University, London, UK.
This talk will introduce the different typological properties of signed and spoken languages and how these reflect properties of the articulators and perceptual systems used to process these two types of language. Sign languages are natural languages, created and used by deaf communities throughout the world. They are not derived from or related to the spoken languages of the hearing communities that surround them. Their structures reflect the options available to visual spatial languages, with a lexicon that exhibits visual motivation, and a grammar that exploits the possibility of placing and moving multiple articulators through space. In contrast to processing of an auditory communication system, the processing features of a visual communication system include slow temporal resolution, an asymmetric feedback loop, and large and visible articulators. Sign languages exploit these features in their structure and therefore provide insight into which features of language are universal and which are modality-specific. Illustrations will be provided from linguistic, functional imaging, and psycholinguistic research, and implications for computer modelling of language will be discussed.
Bencie Woll came to the Department of Language and Communication Science at City University London in 1995 to take up the newly created Chair in Sign Language and Deaf Studies, the first chair in this field in the UK. Before that Bencie worked on language acquisition and then was a co-founder of the Centre for Deaf Studies, pioneering research on the linguistics of BSL and on Deaf Studies as an academic discipline. Bencie's research and teaching interests embrace a wide range of topics related to sign language, including the linguistics of British Sign Language (BSL) and other sign languages, the history and sociolinguistics of BSL and the Deaf community, the development of BSL in young children, and sign language and the brain. In recent years she has begun to look specifically at acquired and developmental sign language impairments. Professor Woll co-authored "Sign Language: the study of Deaf People and their Language" with Jim Kyle, and "The Linguistics of BSL: an Introduction" (CUP) with Rachel Sutton-Spence, which was the winner of the 1999 Deaf Nation Award and 2000 BAAL Book Prize.
Measuring cognitive control via task-switching
Wednesday 2 March, 4pm
Dr. Guy Mizon
School of Psychology, University of Exeter, UK
When we switch between two (or more) tasks in rapid alternation, we are slower and less accurate than when we repeat the same task. If an approaching task change is signalled in advance by a cue, these switching costs are reduced, suggesting that we are able to re-configure our mental set in advance. This re-configuration process has been studied as an example of our control over our own mental processes. In this talk, I will discuss recent challenges to the way task-set reconfiguration is measured, and I will present a new method we have developed to answer these challenges. I will also show how this new method is being used to investigate the electrophysiological correlates of task-set reconfiguration.
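For readers unfamiliar with the paradigm, the two basic quantities involved can be illustrated with a toy calculation; the reaction times below are hypothetical:

```python
# Hypothetical mean reaction times (ms) in a cued task-switching design.
rt = {
    ("repeat", "short_cue_interval"): 620,
    ("switch", "short_cue_interval"): 770,
    ("repeat", "long_cue_interval"): 600,
    ("switch", "long_cue_interval"): 660,
}

for interval in ("short_cue_interval", "long_cue_interval"):
    cost = rt[("switch", interval)] - rt[("repeat", interval)]
    print(f"switch cost with {interval}: {cost} ms")
# The reduction in switch cost with a longer cue-target interval is the
# usual behavioural signature of advance task-set reconfiguration.
```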
Guy Mizon is currently working as a post-doctoral research fellow with Stephen Monsell in the School of Psychology at the University of Exeter. Prior to this, he worked with Nilli Lavie as a PhD student in the Department of Psychology at University College London. His research interests include task-switching, cognitive control, selective attention and inhibition.
Social tapestries: excavating social knowledge in civil society
Wednesday 9 March, 4pm
Giles Lane
Proboscis and the London School of Economics, London, UK
Social Tapestries is a research programme run by Proboscis investigating the uses and impact of local knowledge mapping and sharing through the convergence of new mobile technologies and geographic information systems. The programme is an umbrella for a series of discrete projects and experiments in different social and cultural contexts that attempt to understand more about how these technologies can benefit communities, or have an adverse impact on them. The project builds upon a two-year R&D project called Urban Tapestries, which explored why, what and how people could use spatial annotations for, and developed a software platform to test the findings.
Giles Lane is co-director and founder of Proboscis, a non-profit creative studio based in London. Giles leads Proboscis' research programme, SoMa (social matrices), as well as specific projects and activities such as Urban Tapestries; Mapping Perception; Private Reveries, Public Spaces; Peer2Peer; DIFFUSION and others. Giles is currently Associate Research Fellow in Media & Communications at the London School of Economics and previously was a Research Fellow at the Royal College of Art, first in the Computer Related Design Research Studio, and latterly in the School of Communications.
Effects of attention and acetylcholine on neuronal activity in primate V1
Wednesday 16 March, 4pm
Alex Thiele
University of Newcastle, UK
Attention enables the most refined aspects of neural processing to tamper with the most basic aspects, and thereby exert a critical and pervasive control. Major advances have been made throughout the last decade in understanding the effects of attention on neuronal processing, but the mechanisms mediating these effects are still unknown. We have demonstrated psychophysically that attention alters our perception of external events, such that expectations and assumptions about the structure of the world become less influential. This reduced influence might (in part) be mediated by action of the cholinergic system. Our electrophysiological data obtained under conditions of increased attention and increased acetylcholine support this view. Thus attention may exert its critical control over the flow of information by changing neuronal properties such that they rely mostly on information coming in directly from the senses, and less on information coming from higher-order areas.
Dr. Alex Thiele is currently Reader in fMRI and Vision Sciences, based in the Henry Wellcome Building at the University of Newcastle, Newcastle upon Tyne. His current research encompasses work in: High field fMRI; Visual motion processing; Neuropharmacology of visual attention; Attention and visual inference; Neural synchrony and visual processing.
Human brain stimulation and visual search
Wednesday, 6 October, 4pm
Dr. Vince Walsh
ICN, University College London
Successful search for a target in a visual scene requires many components, including orienting, detecting the target and rejecting distractors. Performance in search is affected by the number of targets and distractors, their similarity, motion in the display, and the location and viewing history of the stimuli, among other factors. A task with so many stimulus variables and behavioural or neural responses may require different brain areas to interact in ways that depend on specific task demands. Until recently the right posterior parietal cortex was envisaged as having a pre-eminent role in visual search. Based on recent physiological and brain imaging evidence, and on a programme of magnetic stimulation studies designed to compare directly the contributions of the parietal cortex and the human frontal eye fields in search, we have generated an account of similarities and differences between these two brain regions. The comparison suggests that the frontal eye fields are important for some aspects of search previously attributed to the parietal cortex and that accounts of the cortical contributions to search need to be reassessed in the light of these findings.
Vince Walsh is Reader in Psychology and a Royal Society Research Fellow at University College London. His research interests include: the roles of the Frontal eye fields, parietal cortex and extrastriate cortex in visual search; interactions between different cortical visual areas, in particular extrastriate cortex and V1; the processing of temporal information for the experience of and action in time; transcranial magnetic stimulation (methodology and technical).
Minds, machines and Turing
Wednesday, 13 October, 4pm
Prof. Stevan Harnad
Department of Electronics and Computer Science, Southampton University
Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 -- the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction (T4), to total indistinguishability in every empirically discernible respect (T5). This is a "reverse-engineering" hierarchy of (decreasing) empirical underdetermination of the theory by the data. Level t1 is clearly too underdetermined, T2 is vulnerable to a counterexample (Searle's Chinese Room Argument), and T4 and T5 are arbitrarily overdetermined. Hence T3 is the appropriate target level for cognitive science. When it is reached, however, there will still remain more unanswerable questions than when Physics reaches its Grand Unified Theory of Everything (GUTE), because of the mind/body problem and the other-minds problem, both of which are inherent in this empirical domain, even though Turing hardly mentions them.
Stevan Harnad was born in Hungary, did his undergraduate work at McGill University and his graduate work at Princeton University, and is currently Professor of Cognitive Science at Southampton University. His research is on categorisation, communication and cognition. He is Founder and Editor of Behavioral and Brain Sciences (a paper journal published by Cambridge University Press), Psycoloquy (an electronic journal sponsored by the American Psychological Association) and the CogPrints Electronic Preprint Archive in the Cognitive Sciences.
Object based attention
Tuesday, 19 October, 4pm
Venue: WB208
Dr. David Soto
Departamento de Psicoloxia Social y Basica, Facultad de Psicoloxia, University of Santiago de Compostela, Santiago de Compostela
There is now much experimental evidence supporting the idea that visual attention can be deployed in at least two ways: one space-based and the other object-based. However, it is not clear whether space- and object-based attention work in an integrated way within the visual system. In this talk, we present two experiments in which we compare both components of attention within a cueing paradigm. Participants had to discriminate the orientation of a line that appeared within one of four moving circles, differing in colour. A cue appearing close to one of the four circles indicated the location or circle where the target stimulus was likely to appear. Spatial and object cueing effects were observed: responses were faster when the target appeared either at the pre-cued location or within the pre-cued object. In addition, the object-cueing effect occurred only when the cue was spatially invalid and not when it was spatially valid. These results suggest that object- and space-based attention interact, with selection by location being primary over object-based selection.
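The logic of the design can be summarised as 2 x 2 arithmetic on mean reaction times. The numbers below are hypothetical and serve only to illustrate the reported pattern, with object cueing appearing only on spatially invalid trials:

```python
# Hypothetical mean RTs (ms): spatial cue validity crossed with
# object cue validity, illustrating the 2 x 2 logic of the paradigm.
rt = {
    ("valid_loc", "valid_obj"): 480,
    ("valid_loc", "invalid_obj"): 483,    # little object effect here
    ("invalid_loc", "valid_obj"): 520,
    ("invalid_loc", "invalid_obj"): 555,  # object cueing emerges here
}

spatial_effect = (
    (rt[("invalid_loc", "valid_obj")] + rt[("invalid_loc", "invalid_obj")]) / 2
    - (rt[("valid_loc", "valid_obj")] + rt[("valid_loc", "invalid_obj")]) / 2
)
object_effect_invalid_loc = (
    rt[("invalid_loc", "invalid_obj")] - rt[("invalid_loc", "valid_obj")]
)
object_effect_valid_loc = (
    rt[("valid_loc", "invalid_obj")] - rt[("valid_loc", "valid_obj")]
)
print(spatial_effect, object_effect_invalid_loc, object_effect_valid_loc)
# An object effect present only when the spatial cue is invalid is the
# signature of location-primary selection described in the abstract.
```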
David Soto's research interests are in the field of visual cognitive neuroscience. Much of his work has focused on the type of representation on which visual selection is carried out. He has been concerned with the role of spatial and object-based factors on attentional deployment and their function within the visual system. What is the function of object-based attention for visuomotor processing? Do both attention systems work independently or in an interactive manner within the visual system? Also, he is concerned with the inter-relations between both forms of attention and the processes that control attention (e.g. working memory). His future work is directed to find out the neural loci of both attention systems and their linked control structures using neuropsychological and brain imaging approaches.
Musical similarity, structure and expression
Wednesday, 27 October, 4pm
Dr. Michael Casey
Goldsmiths College, University of London, UK
We introduce elements of research into machine understanding of music, modelling aspects of both the human auditory system and cognitive processes. We discuss musical similarity from multiple perspectives and show its relevance to analysing musical structure and expression. In the second part of the talk, we present recent work on detecting and recognising musical performance features in audio recordings, such as trills, appoggiaturas and chord-spreadings. The goal of our work is the comparative analysis of musical performances.
Michael Casey is a Senior Lecturer in the Department of Computing at Goldsmiths College where he is a member of a large group working in Computational Creativity. His current research includes automatic segmentation and indexing of audio for creative media applications (EPSRC GR/S84750/01). Michael completed his Ph.D. in "Statistical Basis Methods for Structured Audio" at the Massachusetts Institute of Technology Media Laboratory in 1998. Since then he has been an editor and co-chair for the MPEG-7 International Standard for Multimedia Content Description. In addition to scientific interests, Michael is a composer and has received prizes from the Bourges and Newcomp music festivals.
Mechanical bodies, mythical minds; dancing with pixies
Wednesday, 10 November, 4pm
Dr. Mark Bishop
Goldsmiths College, University of London, UK.
A cursory examination of the history of Artificial Intelligence (AI) serves to highlight several strong claims from its researchers, especially in relation to the populist form of computationalism that holds that 'any suitably programmed computer will instantiate genuine conscious mental states purely in virtue of carrying out a specific series of computations'. The argument to be presented in this talk is grounded upon ideas first outlined in Hilary Putnam's 1988 monograph, "Representation & Reality", then developed by the author in two papers, "Dancing with Pixies" and "Counterfactuals Cannot Count". This work further extends these ideas to form a novel thesis against computationalism which, if correct, has important implications for Cognitive Science, both with respect to the prospect of ever developing a computationally instantiated consciousness and more generally for any computational (purely functional) explanation of mind.
Dr. Bishop is Reader in Computing at Goldsmiths College, University of London. He has published extensively both in the field of Philosophy of Artificial Intelligence and in Neural Computing. He recently co-edited a major retrospective volume on the influence of John Searle on Artificial Intelligence ('Views into the Chinese Room', Preston, J., & Bishop, J.M., OUP). The project involved collaboration with many eminent philosophers and cognitive scientists including the Nobel Laureate Herbert Simon; Sir Roger Penrose; John Taylor; Kevin Warwick; Terry Winograd; Stevan Harnad; John Searle; Ned Block; John Haugeland and George Rey.
Attention and context integration in early vision
Wednesday, 17 November, 4pm
Dr. Elliot Freeman
ICN, University College London, UK
What we perceive from moment to moment depends not only on what we are looking at, but also on the surrounding visual context and our current behavioural context. For example, such basic processes as those involved in grouping small oriented segments (Gabor patches) into the perception of a global contour can depend strongly on which parts of the visual context are attended, and also on the specific task that is being performed on them (Nat. Neuro. 4 no.10 p.1032). I will review my recent psychophysics data on this phenomenon, and I will also briefly introduce some new demonstrations from ongoing studies of context effects in ambiguous motion. Such examples may help to characterise the stimulus-driven and attentionally-driven mechanisms by which perceptual conflicts are resolved in context.
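Since the abstract turns on Gabor patches, a minimal definition may help: a Gabor patch is a sinusoidal luminance grating windowed by a Gaussian envelope. The sketch below uses only this standard formula; the parameter values are arbitrary:

```python
# Minimal sketch of the stimulus element named in the abstract:
# a Gabor patch is a sinusoidal carrier inside a Gaussian envelope.
import math

def gabor(x: float, y: float, sigma: float = 1.0,
          freq: float = 1.0, theta: float = 0.0, phase: float = 0.0) -> float:
    """Luminance of a Gabor patch at (x, y); theta is grating orientation."""
    xr = x * math.cos(theta) + y * math.sin(theta)  # rotated coordinate
    envelope = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
    carrier = math.cos(2.0 * math.pi * freq * xr + phase)
    return envelope * carrier

# Contour-grouping displays tile many small oriented patches; here is
# one sample along a row through a single patch's centre:
print([round(gabor(x * 0.5, 0.0), 3) for x in range(-4, 5)])
```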
Dr. Elliot Freeman is a Research Fellow in the Department of Psychology at University College London. His research interests span: attention and perceptual grouping; task-dependent processing in vision; pupil size as an index of visual information processing; and the relationship between modal/a-modal completion processes and focal attention.
Is consciousness worth it?
Wednesday, 24 November, 4pm
Prof. John Taylor
Dept of Mathematics, King's College, University of London, UK
The answer to the question of the title depends on what the functions of consciousness are supposed to be. Some suggest there are none, and that consciousness gets in the way of creativity, for example. Others disagree, and regard consciousness as the supreme controller in the brain. I will argue for the second point of view, and develop support for this by describing recent brain results indicating that there are at least two main functions of consciousness (and two sorts of consciousness), each crucial to our survival. I will conclude by considering how one might develop these functions and have a bright and burnished consciousness.
For over 20 years John Taylor has been Professor of Mathematics at King's College, London. But the title belies the breadth of his interests. He trained as a physicist and spent much of his career in conventional research, investigating the fundamental properties of matter, from quarks to black holes. But he's long nurtured a deep interest in the workings of the brain, which have now become the centrepiece of his investigations. The team that Taylor has assembled at King's is studying some of the most far-reaching ideas to emerge from neural networks: computer simulations of simple networks of nerve cells in the human brain.
Human visual cortex and awareness
Wednesday, 1 December, 4pm
Dr. Vince Walsh
ICN, University College London
The role of cortical areas in visual awareness is a subject of much debate. The ability of patients with damage to primary visual cortex (V1) to detect and discriminate stimuli which they do not consciously perceive has led to suggestions that V1 is necessary for conscious visual perception. I will discuss recent results from studies of the blindsight patient GY, in whom we have applied magnetic brain stimulation to intact and "blind" areas of the visual cortex. These studies have established interaction between his blind and seeing fields. I will also discuss brain stimulation studies, in neurologically intact subjects, in which we have established the role of V1 both in terms of the time course and the content of visual awareness. I will argue that V1 is essential to normal visual awareness.
Vince Walsh is Reader in Psychology and a Royal Society Research Fellow at University College London. His research interests include: the roles of the Frontal eye fields, parietal cortex and extrastriate cortex in visual search; interactions between different cortical visual areas, in particular extrastriate cortex and V1; the processing of temporal information for the experience of and action in time; transcranial magnetic stimulation (methodology and technical).