Workshops and Tutorials

Full-day

Giulia D’Angelo, Alexander Hadjiivanov, Matthias Kampa, James Knight, Yulia Sandamirskaya

https://giuliadangelo.github.io/NCDL2025.github.io/

The Neuromorphic Computing for Development and Learning Workshop is a full-day event dedicated to the intersection of neuromorphic computing and developmental learning. This workshop brings together leading researchers, industry professionals, and interdisciplinary experts to explore how biologically inspired computing paradigms, such as event-driven sensing and spiking neural networks (SNNs), enable adaptive and energy-efficient learning in artificial and robotic systems. The program features keynote talks on neuromorphic sensing, contrastive learning through time, spike-based learning, and cognitive architectures for sensorimotor intelligence. Hands-on tutorials will provide participants with practical experience in applying neuromorphic principles to selective attention, perception, and real-time sensorimotor adaptation. Industry representatives will showcase real-world applications, demonstrating how neuromorphic systems enhance efficiency in robotics, autonomous systems, and edge AI. As part of the International Conference on Development and Learning (ICDL), this workshop fosters discussions on lifelong learning, embodiment, and the role of biologically inspired mechanisms in AI. Participants will engage with topics such as synaptic plasticity, bio-plausible backpropagation, continual learning, and closed-loop neuromorphic architectures. The event will also include networking opportunities, poster sessions, and an open call for paper submissions. This workshop is designed for researchers, students, and industry experts interested in the future of adaptive intelligence, bridging neuroscience, AI, and robotics to advance the next generation of efficient, autonomous learning systems.

Sam Wass, John Franchak, Melina Knabe

Children’s development is deeply shaped by their physical and social environments, including the home, school, and neighborhood. Evidence from animal studies, computational models, and environmental interventions suggests that early environments have lasting, complex effects. Yet traditional research methods, such as questionnaires and lab observations, are limited in capturing the dynamic, moment-by-moment realities of real-world behavior. Recent advances in video and wearable sensor technologies are changing this. Researchers can now gather long-form recordings that provide a richer view of how a child’s behavior and environment evolve over time. Cameras and microphones detect subtle changes in light and sound, and AI techniques such as machine learning can process these data to identify objects, scenes, attention patterns, speech, emotions, and social interactions. This dedicated workshop will explore the latest applications of machine learning in infancy research, focusing on perception, sleep, movement, language, emotion, and social interaction. Featuring 15 leading experts, the event is structured around three main talk sessions:

1. What Infants Do: This session will showcase how researchers are using machine learning to analyze infants’ everyday behaviors from wearable sensors, demonstrating how algorithms can detect key behaviors such as sleep states, movement, joint attention, and proximity to caregivers.

2. What Infants Experience: This session focuses on using audio and video data to measure the environments infants are exposed to, such as the quality of caregiver speech, household noise, and visual context.

3. What Infants Learn: The final session will draw on insights from developmental psychology, robotics, and AI, with speakers discussing how combining vision, language, and action fosters more effective learning in both humans and machines.

An accompanying poster session will allow attendees to engage with emerging research and foster collaboration.

Eduardo Camargo, Leonardo De Lellis Rossi, Paula Dornhofer Paro Costa, Ricardo Ribeiro Gudwin, Esther Colombini

As robots and artificial systems become more integrated into daily life, replicating human-like cognitive abilities remains a key challenge in artificial intelligence and cognitive modeling. This tutorial provides a hands-on exploration of cognitive system development using the Cognitive Systems Toolkit (CST), focusing on modeling Piaget’s first sensorimotor substage. Participants will engage in a structured learning process that integrates theoretical foundations with practical implementation, covering key cognitive functions such as sensory processing, perception, and attention. Through guided activities, they will progressively build a functional cognitive model, leveraging CST’s modular and asynchronous architecture. By the end of the tutorial, attendees will have developed a cognitive system that simulates fundamental mechanisms of early cognition, equipping them with essential skills for advancing research in AI, cognitive modeling, and computational neuroscience.

Hadar Karmazyn-Raz, Samantha Wood, Linda Smith, Chen Yu, Minoru Asada, Giorgio Metta, Alice Heine, Josh Bongard, Merkourios Simos, Alexander Mathis

How do learners make sense of the noisy, complex data of the real world? Robots, infants, and chicks may use a common solution: embodied learning. By interacting with their environments, robots, infants, and chicks generate their own diverse, but temporally coherent, training data. Their actions in one moment elicit responses from the environment that in turn impact their subsequent actions. This continuous dynamic feedback loop offers embodied learners the flexibility to overcome the complexity of the real world. Interdisciplinary insights and discussions integrating robotics, cognitive science, and developmental psychology research can lead to more efficient learning systems.

Half-day

Leticia Mara Berto, Marco Gabriele Fedozzi, Renan Lima Baima

https://sites.google.com/view/aicogdev-workshop/

The convergence of learning and development within the design of robust and adaptive cognitive agents is imperative. However, the domains of Developmental Robotics/AI and Cognitive Architectures have seen limited intersection to date. This workshop aims to close this gap by fostering meaningful discussion among the diverse communities associated with these topics. The workshop’s primary objective is to foster new connections between young researchers, sharing their early results and concepts, and renowned experts in the topics presented. We aspire for this venue to pave the way for fruitful interdisciplinary collaborations. Emphasizing a dynamic exchange, the workshop format includes succinct presentations, allowing ample time for questions and discussions. A culminating panel discussion featuring all presenters will further enrich the discourse, promoting a comprehensive exploration of the potential synergies between Developmental Robotics/AI and Cognitive Architectures.

Alejandro Romero, Martin Naya-Varela

The increasing autonomy of robotic systems presents both opportunities and challenges in ensuring their alignment with human values, ethical considerations, and practical purposes. “Alignment through Purpose in Autonomous Robots” aims to explore how defining a clear purpose in autonomous systems can enhance their alignment with human expectations, societal norms, and safety requirements. This workshop will bring together researchers from robotics, artificial intelligence, cognitive science, neuroscience, ethics, and psychology to discuss interdisciplinary approaches to alignment. By integrating perspectives from these diverse fields, we aim to foster a deeper understanding of how purpose-driven development can contribute to the safe and beneficial deployment of autonomous robots.

Jacqueline Fagard, Daniela Corbetta, Jeffrey J Lockman

Our hands are exquisite organs that are used to perform some of the most sophisticated and refined actions. For example, our hands are fundamental for manipulating a variety of tools to achieve complex tasks such as using a pen to write, scissors to cut, or even coordinating the use of forks, knives, or chopsticks to eat. These complex tasks involve mastering the unique properties of the tools at hand, but also require controlling the multiple joints of the hand and fingers, coordinating vision and action, and keeping the goal in mind such that action, goal, and the interrelation between objects can be properly established and orchestrated. The developmental pathway to achieving such a level of manual dexterity begins in infancy with elementary behaviors. The scaffolding process that follows these initial behaviors is a protracted one, involving a step-by-step assembly of rudimentary behaviors that culminates in some of the first basic tool-use behaviors that children perform at around 2 years of age. This process, which takes a relatively long time from infancy to early toddlerhood, also raises engineering challenges in designing robots capable of developing such dexterous skills. The aim of this half-day workshop is to present some of the developmental stages involved in using and gaining control of the hands, from pre-natal behavior to self-touch, to reaching for objects and manipulating tools in early toddlerhood (first 5 talks). Then, some themes related to humanoid robots and infant models of self-touch exploration will be presented in 3 additional talks.

Laura Faßbender, Loïc Goasguen, Francisco Martín López

In recent years, artificial intelligence (AI) has achieved human or even super-human performance in multiple tasks, notably those related to language, computer vision, and abstract reasoning. However, despite this meteoric rise in AI capabilities, many real-world skills that children acquire in their first few years of life remain too challenging for robots. This phenomenon is known as Moravec’s paradox: contrary to humans, artificial agents appear to struggle more with seemingly simple sensorimotor behaviors than with abstract reasoning. This difference is rooted in the crucial role the body plays in learning about the world through active exploration and interaction. This first edition of the workshop on the Development of Embodied Cognition (DECO) at the IEEE International Conference on Development and Learning will explore how the body influences learning and cognitive development from the perspectives of neurobiology, psychology, robotics, and machine learning. To this end, the DECO workshop will include presentations by distinguished scientists in each of these fields, as well as opportunities for young researchers to present their ongoing projects and a panel discussion.

Xavier Hinaut, Laura Cohen, Alexandre Pitti

Given the community’s great interest in the first three editions of the SMILES workshop at previous ICDL sessions (hundreds of registrations when the workshops were online), we propose to continue the workshop at ICDL 2025. The previous year’s workshop website can be accessed at https://sites.google.com/view/smiles-workshop. On the one hand, models of sensorimotor interaction are embodied in the environment and in interaction with other agents. On the other hand, progress in Large Language Models (LLMs) in the past years offers impressive language generation. However, these language and speech models are disembodied in the sense that they are trained on static datasets of text or speech. How can we bridge the gap from low-level sensorimotor interaction to high-level compositional symbolic communication? The SMILES workshop will address this issue through an interdisciplinary approach involving researchers from (but not limited to) these topics: Sensorimotor learning; Emergent communication in multi-agent systems; Chunking of perceptuo-motor gestures (gestures in a general sense: motor, vocal, …); Symbol grounding and symbol emergence; Compositional representations for communication and action sequences; Hierarchical representations of temporal information (temporal sensorimotor data, temporally extended actions, …); Language processing and acquisition in brains and machines; Developmental language learning; Models of animal communication; Understanding composition and temporal processing in neural network models; and Enaction, active perception, and perception-action loops.

Workshops are exciting opportunities to present cumulative work on a focused research topic.

Tutorials are meant to provide insights into specific topics through hands-on training and interactive experiences. 

In the spirit of the conference, we invite applicants to propose interdisciplinary workshops and tutorials that might create links between different research areas, approaches, and methodologies.

Workshops and tutorials can be either full-day or half-day (including oral presentations, posters and live demonstrations) and they will be held on the first day of the conference, September 16, 2025.
