See the full program at https://icdl2025.fel.cvut.cz/workshops/.
Location: Lord Mayor’s Residence
Humans easily learn complex manipulation skills like changing a car tire by observing other people or watching instructional videos, a capability unmatched by current artificial systems. This talk will highlight our progress towards enabling robots to have such visuo-motor learning capabilities.
W1: Learning Locomotion by Co-Evolution of Morphological and Neural Parameters
W2: Camera-Based Assessment of Gendered Toy Preference in Free-Play Parent-Child Interactions
W3: Early Detection of Visual Impairments at Home Using a Smartphone Red-Eye Reflex Test
W4: Purpose-Driven Open-Ended Learning: Biasing OEL through External Guidance
W5: Human Scanpath Prediction in Target-Present Visual Search with Semantic-Foveal Bayesian Attention
W6: Can My Comfort Reflect Your Preferences? An Exploratory Study on Comfort-Driven Architecture in Human-Robot Interaction
W7: Accessible Automation: Evaluating Object Segmentation Solutions for Parent-Child Interaction Research
W8: A Cognitively-Inspired Ensemble Architecture for Robust Decision-Making in Adversarial Environments
W9: Measuring Predictability in the Home Environment Using Daylong Audio Recordings
W10: Variational Adaptive Noise and Dropout towards Stable Recurrent Neural Networks
W11: Free Lunch? Low-Cost Intelligence through Pattern-Guided Exploration
W12: The Role of Conflicting Cues in Children's Partner Selection
W13: A Graph-Theory Approach for Testing Children's Block Construction
W14: Assessing Whisper for Infant Research: Benchmarking ASR Accuracy and Failure Analysis on Caregiver-Infant Interactions
W15: A Motivational-Based Learning Model for Mobile Robots
W16: Multi-Object Graph Affordance Network: Goal-Oriented Planning through Learned Compound Object Affordances
W17: A Dynamical Model of Infant Gaze Behavior Applied to Language Development
W18: Using Convolutional Neural Networks to Analyze Children’s Drawings As Predictors of Cognitive Aptitude
W19: Everyday language environments of young children with Down syndrome: leveraging long form recordings
W20: CuriosityGym – A Unified Framework for Curiosity-Driven Reinforcement Learning
W21: An Evaluation of Statistical Learning to Account for Meta-Cognitive Skills in Artificial Agents
W22: Integration of Heat and Pressure Sensors in Bionic Prostheses to Enhance Human-Robot Interaction
W23: Human-In-The-Loop Lifelong Visuomotor Edge-Case Adaptation
W24: Infants As Active Explorers: Motor Burstiness and Visual Complexity in Early Development
W25: Online Sensorimotor Sequence-Based Learning Using Predictive Trees
W26: Children's Active Play As a Natural Context for Motor Learning: A Literature-Based Conceptual Synthesis
W27: Learning from the User: Machine Learning for Prosthetic Training in Arms
W28: Growth-Based Morphological Development for Learning Stable Gaits in Bipedal Robots
W29: Improving Visual Representation Learning with Eye and Body Movements
W30: Qualitatively Guided Training of Skills
W31: Children’s Perceptions of and Behavioral Responses to Care-Assistive Autonomous Mobile Robots: Do Zoomorphic Looks Matter?
W32: Dynamic Belief Updating under Uncertainty in Late Childhood
W33: Democratising Access to Machine Learning in Developmental Science: The Case of Automated Face Detection
W34: Design and Development of a Low-Cost Functional Myoelectric Hand Prosthesis Using 3D Printing and Independent Finger Articulation
W35: Signage As Scaffold: Museum Signage Increases Exhibit Engagement and Parent-Child Collaboration
W36: Same Object Dominance, Different Handling: Contrasting Dyadic Sensorimotor Behaviours in Typically Developing and Neurodivergent Young Children
W37: Hand-To-Mouth Touch Behavior in Infants Born Very Premature
W38: Motivation-Effort-Reward Tradeoffs Explain Infant Exploratory Play
W39: Evaluating Robots Like Human Infants: A Case Study of Learned Bipedal Locomotion
Human social cognition develops along highly diverse trajectories. I will discuss the diversity of social cognitive development from an open systems science perspective, with a particular focus on visualizing the dynamics of mother-infant interactions through indices spanning multiple organ layers, such as the gut microbiota, the autonomic nervous system, the brain, and behavior.
Aperitif – Gender and other stereotypes in engineering and developmental science
Location: FEL Cafe
In this talk, I will discuss the links between spatial cognition and multimodal language from a developmental and individual differences perspective.
See also: https://icdl2025.fel.cvut.cz/speakers/
T1: Feature-Based Lie Group Transformer for Real-World Applications
T2: Robot Learning Theory of Mind through Self-Observation: Exploiting the Intentions-Beliefs Synergy
T3: Simulated Cortical Magnification Supports Self-Supervised Object Learning
T4: Exploring within-task calibration in free-flowing manual sampling in 9-month-olds
T5: Automated Head-Turn Estimation from Nose Position in Infant Videos
T6: Bridging Traditional and AI-Enhanced Scaffolding: A Systematic Integrative Review of Early Childhood Interventions
T7: Analyzing Multimodal Integration in the Variational Autoencoder from an Information-Theoretic Perspective
T8: Computational Models of the Emergence of Self-Exploration in 2-Month-Old Infants
T9: Emulating Perceptual Development in Deep Reinforcement Learning
T10: From Action to Protocol: The Emergence of Proto-Verbal Structure in Multi-Agent Communication Systems
T11: Towards a Novel Method for Evaluating Gait Stability with a Focus on Upper and Lower Limb Coordination in Fall Prevention
T12: Advances in Compliance Detection: Novel Models Using Vision-Based Tactile Sensors
T13: Robots That Learn to Solve Symbolic Novelties with Self-Generated RL Simulations
T14: Generative to Discriminative Knowledge Distillation for Object Affordance
T15: School-Aged Children’s Exploration Patterns
T16: Behavioral Modeling of Pedestrian Agents: A Value-Driven Approach
T17: Performance of Large Language Models and Analysis of Responses in the Wisconsin Card Sorting Task
T18: Homeostasis As a Foundation for Learning and Development in an Autonomous Robot
T19: The Ungrounded Alignment Problem
T20: Temporal Patterns in the Complexity of Child-Directed Song Lyrics Reflect Their Functions
T21: Correspondence Learning between Morphologically Different Robots Via Task Demonstrations
T22: Guessing Human Intentions to Avoid Dangerous Situations in Caregiving Robots
T23: Play by Play: Interacting with Targets from Crawling to Walking
T24: Mirroring-Based Prediction of Motor Intentions Using an Echo State Network in a Simulated Robotic Environment
T25: Multilingual Parent-Child Turn-Taking in Naturalistic Interactions
T26: Multi-Agent Symbol Emergence Based on Variational Bayes Naming Game
T27: Parental Social Norms and Explanations about Technology Use Rules
T28: Cultural Drift in AI Music: Co-Evolution of Generative Strategies and Evaluative Preferences
T29: Multi-Agent Reinforcement Learning Based on Variational Bayesian Naming Game
T30: Looming Visual Motion Perception in Full-Term and Premature Individuals, from Infancy to 6 Years of Age
T31: Initiation Asymmetry in the Ontogenesis of Social Routines: Caregivers Scaffold 1-Year-Olds to Respond, but 2-Year-Olds Initiate
T32: Leveraging First-Person Experience to Predict Third-Person Beliefs in a Competitive Gridworld Task
T33: Brief Exposure to Positive Interparental Interaction Engages Infants' Social Brain Networks
T34: Can social robots support shy preschoolers? Early insights from robot-assisted warm-ups in assessment settings
T35: Automatic Discovery of Affordances for Robotic Manipulation
T36: A computational toolkit for analysing the visuo-temporal complexity of video to advance research on the impacts of screentime
T37: The Magic Drawer Paradigm: A Developmental Perspective on How the Brain Processes Errors in Motor Adaptation
T38: Eye-Hand Coordination During Food Transport Using Chopsticks
T39: The Sesame Street Archive: an interactive database of educational children’s television, 1969-2018
T40: Bionic Robotic Arm Prosthesis Controlled by Myoelectric Sensors
In this talk, I will discuss principles to consider when designing neurorobots to test brain theories and to build intelligent agents. I will provide background on the topic and present some of the latest work from our lab.
See also: https://icdl2025.fel.cvut.cz/speakers/
Location: Národní 63/26, Prague 1 – New Town; the nearest tram stop is Národní třída (metro line B; trams 2, 9, 18, 22, 23)
In this talk, I will discuss the relationship between structure and function as a guide in the search for a general adaptation process, and the converging power that a discussion about the machinery implementing a cognitive architecture could provide for our ICDL community. I will do so by addressing a few questions: Can we understand functional development without explicitly referring to the structure of the underlying machinery? Can we study development without referring to evolution? And which scientific communities should foster such converging activities?
See also: https://icdl2025.fel.cvut.cz/speakers/
F1: Learning Conditionally Independent Transformations Using Normal Subgroups in Group Theory
F2: Explore-Exploit Behaviors During Rat-Robot Interactions Optimize Social and Spatial Security
F3: Push, See, Predict: Emergent Perception through Intrinsically Motivated Play
F4: Towards Understanding Ambiguity Resolution in Multimodal Inference of Meaning
F5: Cyclic Exploration and Exploitation in Surprise Minimizing Reinforcement Learning
F6: Teaching a Robot to Read Faces: Incremental Emotion Learning with Selective Visual Attention
F7: Are Multimodal Signals Synchronous? Temporal Relation of Declarative Gestures and Language Instructions in Human-Robot Interaction
F8: Computational Modelling of Infant Gaze Following in Cluttered Environments and Reduced Caregiver Gaze Reliability
F9: Groups Matter: Investigating the Effects of Homophily in Child Interactions in an Inclusive Classroom
F10: Fast or Slow: Adaptive Decision-Making in Reinforcement Learning with Pre-Trained LLMs
F11: Who Said What (WSW2.0)? Enhanced Automated Analysis of Preschool Classroom Speech
F12: Contingent Behavior During Caregiver-Child Interaction Improves the Quality of Word Learning Opportunities
F13: Modeling the Impact of Phonological and Semantic Connectivity on Early Vocabulary Growth
F14: Unified Attention Modeling for Efficient Free-Viewing and Visual Search Via Shared Representations
F15: SHIFT: An Interdisciplinary Framework for Scaffolding Human Attention and Understanding in Explanatory Tasks
F16: The Role of Social Cues in Infants' Word Segmentation When Interacting with a Furhat Robot
F17: How Socioeconomic Status and the Home Environment Influence Early Cognitive Development in British Pre-Schoolers
F18: Comparative Learning Signals Lead to Aligned Representations in an Infant-Inspired Visual Task
F19: Sustained and Joint Attention in Young Children with and without Down Syndrome During Free-Flowing Interaction: Insights from Dual Head-Mounted Eye-Tracking
F20: Evaluating Hand Detection Accuracy on a Unique Egocentric Dataset of Children with and without Down Syndrome
F21: Using Video from the Crib to Explore Infants' Self-Touch Over the Transition to Independent Crawling
F22: Infant Age Classifier for a Baby Brain-Computer Interface
F23: Unveiling Neural Dynamics in Mother-Infant Interactions: Insights from Graph Theory
F24: Adaptive Collaborative Control for Social Humanoid Robots
F25: Optic Flow Perception in Full-Term and Preterm Infants and Children
F26: Qualitative Causal Models for Self-Learning Autonomous Robots
F27: Analyzing Moves and Gaze Patterns in Autistic and Non-Autistic Adults on an Online Block Design Task
F28: Physiology Meets Baby Talk: Cardiac Foundations of Infant-Directed Speech
F29: Multidimensional Physiological States of Infant Visual Attention
F30: Discovering Vocal Chunks in Birdsong Using Language Model Tokenizers
F31: Developing a Framework for Assessing Mutual Trust in Human-Robot Interaction
F33: Variability Constrains Word Learning and Generalization: A Neurocomputational Account
F34: Investigating and Improving Eye-Tracking Data Quality in Comparative and Developmental Psychology
F35: TinyTouch: A Manual Labelling Application of Spontaneous Self-Touch Behavior
F36: Listening In: Evaluating Automated Keyword Spotting in Parent-Child Interactions
F37: How Do Children Perceive the Body and Emotions of an Anthropomorphic Robot? An Example with the NAO Robot
F38: Error-Related Brain Activity Reveals Sensorimotor Constraints During Visuomotor Adaptation in Children with a Neurodevelopmental Motor Disorder
F39: Emotion Recognition Development in Infancy: A Multimodal Longitudinal Study in the Wild