A new AI model learns language from recordings of a baby’s life

The power of artificial intelligence (AI) seems to have no limits. A recent study used recordings of the daily life of a single baby to develop and train a new model of word learning and language development. The project, led by a team of artificial intelligence and developmental psychology researchers at New York University, focuses on capturing how babies learn words and concepts through interaction with their environment.

The audio and video recordings were made by Sam, an Australian baby who, for roughly a year and a half (from six to 25 months of age), wore a helmet with a built-in camera that captured his day-to-day interactions with his parents and grandparents. In fact, the camera recorded only about 1% of his waking time over the course of the experiment. Even so, the study provided valuable data about his language acquisition, including interactions with parents, everyday objects, and social situations.

The artificial intelligence model, called CVCL (Child’s View for Contrastive Learning), was trained on 64 visual categories – utensils, toys, and animals, among others – paired with transcriptions of what Sam was hearing while looking at those objects. Once this dataset was assembled, the model proved able to identify and learn key words from the baby’s vocabulary, as well as to understand the context in which those words are used and their connection to the visual world.
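The core idea behind this kind of training is contrastive learning: embeddings of what the child saw and of the speech he heard at the same moment are pulled together, while mismatched frame–utterance pairs are pushed apart. As a rough illustration only, the sketch below shows a generic contrastive loss of this kind in PyTorch; the function, embedding sizes, and temperature value are illustrative assumptions, not the study’s actual code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(frame_emb, utterance_emb, temperature=0.07):
    """Symmetric contrastive loss: matching frame/utterance pairs are
    pulled together, all other pairs in the batch are pushed apart.
    (Illustrative sketch, not the CVCL implementation.)"""
    frame_emb = F.normalize(frame_emb, dim=-1)
    utterance_emb = F.normalize(utterance_emb, dim=-1)
    logits = frame_emb @ utterance_emb.t() / temperature  # similarity matrix
    targets = torch.arange(len(logits))                   # i-th frame matches i-th utterance
    loss_frames = F.cross_entropy(logits, targets)        # frame -> utterance direction
    loss_speech = F.cross_entropy(logits.t(), targets)    # utterance -> frame direction
    return (loss_frames + loss_speech) / 2

# Toy batch: 8 camera frames paired with 8 transcribed utterances (random vectors here)
frames = torch.randn(8, 256)
utterances = torch.randn(8, 256)
print(contrastive_loss(frames, utterances))
```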

This unique approach offers new insights into how humans acquire language skills from an early age and could have applications in the development of more advanced natural language processing systems. Although the study is still in its early stages, its findings promise to open new avenues of research in the fields of artificial intelligence and developmental psychology.