Recurrent babbling: evaluating the acquisition of grammar from limited input data


Recurrent Neural Networks (RNNs) have been shown to capture various aspects of syntax from raw linguistic input. In most previous experiments, however, learning happens over unrealistic corpora, which do not reflect the type and amount of data a child would be exposed to. This paper remedies this state of affairs by training a Long Short-Term Memory network (LSTM) over a realistically sized subset of child-directed input. The behaviour of the network is analysed over time using a novel methodology which consists of quantifying the level of grammatical abstraction in the model's generated output (its babbling), compared to the language it has been exposed to. We show that the LSTM indeed abstracts new structures as learning proceeds.
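The abstract does not spell out the abstraction metric itself, but the idea of comparing generated output against the exposure corpus can be illustrated with a minimal, hypothetical sketch: measuring what fraction of the n-grams in the model's babbling never occurred in its training input (all names and the n-gram choice here are illustrative assumptions, not the paper's actual method).

```python
def novelty_rate(babbling, training, n=3):
    """Fraction of n-grams in generated output that are absent
    from the training data. Higher values suggest the model is
    producing structures beyond rote repetition of its input.
    Both arguments are lists of tokenised sentences (lists of words).
    """
    def ngrams(sentences, n):
        # Collect the set of all n-grams across the given sentences.
        return {tuple(s[i:i + n]) for s in sentences
                for i in range(len(s) - n + 1)}

    seen = ngrams(training, n)
    generated = ngrams(babbling, n)
    if not generated:
        return 0.0
    return len(generated - seen) / len(generated)
```

For example, if the training data contains "the cat sat down" and the babbling contains both that sentence and the unseen "the dog sat down", half of the generated trigrams are novel, so the rate is 0.5.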

Conference on Computational Natural Language Learning 2020