Rabia Ergin, Limor Raviv and Simon Kirby
Natural languages use various strategies for expressing semantic roles (who did what to whom/what), with the most prevalent one being word order. Here we investigate how word order preferences emerge and conventionalize as a function of event structure and language age.
The uneven distribution of word orders in the world’s languages, with SOV and SVO being far more common than others (Dryer, 2013; Napoli & Sutton-Spence, 2014), suggests that word order preferences are affected by cognitive and communicative factors causing certain semantic roles to precede or follow others. Specifically, word order preferences are shaped by event structure hierarchies such as animacy and agency. Evidence from spoken and signed languages shows that, across languages, there is a tendency for agents to precede patients (e.g., Jackendoff, 2002) and for animate arguments to precede inanimate ones (e.g., MacWhinney, 1977). Yet crucially, word order preferences are also affected by language age. Previous work on emerging sign languages demonstrates systematic variation in word order preferences depending on agent and patient animacy in the initial stages of language formation (Meir et al., 2017; Ergin et al., 2018).
The current study investigates how animacy and agency shape word order patterns in the manual modality, and how these patterns evolve and conventionalize over time. We do this by comparing three types of manual systems at different stages of conventionalization: silent gesture (an improvised form of communication with no established conventions); Central Taurus Sign Language (CTSL: an emerging sign language with emerging conventions); and Turkish Sign Language (TID: an older sign language with more established conventions). We code elicited responses for word order using a comparable scheme across systems, and use measures of conditional entropy to analyze word order patterns. Specifically, we quantify the degree of variability within and across participants, capturing whether individuals and the community as a whole are consistent in the word order they select for a given scenario.
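To make the entropy measure concrete, the sketch below shows one way conditional entropy of word order given a scenario type could be computed from coded responses. This is a minimal illustration, not the authors' actual analysis pipeline; the scenario labels, word order codes, and data layout are hypothetical placeholders for the coding scheme described above.

```python
# Minimal sketch (assumed, not the study's pipeline): conditional entropy
# H(order | scenario) over coded responses, in bits. Lower values indicate
# more consistent word order choices for a given scenario type.
from collections import Counter
from math import log2

def conditional_entropy(pairs):
    """pairs: iterable of (scenario, order) tuples for one participant
    (within-participant entropy) or pooled over a community
    (across-participant entropy)."""
    pairs = list(pairs)
    joint = Counter(pairs)                 # counts of (scenario, order)
    scenario = Counter(s for s, _ in pairs)  # counts of scenario alone
    n = len(pairs)
    h = 0.0
    for (s, _), c in joint.items():
        p_joint = c / n                    # P(scenario, order)
        p_cond = c / scenario[s]           # P(order | scenario)
        h -= p_joint * log2(p_cond)
    return h

# Hypothetical example: two scenario types, mixed word order responses.
responses = [
    ("animate-patient", "SOV"), ("animate-patient", "SVO"),
    ("inanimate-patient", "SOV"), ("inanimate-patient", "SOV"),
]
print(conditional_entropy(responses))  # 0.5 bits: partial consistency
```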
Preliminary results show a high level of variability within participants: across the three manual systems, participants use 2-4 word order variants on average. This is also reflected in relatively high levels of entropy within and across participants. Crucially, entropy is lowest in CTSL compared to both improvised silent gesture and TID. While more word-order variability is expected in silent gesture than in CTSL (as there are no communicative pressures or established conventions in an improvised gestural system), it is somewhat surprising to observe more variability in TID than in CTSL. We tentatively propose that TID has other conventionalized linguistic devices (e.g., spatial devices) that allow the system to be flexible with word order. It is possible that CTSL is in a transitional stage in which word order conventions sensitive to the relative animacy of arguments are being established. During this stage, CTSL may rely primarily on systematic word order patterns as a linguistic device to reliably convey intended messages (rather than on the spatial devices frequently used in TID).