Introduction: In the dynamic field of phonetic research, finding innovative ways to collect data can be both a challenge and an opportunity. At our P&P group, Cong, Yanyu, and Damar are taking a leap forward with their latest study, “Gamifying Phonetic Data Collection: An Accent Identification and Attitude Study.” This blog post delves into how they are incorporating gamification into phonetics, transforming the way they understand accents and linguistic attitudes.
The Challenge of Traditional Data Collection: Traditionally, collecting phonetic data has been a straightforward yet often tedious task, both for researchers and participants. Recognizing this, Cong and her colleagues wondered: Could the principles of gaming enhance the data collection process in phonetic studies?
The Innovative Approach: The study is pioneering in its approach, merging the realms of gaming with linguistic research. The core aims include:
Evaluating the effectiveness of gamified methods in phonetic data collection.
Investigating the influence of geographical location on accent identification and attitudes.
Gamification in Action: One of the highlights of the project is a game-based task in which participants identify the native English accents they hear and locate them on a map of the UK. A leaderboard and the points displayed at the end of each block serve as motivation for participants. This innovative tool not only engages participants in a unique way but also offers deeper insights into how accents are perceived and understood in various geographical contexts.
Preliminary Findings and Insights: While it was a small-scale study in which participants were mainly members of our group, initial findings suggest that gamification can significantly improve participant engagement and the quality of data collected. These insights have the potential to reshape how we conduct phonetic research in the future.
Thoughts From the Group:
Ghada: Researchers in the fields of phonetics and phonology have long recognized accent bias issues in everyday activities. Now, it seems to be the right time to move a step forward and explore what we, as phoneticians or linguists, should do next.
Fengting: In many sociolinguistic studies on language attitudes, as well as perceptual studies on accentual adaptation and generalization, researchers often do not provide details on how they recruit talkers, and standards have not been set for the selection of accented speech used in this research. This raises the question of how we can be sure that we are researching the actual accents that interest us. A further question is: from whom can accented speech be considered a good representation of the accent?
Conclusion: With continuous exploration in the realm of gamified phonetic studies, we’re enthusiastic about the prospects this approach holds. Stay tuned for more updates and detailed findings in the future.
Your thoughts, feedback, and insights are always welcome!
From August 7th to 11th, our research group embarked on an illuminating expedition to the 20th International Congress of Phonetic Sciences (ICPhS) 2023. As one of the most significant congresses in the field of phonetics, the ICPhS is a pivotal event that occurs only once every four years. We were proud to have nine active participants from our group in attendance. Of these, three members delivered compelling oral presentations, while the rest engaged the academic community through insightful poster presentations.
Within our research group, the conference served as a testament to our unity and collective strength. Every individual contributed to the tapestry of success, and the supportive atmosphere propelled each of us to excel. As questions flowed from the audience, they illuminated the depth of engagement and curiosity that permeated the conference. More details of what we presented at the congress are provided below. This post also aims to encapsulate our experiences, learnings, and the milestones achieved during this prestigious event.
Objectives and Expectations
The overarching theme of ICPhS 2023 was “Intermingling Communities and Changing Cultures,” which deeply resonates with the emerging dynamics of our interconnected world. Over the past few decades, there has been an unprecedented surge in mobility and interpersonal contacts, disrupting the boundaries of national languages and impacting speech patterns universally.
Our primary objective for attending the congress was twofold. Firstly, we aimed to share our own insights and research findings with a broader academic community. We were particularly eager to contribute to the ongoing dialogue about how modern societal shifts are influencing phonetics and phonology. Secondly, we were excited to learn from other leading researchers in the field. We wanted to grasp what constitutes ‘trendy’ research currently and to understand how the academic discourse in this field is changing and evolving.
Moreover, we were keen to explore potential directions for future research and possible collaborative efforts. Given that the congress serves as a melting pot of ideas and innovations, we were optimistic about forging new academic alliances that could pave the way for co-operative ventures in the years to come.
Highlights and Contributions
Oral Presentations
1. Turnbull Rory: Phonological Network Properties of Non-words Influence Their Learnability
Rory’s study underscored the significance of a word’s phonological neighborhood in phonetic processing, extending this concept to non-words. By analyzing participant responses in an experimental setting, this study demonstrated that non-words with more “neighbors” and well-connected neighbors are learned with higher accuracy, indicating that the existing lexicon can significantly influence the acquisition of new words.
2. Du Fengting: Rapid Speech Adaptation and Its Persistence Over Time by Non-Standard and Non-Native Listeners
Fengting’s study delved into the intriguing phenomenon of how listeners adapt to accented speech, especially when talkers and listeners have the same, similar, or different language backgrounds. Through methodical research, the study revealed that both non-standard and non-native English listeners were more adept at perceiving and adapting to accents matching their own, but not to similar or different ones. Notably, this adaptation was not only immediate but also persisted over a 24-hour period, suggesting intriguing implications for language learning and communication.
This presentation offered an innovative perspective on how incremental cue training could aid in lexical tone learning for non-tonal language speakers. The findings suggest that employing exaggerated contrasts in pitch movements during the initial stages of training can significantly improve the learners’ ability to discern tonal differences, leading to comparable end-stage performance with conventional training methods.
Poster Presentations
1. Kelly Niamh: Interactions of Lexical Stress, Vowel Length, and Pharyngealisation in Palestinian Arabic
Niamh’s research filled a crucial gap in the understanding of lexical stress in Palestinian Arabic. By analyzing the acoustic correlates of lexical stress, she shed light on the nuanced interactions among stress, phonemic length, and pharyngealization, offering a comprehensive phonetic description of stress patterns in this Arabic variety.
2. Zhang Cong, Lai Catherine, Napoleão de Souza Ricardo, Turk Alice and Bogel Tina: Language Redundancy Effects on Fundamental Frequency (f0): A Preliminary Controlled Study
This study investigated the effects of language redundancy on fundamental frequency (f0), supporting the Smooth Signal Redundancy Hypothesis. Their controlled experiments revealed that language redundancy could indeed affect f0, potentially adding another layer to our understanding of prosodic structure.
This paper delved into the intricate relationship between voice onset time (VOT) and fundamental frequency (f0) in Jazani Arabic, proposing that f0 perturbation is predictable from VOT patterns. This lends further evidence to the theory of Laryngeal Realism and offers new insights into phonological representation.
This study questioned whether accent familiarity impacts narrative recall, particularly when listeners are exposed to different accents. Findings indicated a ‘familiarity benefit’ in Tyneside listeners, extending the impacts of accentual familiarity on language perception.
Our group’s robust contributions across diverse areas in phonetics and phonology were met with great interest and sparked important academic discussions, marking a significant footprint in the advancements of the field.
Reflection and Future Directions in Phonetics and Phonology
The congress served as an eye-opening experience that showcased the incredible diversity and depth of current research in phonetics and phonology. With hundreds of insightful studies, the event drew a roadmap for the future of these fields. While each study was a piece of a larger puzzle, the six keynote lectures stood out as beacons guiding the way forward.
Technological Advancements and Precision
Research in phonetics and phonology cannot stand alone; it requires the support of powerful tools to quantify sound features and visualize articulatory processes. Over the past few decades, these research tools have continually evolved, and numerous researchers have actively applied state-of-the-art technology in their studies. John Esling’s focus on the larynx as an articulator hinted at the importance of advanced imaging techniques, opening new avenues for understanding language development and linguistic diversity. Similarly, Paul Boersma’s discussion about the future of Praat emphasized how technology will revolutionize phonetic and phonological models. The convergence between these talks suggests a future where technology plays an increasingly central role in refining our analyses and providing greater computational power for simulating the nuances of speech.
Interdisciplinary Synergies
As the landscape of research in phonetics and phonology broadens, the call for multidisciplinary perspectives becomes increasingly urgent. Ravignani Andrea and Stuart-Smith Jane both underscored the importance of multidisciplinary approaches. While Andrea seeks to combine ethology, psychology, neuroscience, and behavioral ecology to explore the origins of vocal rhythmicity, Stuart-Smith envisions a future where sociophonetic and social-articulatory data deepen our understanding of speech patterns related to identity, social class, and dialect. The common thread here is the necessity for interdisciplinary collaboration to answer complex questions that cannot be addressed by any single field alone.
Embracing Social Responsibility
Perhaps the most poignant insights came from talks focusing on the social aspects of research. Titia Benders emphasized the crucial need for expanding child language acquisition research to lesser-studied languages, not only to understand their unique phonological elements but also to develop inclusive research methods. Pavel Trofimovich, on the other hand, urged for a socially responsible approach to second-language speech research, one that balances academic rigor with meaningful social impact. These talks collectively call for a future where research is not just theoretically robust but also socially responsible, reaching communities and languages that have been traditionally underrepresented.
Together, the keynotes painted a vibrant picture of a future that is technologically advanced, inherently interdisciplinary, and deeply rooted in social responsibility. It is clear that the next wave of research in phonetics and phonology will be as diverse and dynamic as the voices that make up human language itself.
Summary
The insights gained from this year’s congress serve as a valuable roadmap for the direction of phonetic and phonological research, areas that are central to the mission of our research group. From the pivotal role of technology in advancing research methodologies to the importance of interdisciplinary collaboration and social responsibility, the keynotes and studies presented offer a multifaceted view of the field’s future. As our group continues to explore new avenues of research, we are invigorated by the wealth of possibilities that these emerging trends present. They not only affirm the work we are currently undertaking but also challenge us to think about how we can contribute to these evolving dialogues in meaningful ways. Thank you for following along with our coverage of the congress, and stay tuned for upcoming research projects that will reflect these dynamic shifts in the field.
Being interested in sound systems from the perspective of both production and perception, Niamh Kelly ran a project examining the production of /l/ sounds by bilingual speakers of English and Spanish from the El Paso region, to investigate the effects of language dominance on velarisation patterns. She also ran a pilot study looking at the production of the /z/ sound by a bilingual speaker across time. Here, she gave us a presentation on the outputs of her work.
Part 1: A bilingual community on the US-Mexico border: what are they doing with their [l]?
Background information:
Transfer can occur in the productions of multilinguals, where one language influences the other and such effects can go in either direction between the L1 and L2. Sometimes, speakers are found to have productions that are intermediate between the two languages. In some regions, the whole community is bilingual, making it convenient to look at language transfer effects.
Although similar to each other, the /l/ sounds in American English (AmE) and Spanish do not match up exactly. While the Spanish /l/ is realised as a fronted (light/clear) /l/, the AmE /l/ is more velarised overall, especially in codas.
The participants in this research lived in a city (El Paso) on the US-Mexico border, which is a bilingual community.
Research questions:
This research asks:
To what extent do balanced bilingual speakers show transfer effects in laterals? That is, are there positional effects in just English or in both English and Spanish?
What effects do vowel height and frontness/backness have on velarisation in laterals in both languages?
Hypotheses:
Since these speakers are balanced bilinguals, they could be expected to keep their languages separate: English /l/ would be more velarised overall than Spanish /l/, and in English, coda /l/ would be more velarised than onset /l/, while in Spanish no such positional difference would occur.
Results:
From the analysis of the participants’ productions in both English and Spanish, the following results emerged:
English and Spanish were significantly different in both positions. /l/ was more velarised in English than Spanish in both onset and coda position.
/l/ was more velarised in codas than onsets in English while no such positional difference emerged in Spanish.
Next steps:
Research like this, and further work building on it, can help add to the description of non-mainstream varieties of English and of varieties used by multilinguals.
Part 2: A bilingual across time: what happened to his /z/? Acquisition of voicing in English /z/ by an L1 Norwegian speaker over a 25-year period.
Background information:
The English /s/–/z/ contrast has been found to be difficult to acquire for L2 English speakers who do not have this contrast in their L1. Norwegian-accented English lacks voicing in /z/, since Norwegian does not have /z/.
Current study:
This study is a longitudinal study of the L2 English of the L1 Norwegian speaker Ole Gunnar Solskjær. Ten interviews from two time periods, 1996–8 and 2021, were examined, focusing on his English productions of /s/ and /z/ and how production patterns change over time. Three variables were coded: position in word (medial or final), preceding segment (voiced or voiceless), and morphemic status (morphemic, e.g. ‘goals’, vs stem, e.g. ‘please’).
Results:
In the Early timeframe, 100% of /s/ tokens were voiceless and 93% of /z/ tokens were voiceless. In the Late timeframe, 98.5% of /s/ tokens were voiceless (no significant effect of timeframe) and 46% of /z/ tokens were voiceless (a significant effect of timeframe).
Duration was longer when voiceless (supporting the auditory categorisation) but not affected by position in word.
No difference based on morphemic status.
Discussion and next steps:
More exposure to and practice with the L2 has led to an increase in L2-like voicing productions. OGS is acquiring a new voicing contrast but has not acquired it completely, as only about half of the /z/ tokens were voiced. More work can be done to look at other fricatives and at the intermediate time frame.
General conclusion:
Here we find transfer of L1 phonetic and phonological patterns to L2 at the individual level, which can continue even after years of exposure and use.
It also occurs on a larger scale when a community is bilingual.
It is important for linguists to describe non-mainstream varieties.
The finals of the 3-minute thesis are taking place on the 16th of June. One of our members, Carol-Ann McConnellogue, has made it to the final. Carol-Ann is developing an individualised speech therapy programme for children with cerebral palsy and is doing her PhD jointly with ECLS and FMS.
The Three Minute Thesis (3MT) competition asks doctoral students to explain their research in just three minutes using only one slide. The explanation should be easily understood by a non-specialist. Originally developed by the University of Queensland, Australia, it has been taken up by universities across the world. The competition offers training and then the opportunity to compete in a University final in front of the public. The winner of this final will go forward to compete in the national Vitae 3MT competition in September.
It’s a great opportunity to listen to students from different disciplines talk about their PhD topics in a succinct and non-technical way.
From January 2022 to May 2022, our research group continued to be engaged in projects from Semester 1 and arranged new workshops to discuss topics of interest amongst our team members. The following is a summary of what we covered this semester:
Accent and Social Justice
Since the beginning of this academic year, we have focused on the theme ‘Accent and Social Justice’, reviewed several related articles, and had Melissa Baese-Berk from the University of Oregon share her and her colleagues’ recent research with us. We organised and held an interdisciplinary workshop on accent, communication, and social justice in March of this semester, which was very successful. We were honoured to have presenters from both within our research group and outside of the group share their research and opinions. More information can be found in this blog post.
Many Speech Analyses
One of our main discussion topics in Semester 2 has been the Many Speech Analyses project we signed up for at the end of last semester. This project aims to compare the approaches different researchers take to answer the same research question using the same dataset. The general research question is: ‘Do speakers phonetically modulate utterances to signal atypical word combinations?’. We scheduled fortnightly meetings for this project. We started by reviewing other studies to help us plan a suitable analysis and decided to measure the timing of utterances to answer the research question. We imported the sound files into the MFA (Montreal Forced Aligner) for forced alignment, and the results were distributed to the members for cross hand-correction. Rory Turnbull, our project leader and a member of the P&P research group, guided us in extracting the timing of articulation of certain vowels. After analysing the dataset and submitting our report, we took a few weeks to review the reports from other researchers and research groups. Some peer analyses involved research methods or tools unfamiliar to us, allowing us to expand our knowledge outside our expertise. These included:
Forced alignment and inter-rater reliability in Praat
During a couple of weekly meetings, we had Caitlin Halfacre and Rory run forced alignment in the Montreal Forced Aligner and demonstrate how to hand-correct it in Praat, covering tier setup, labelling, and calibration of the initial phone. Group members teamed up to help each other and troubleshoot problems together. Bruce Wang wrote Praat code to sample and measure the agreement of each TextGrid from the cross hand-correction. The inter-rater reliability of our group members turned out to be quite strong.
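As a rough illustration of that kind of boundary-agreement check (and not the actual Praat script Bruce wrote), here is a minimal Python sketch, assuming each annotator's phone-boundary times have already been exported from their hand-corrected TextGrid; the 20 ms tolerance and the example times are arbitrary choices for illustration.

```python
# Illustrative sketch only: not the actual Praat script used by the group.
# Assumes each annotator's phone-boundary times (in seconds) have been
# exported from their hand-corrected TextGrid into a plain Python list.

def boundary_agreement(boundaries_a, boundaries_b, tolerance=0.02):
    """Proportion of annotator A's boundaries that annotator B placed
    within `tolerance` seconds (20 ms is a common but arbitrary choice)."""
    if not boundaries_a:
        return float("nan")
    matched = sum(
        1 for t_a in boundaries_a
        if any(abs(t_a - t_b) <= tolerance for t_b in boundaries_b)
    )
    return matched / len(boundaries_a)

# Hypothetical boundary times from two annotators for the same utterance.
rater_1 = [0.120, 0.245, 0.390, 0.515]
rater_2 = [0.118, 0.250, 0.402, 0.990]

print(f"Boundary agreement: {boundary_agreement(rater_1, rater_2):.2f}")  # 0.75
```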
Praat Phonetic Analysis
After checking the correctness and reliability of phone alignment, Rory led two sessions demonstrating how to extract specific labels and measure the timing of utterances by coding in Praat.
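The Praat code itself is not reproduced here, but the logic of that extraction step can be sketched in Python as follows, assuming the corrected phone tier is available as (start, end, label) tuples; the tier contents and target vowel labels below are purely hypothetical.

```python
# Illustrative sketch only: the group's actual analysis was scripted in Praat.
# Assumes the corrected phone tier has been read in as (start, end, label)
# tuples, with times in seconds; labels and target vowels are hypothetical.

TARGET_VOWELS = {"AA1", "IY1", "UW1"}

phone_tier = [
    (0.00, 0.08, "DH"), (0.08, 0.21, "AA1"), (0.21, 0.30, "T"),
    (0.30, 0.46, "IY1"), (0.46, 0.55, "Z"),
]

# Keep only the target vowels and compute each one's duration.
vowel_durations = [
    (label, start, end - start)
    for (start, end, label) in phone_tier
    if label in TARGET_VOWELS
]

for label, start, duration in vowel_durations:
    print(f"{label}\tstart {start:.3f} s\tduration {duration * 1000:.0f} ms")
```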
Digital Signal Processing (DSP)
When we reviewed other researchers’ reports, we found certain research methods which were unfamiliar to us, such as DSP, so we used a session as an introduction to these techniques. During this session, we watched a video to recap the anatomy of sound perception, discussed the anatomy of the cochlea, and talked about the acoustic versus auditory difference between two tones that are 100 Hz apart, as well as the gammatone filterbank. However, without a well-established background in neurolinguistics, it was still difficult for us to fully understand what the results of one of the peer reports meant.
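To make the acoustic-versus-auditory point concrete, here is a small sketch using the standard Glasberg and Moore (1990) ERB-rate formula (our session did not involve any coding; the numbers below are just an illustration): two tones that are 100 Hz apart sit several auditory filter widths apart at low frequencies, but less than half a filter width apart higher up the scale.

```python
import math

def erb_rate(f_hz):
    """Glasberg & Moore (1990) ERB-rate scale: the position of a frequency
    on an approximately auditory (cochlear) scale, in ERB units."""
    return 21.4 * math.log10(0.00437 * f_hz + 1)

# Two pairs of tones, each pair 100 Hz apart acoustically.
for lo in (100, 2000):
    hi = lo + 100
    print(f"{lo}-{hi} Hz: {erb_rate(hi) - erb_rate(lo):.2f} ERBs apart")

# Prints roughly 2.5 ERBs for 100-200 Hz but only about 0.4 ERB for
# 2000-2100 Hz: the same acoustic distance is a much smaller auditory
# distance higher up the frequency scale.
```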
To conclude, we successfully ran the ‘Accent and Social Justice’ workshop and completed the Many Speech Analyses project together this semester, and we gained a great deal of research knowledge and relevant expertise from the experience. We expect to explore more exciting topics and themes in the future and to keep updating and publicising our work here.
Our research theme for 2021/22 has been “Accent and Social Justice”. We have read and reviewed literature on accent processing and perception, and discussed the prejudices towards certain accents and the injustices their speakers may experience.
In order to spread awareness about accent and social justice, and the research our group has undertaken, we have organised an interdisciplinary workshop on accent, communication and social justice, which will be held on 30/03/2022.
The workshop will consist of presentations from members of our research group and academics outside of the field of Phonetics and Phonology who have an interest and knowledge in our research theme. Topics which will be discussed include self-descriptions of UK-based English accents; constructing native speakerism in Chinese community schooling; and racist nativism in England’s education policy, to name a few. The abstracts for each presentation can be found here. There will be time for discussion after each presentation to give attendees the chance to ask questions, exchange ideas, and explore the topics further.
The workshop will begin at 9am on Wednesday 30th March. It will be a hybrid event meaning people can attend in person or via Zoom. For those attending in person, the workshop will be held in room G.21/22 of the Devonshire Building at Newcastle University. Lunch will begin at 12pm and refreshments will be provided. This will be another opportunity for attendees to mingle and discuss the topics explored. The workshop will end at 1pm. The full workshop programme can be viewed here.
If you are interested in attending our workshop, you can sign up using this link.
We look forward to seeing you and hope this workshop enables you to delve into rich discussion around a very important issue.
From September 2021 to February 2022, our research group has been very active and involved in several projects. Here is a short summary of what we discussed during our weekly meetings:
Accent and Social Justice: Within our research theme for this year, “Accent and Social Justice”, we reviewed recent literature on how different accents are processed, perceived and potentially discriminated against. We also attended a talk by Melissa Baese-Berk from the University of Oregon, in which she discussed her novel and fascinating research on accent perception and adaptation. Have a look at this blog post if you would like to find out more. Currently, we are organising an interdisciplinary workshop on accent, communication and social justice, to be held in March 2022. Watch this space for further information on the event.
Quantitative Methods: Bilal Alsharif, a member of our research group, provided us with an introduction to Bayesian methods. We discussed their benefits and challenges in comparison with frequentist methods. Our interest in everything quantitative did not stop there, as we held weekly study group meetings to brush up on our statistics and R skills. The statistics study group will be continuing this semester.
Many Speech Analyses: As a group, we signed up for this large collaborative project. The aim of the project is to compare the approaches that different researchers take to answer the same research question (“Do speakers phonetically modulate utterances to signal atypical word combinations?”) with the same dataset. We have already explored the dataset and will discuss in the following weeks which methods we want to use. You can find out more about Many Speech Analyses on the project website.
Noise-Masking of Speech: Another topic of discussion came from Andreas Krug, who was wondering why some of the speakers in his study were easier to hear over noise than others. We had a look at potential acoustic measures to quantify this and how to deal with these differences in an experimental design and statistical analysis.
Transcription Training: We practised our phonetic transcription skills with some of Ghada Khattab‘s Arabic data. We discussed the differences in our transcriptions and compared the realisations we heard with the target realisations in Arabic. We are planning to practise transcriptions of other speech data this semester, including dysarthric speech, to further our transcription skills.
New Doctors: Our members Nief Al-Gambi and Bruce Wang successfully completed their vivas. Congratulations to the two of them!
We are looking forward to keep working on these projects in Semester 2. You can check our website to keep up to date with our work.
Prof. Melissa Baese-Berk and her colleagues and students are highly productive in research on speech processing as well as accent perception and adaptation. In her talk, she walked us through their new work on adaptation to unfamiliar speech and the perception of non-native speech (see Cheng et al., 2021).
The main issues examined in their studies include:
The difficulties in communication brought about by the linguistic properties of non-native speech, the language backgrounds of talkers and listeners, and certain cognitive factors (McLaughlin, Baese-Berk, Bent, Borrie & Van Engen, 2018)
The conditions under which accent-general adaptation might occur (Afghani, Baese-Berk & Waddell, under review at the time of the talk)
Their main findings are:
Listeners may make the most of different resources to facilitate their speech processing; some cognitive factors, such as vocabulary and working memory, correlate with listening challenges; and environmental noise can degrade rhythm perception (McLaughlin et al., 2018).
Incentives may lead to better performance in speech processing: incentivised listeners process speech better and learn more quickly than listeners who are not incentivised (Afghani, Baese-Berk, & Waddell, under review at the time of the talk).
Speech perception is more difficult when it is:
Dysarthric speech
Speech-in-noise
Time-compressed speech
Synthetic speech
However, practice listening in these conditions may improve speech processing for listeners.
The issues to be looked at next:
The role of memory in comprehension
The similarities and differences between adaptation to a talker and adaptation to an accent
The interaction between adaptation and the physical and linguistic context
Reflections from our Research Group:
This is a very relevant topic to what is currently being discussed in our research group around accent and social justice. Our group is hosting an event in Spring 2022 which will discuss some of the topics addressed.
It directed us to other literature surrounding the topic.
A good way to network with others interested in the topic.
I found it very interesting that incentivising participants can make a significant difference in how they process speech.
In August 2019, I was supported by a PhilSoc travel bursary to attend the 19th International Congress of Phonetic Sciences, to present a poster. The conference was in Melbourne, hosted by The Australasian Speech Science and Technology Association and had 422 oral presentations and 397 poster presentations. The poster I presented was based on my MA and was also included in the Congress proceedings papers. My title was ‘North-South Dividers in privately educated speakers: a sociolinguistic study of Received Pronunciation using the foot-strut and trap-bath distinctions in the North East and South East of England’.
There is a model of accent variation in England that demonstrates the interactions between regional variation and variation based on social class. The high level of regional variation found in working-class speakers seems to reduce going up the socio-economic spectrum, with the top of the triangle forming the accent called Received Pronunciation (RP, popularly known as BBC English). However, this model has not been updated for almost 40 years. My research involves recording speakers from different regions whose socio-economic status would place them near the top of this triangle and investigating a variety of accent features that would generally display regional variation.
The paper I presented discussed what are known as the FOOT-STRUT and TRAP-BATH splits, descriptions of which vowels a speaker uses. The FOOT-STRUT split concerns whether the two words (and those in the same lexical sets) rhyme or not, and the TRAP-BATH split concerns whether words like bath have the same vowel as TRAP, generally found in the North, or the same vowel as PALM, generally found in the South. In 10 privately educated speakers from the North East and South East, I found that they all behaved the same as each other in the FOOT-STRUT split, demonstrating that this feature acts in a non-regional manner. However, regarding the TRAP-BATH split, I found that the speakers reflected the patterns found in their local region. This is likely due to the social salience of the feature; non-linguists have a strong awareness of how people in different regions pronounce words in the BATH set (e.g. glass, path, mast) and see it as a regional identity marker.
Presenting this poster gave me the opportunity to gain feedback on both my methods and results, invaluable information for data collection for my PhD. I was also able to meet and discuss my findings with leading researchers in the field whose work has greatly influenced mine, including the researcher who illustrated the above model and another who is the only other person currently publishing sociophonetic research on RP.
I would like to thank PhilSoc for awarding me the travel bursary; I used it to supplement the funds my department was able to give in order to make up the required amount. This congress only happens once every four years, and without their support I could have missed out on the opportunity to attend.
My poster and proceedings paper can be found on my website.
Training in the usage and analysis of UTI (Ultrasound Tongue Imaging) with Natasha Zharkova
by Andreas Krug
Over the course of two sessions, Natasha introduced us to the use of ultrasound tongue imaging in linguistics research. We learned about data collection with the ultrasound machine as well as the subsequent manipulation and analysis of the data. Natasha showed that ultrasound techniques are fruitful not only in clinical settings but can be used in sociolinguistics to quantify, for example, the distribution of clear and dark /l/.
We learned that the ultrasound tongue images are created by placing a probe under the participant’s chin. When adjusted correctly, this probe creates an image of the tongue that can be time-aligned with the participant’s utterances. The tongue images can further be used in conjunction with spectrograms to get ‘the best of both worlds’: images from a comparatively non-invasive articulatory method alongside acoustic data.
The tongue images, which take up a considerable amount of memory space, are analysed as splines. The coordinates of these splines depend on the relative position of the tongue in the mouth and can be imported into R for further analysis. In our workshop, we made a first attempt at this and successfully visualised two individual splines from Ghada’s productions of /l/.
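For readers curious what that last step looks like in practice, here is a minimal sketch of plotting two exported splines. The workshop analysis itself was done in R; the Python/matplotlib version below, along with the file names and the two-column CSV layout, is purely illustrative and not the actual export format of the ultrasound software.

```python
# Illustrative sketch only: the workshop used R, and the file names and
# headerless two-column (x, y) CSV layout here are assumptions.
import csv
import matplotlib.pyplot as plt

def read_spline(path):
    """Read one spline's (x, y) coordinates from a headerless two-column CSV."""
    with open(path, newline="") as f:
        rows = [(float(x), float(y)) for x, y in csv.reader(f)]
    xs, ys = zip(*rows)
    return xs, ys

# Hypothetical exports of two tongue splines from productions of /l/.
for path, label in [("spline_l_onset.csv", "onset /l/"),
                    ("spline_l_coda.csv", "coda /l/")]:
    xs, ys = read_spline(path)
    plt.plot(xs, ys, label=label)

plt.xlabel("x coordinate")
plt.ylabel("y coordinate")
plt.legend()
plt.title("Two tongue splines from productions of /l/")
plt.show()
```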
It was great to learn some of the basics of ultrasound tongue imaging from one of the experts in the field in a hands-on manner. There are now more studies in clinical and non-clinical linguistics that use ultrasound techniques and understanding how it works makes it easier to follow many of the papers. I personally plan to use it at some point to look into the articulatory properties of TH-fronting more closely.