
Revolutionizing Phonetic Research with Gamification: A New Approach to Accent Studies

Introduction: In the dynamic field of phonetic research, finding innovative ways to collect data can be both a challenge and an opportunity. At our P&P group, Cong, Yanyu, and Damar are taking a leap forward with their latest study, “Gamifying Phonetic Data Collection: An Accent Identification and Attitude Study.” This blog post delves into how they are incorporating gamification into phonetics, transforming the way they understand accents and linguistic attitudes.

The Challenge of Traditional Data Collection: Traditionally, collecting phonetic data has been a straightforward yet often tedious task, both for researchers and participants. Recognizing this, Cong and her colleagues wondered: Could the principles of gaming enhance the data collection process in phonetic studies?

The Innovative Approach: The study is pioneering in its approach, merging the realms of gaming with linguistic research. The core aims include:

  • Evaluating the effectiveness of gamified methods in phonetic data collection.
  • Investigating the influence of geographical location on accent identification and attitudes.

Gamification in Action: One of the highlights of the project is a game-based task in which participants identify native English accents from speech samples by placing them on a map of the UK. A leaderboard and the points shown at the end of each block serve as motivation for participants. This innovative tool not only engages participants in a unique way but also offers deeper insights into how accents are perceived and understood in various geographical contexts.

Preliminary Findings and Insights: Although this was a small-scale study in which the participants were mainly members of our group, initial findings suggest that gamification can significantly improve participant engagement and the quality of the data collected. These insights have the potential to reshape how we conduct phonetic research in the future.

Thoughts From the Group:

  • Ghada: Researchers in the fields of phonetics and phonology have long recognized accent bias issues in everyday activities. Now seems to be the right time to move a step forward and explore what we, as phoneticians or linguists, should do next.
  • Fengting: In many sociolinguistic studies on language attitudes, as well as perceptual studies on accentual adaptation and generalization, researchers often do not provide details on how they recruit talkers, and standards have not been set for the selection of accented speech used in this research. This raises the question of how we can be sure that we are researching the actual accents that interest us. A further question is: whose accented speech can be considered a good representation of a given accent?

Conclusion: With continuous exploration in the realm of gamified phonetic studies, we’re enthusiastic about the prospects this approach holds. Stay tuned for more updates and detailed findings in the future.

Your thoughts, feedback, and insights are always welcome!

A Resounding Success: Our Journey at ICPhS 2023

From August 7th to 11th, our research group embarked on an illuminating expedition to the 20th International Congress of Phonetic Sciences (ICPhS 2023). As one of the most significant congresses in the field of phonetics, ICPhS is a pivotal event that takes place only once every four years. We were proud to have nine active participants from our group in attendance. Of these, three members delivered compelling oral presentations, while the rest engaged the academic community through insightful poster presentations.

Within our research group, the conference served as a testament to our unity and collective strength. Every individual contributed to the tapestry of success, and the supportive atmosphere propelled each of us to excel. As questions flowed from the audience, they illuminated the depth of engagement and curiosity that permeated the conference. More details of what we presented at the congress are provided below. This post also aims to encapsulate our experiences, learnings, and the milestones achieved during this prestigious event.

Objectives and Expectations

The overarching theme of ICPHS 2023 was “Intermingling Communities and Changing Cultures,” which deeply resonates with the emerging dynamics of our interconnected world. Over the past few decades, there has been an unprecedented surge in mobility and interpersonal contacts, disrupting the boundaries of national languages and impacting speech patterns universally.

Our primary objective for attending the congress was twofold. Firstly, we aimed to share our own insights and research findings with a broader academic community. We were particularly eager to contribute to the ongoing dialogue about how modern societal shifts are influencing phonetics and phonology. Secondly, we were excited to learn from other leading researchers in the field. We wanted to grasp what constitutes ‘trendy’ research currently and to understand how the academic discourse in this field is changing and evolving.

Moreover, we were keen to explore potential directions for future research and possible collaborative efforts. Given that the congress serves as a melting pot of ideas and innovations, we were optimistic about forging new academic alliances that could pave the way for co-operative ventures in the years to come.

Highlights and Contributions

Oral Presentations

1. Turnbull Rory: Phonological Network Properties of Non-words Influence Their Learnability  

Rory’s study underscored the significance of a word’s phonological neighborhood in phonetic processing, extending this concept to non-words. By analyzing participant responses in an experimental setting, the study demonstrated that non-words with more “neighbors” and well-connected neighbors are learned with higher accuracy, indicating that the existing lexicon can significantly influence the acquisition of new words.

2. Du Fengting: Rapid Speech Adaptation and Its Persistence Over Time by Non-Standard and Non-Native Listeners

Fengting’s study delved into the intriguing phenomenon of how listeners adapt to accented speech, especially when the talkers share the same, a similar, or a different language background. Through methodical research, the study revealed that both non-standard and non-native English listeners were more adept at perceiving and adapting to accents matching their own, but not to similar or different ones. Notably, this adaptation was not only immediate but also persisted over a 24-hour period, suggesting intriguing implications for language learning and communication.

3. Li Yanyu, Khattab Ghada and White Laurence: Incremental Cue Training: A Study of Lexical Tone Learning by Non-Tonal Listeners

This presentation offered an innovative perspective on how incremental cue training could aid in lexical tone learning for non-tonal language speakers. The findings suggest that employing exaggerated contrasts in pitch movements during the initial stages of training can significantly improve the learners’ ability to discern tonal differences, leading to comparable end-stage performance with conventional training methods. 

Poster Presentations

1. Kelly Niamh: Interactions of Lexical Stress, Vowel Length, and Pharyngealisation in Palestinian Arabic  

Niamh’s research filled a crucial gap in the understanding of lexical stress in Palestinian Arabic. By analyzing the acoustic correlates of lexical stress, she shed light on the nuanced interactions among stress, phonemic length, and pharyngealization, offering a comprehensive phonetic description of stress patterns in this Arabic variety.

2. Zhang Cong, Lai Catherine, Napoleão de Souza Ricardo, Turk Alice and Bögel Tina: Language Redundancy Effects on Fundamental Frequency (f0): A Preliminary Controlled Study

This study investigated the effects of language redundancy on fundamental frequency (f0), supporting the Smooth Signal Redundancy Hypothesis. Their controlled experiments revealed that language redundancy could indeed affect f0, potentially adding another layer to our understanding of prosodic structure.

3. Dallak Abdulrahman, Khattab Ghada and Al-Tamimi Jalal: Obstruent Voicing and Laryngeal Feature in Arabic

This paper delved into the intricate relationship between voice onset time (VOT) and fundamental frequency (f0) in Jazani Arabic, proposing that f0 perturbation is predictable from VOT patterns. This lends further evidence to the theory of Laryngeal Realism and offers new insights into phonological representation.

4. Krug Andreas, Khattab Ghada and White Laurence: The Effects of Accent Familiarity on Narrative Recall in Noise 

This study asked whether accent familiarity impacts narrative recall, particularly when listeners are exposed to different accents. Findings indicated a ‘familiarity benefit’ for Tyneside listeners, extending evidence for the impact of accent familiarity on language perception.

Our group’s robust contributions across diverse areas in phonetics and phonology were met with great interest and sparked important academic discussions, leaving a significant footprint on the advancement of the field.

Reflection and Future Directions in Phonetics and Phonology

The congress served as an eye-opening experience that showcased the incredible diversity and depth of current research in phonetics and phonology. With hundreds of insightful studies, the event drew a roadmap for the future of these fields. While each study was a piece of a larger puzzle, the six keynote lectures stood out as beacons guiding the way forward.

Technological Advancements and Precision

Research in phonetics and phonology cannot stand alone; it requires the support of powerful tools to quantify sound features and visualize articulatory processes. Over the past few decades, these research tools have continually evolved, and numerous researchers have actively applied state-of-the-art technology in their studies. John Esling’s focus on the larynx as an articulator hinted at the importance of advanced imaging techniques, opening new avenues for understanding language development and linguistic diversity. Similarly, Paul Boersma’s discussion about the future of Praat emphasized how technology will revolutionize phonetic and phonological models. The convergence between these talks suggests a future where technology plays an increasingly central role in refining our analyses and providing greater computational power for simulating the nuances of speech.

Interdisciplinary Synergies

As the landscape of research in phonetics and phonology broadens, the call for multidisciplinary perspectives becomes increasingly urgent. Andrea Ravignani and Jane Stuart-Smith both underscored the importance of multidisciplinary approaches. While Ravignani seeks to combine ethology, psychology, neuroscience, and behavioral ecology to explore the origins of vocal rhythmicity, Stuart-Smith envisions a future where sociophonetic and social-articulatory data deepen our understanding of speech patterns related to identity, social class, and dialect. The common thread here is the necessity for interdisciplinary collaboration to answer complex questions that cannot be addressed by any single field alone.

Embracing Social Responsibility 

Perhaps the most poignant insights came from talks focusing on the social aspects of research. Titia Benders emphasized the crucial need for expanding child language acquisition research to lesser-studied languages, not only to understand their unique phonological elements but also to develop inclusive research methods. Pavel Trofimovich, on the other hand, urged for a socially responsible approach to second-language speech research, one that balances academic rigor with meaningful social impact. These talks collectively call for a future where research is not just theoretically robust but also socially responsible, reaching communities and languages that have been traditionally underrepresented.

Together, the keynotes painted a vibrant picture of a future that is technologically advanced, inherently interdisciplinary, and deeply rooted in social responsibility. It is clear that the next wave of research in phonetics and phonology will be as diverse and dynamic as the voices that make up human language itself.

Summary

The insights gained from this year’s congress serve as a valuable roadmap for the direction of phonetic and phonological research, areas that are central to the mission of our research group. From the pivotal role of technology in advancing research methodologies to the importance of interdisciplinary collaboration and social responsibility, the keynotes and studies presented offer a multifaceted view of the field’s future. As our group continues to explore new avenues of research, we are invigorated by the wealth of possibilities that these emerging trends present. They not only affirm the work we are currently undertaking but also challenge us to think about how we can contribute to these evolving dialogues in meaningful ways. Thank you for following along with our coverage of the congress, and stay tuned for upcoming research projects that will reflect these dynamic shifts in the field.


PoLaR Workshop with Byron Ahn: A Dive into Prosodic Analysis

On Tuesday, we were delighted to welcome Dr. Byron Ahn for an in-depth workshop on the use of PoLaR in analyzing prosodic features of speech. The three-hour session delved deep into the intricate layers of intonation.

The workshop began by laying the groundwork. While segments in English (like consonants and vowels) shape the words we say, it’s the suprasegmentals that color how we say them. Prosody, thus, captures the nuances in tone, pitch, duration, and emphasis that breathe life into our words.

What sets PoLaR apart in the realm of prosodic analysis? Its rise in popularity stems from its decompositional and transparent labels, making it easy to grasp and apply. Unlike other systems such as ToBI, PoLaR labels concentrate solely on the foundational elements of prosodic structure, namely boundaries and prominences. This results in richer phonetic detail about the pitch contour. Additionally, there’s no need for a language-specific phonological grammar with PoLaR, making it versatile and cross-linguistically applicable. Yet, it’s essential to note that PoLaR complements other labeling systems, like ToBI, rather than replacing them.

After providing the essential background introduction, Dr. Ahn guided us through the main tiers of PoLaR labelling: Prosodic Structure, Ranges, Pitch Turning Points, and Scaled Levels. The session also touched upon Advanced labels, which enable systematic tracking of a labeller’s theoretical analysis.

We’d like to express our deepest appreciation to Dr. Ahn for imparting his expertise and to all attendees for their active participation!

Recent Workshop Recap

We are delighted to update our community on the successful completion of our recent workshop this Monday, titled “Training Your First ASR Model: An Introduction to ASR in Linguistic Research”.

Workshop Overview:
The workshop was designed to delve deep into the foundational elements of Automatic Speech Recognition (ASR) and its classical architecture. Focusing on the application of ASR practices in linguistic research, participants were guided through a flexible workflow of automatic forced alignment, demonstrated using various research scenarios. The primary objective of this session was to help our attendees understand the core concepts of ASR and provide them with the necessary tools to utilize ASR in their linguistic research.

Speaker Spotlight:
Our workshop was led by Dr Chenzi Xu, a Postdoctoral Research Associate at the University of York. Dr. Xu’s current work revolves around the fascinating project “Person-specific Automatic Speaker Recognition.” Concurrently, she is concluding her doctorate at the University of Oxford. Dr. Xu’s remarkable achievements in the field have been recognized with the prestigious Leverhulme Early Career Fellowship, which she will commence at the University of Oxford next year.

Workshop Outline:

  1. Introduction to ASR
  2. Exploration of Statistical Speech Recognition
  3. The Role of ASR in Linguistic Research
    • Phonetics and Phonology
    • Transcribing Fieldwork Speech Data
    • Implementing Automatic Forced Alignment
    • Examining Allophone Distributions
  4. Hands-on Session 1: Practising Automatic Forced Alignment
  5. Hands-on Session 2: Adapting Existing Models
  6. Hands-on Session 3: Training Acoustic Models

We trust that our attendees found the workshop both informative and practical. We appreciate the active participation and look forward to the impact this knowledge will have on our individual linguistic research projects!

Recap Semester 1 2022/2023

Since September 2022, our research group has been engaging in rich discussion of various research projects taking place both within and outside our P&P team. Talks and presentations discussed this semester include:

  • SLS Seminar: Leveraging the Adaptable Speech Perception System for Dysarthria Remediation (Stephanie Borrie):
    Carol-Ann McConnellogue gave a run-through of Stephanie Borrie’s seminar for those members who could not make her presentation. Borrie’s work stresses the importance of taking the focus off speakers whose speech disorders impact their intelligibility and encouraging listeners to take more responsibility in communication. She found that listeners improved their perception of disordered speech by listening to, transcribing, and imitating it.
  • Working Memory and Speech Perception (Kai Alter):
    Kai presented a study about working memory and lexical access. The study stressed the importance of memory capacity as (a) it correlates with language comprehension in children and geriatric populations; (b) it relates to performance on school achievement tasks; and (c) it offers opportunities for intervention in SLI and DLD, with the causes of DLD still under debate. The study used high-frequency, monosyllabic concrete nouns, recorded and presented at different paces, to test the maximum number of items participants could recall. Unlike previous research, it found that the median memory capacity was 4 items rather than 7 plus or minus 2.
  • Retainability and Sustainability of Phonetic Discrimination Ability of an English Vowel Contrast by German infants (Hiromasa Kotera):
    Hiromasa presented his preliminary PhD research to the group. More information on his presentation can be found HERE.
  • Tianjin Mandarin Prosody (Cong Zhang):
    Cong came to the group with some questions and ideas regarding a presentation she planned to present at talks in Cambridge and Edinburgh. Topics she brought to the group included: (a) Is there intonation in Chinese?; (b) tone systems of Standard Mandarin vs. Tianjin Mandarin; (c) floating boundary tone; (d) functional Principal Component analysis (fPCA); (e) chanted call tune in Tianjin Mandarin; and (f) how she can turn data from recorded TV shows into a scientific study. The group provided Cong with feedback and advice.
  • Nonword Learning Project (Rory Turnbull):
    Rory presented his nonword learning project to the group so that he could receive general feedback on the overall idea, specific feedback on the pilot methodology, and recommendations for relevant literature and related topics. His main question to the group was why languages are the way they are, with a secondary question about what influences language structure. More information on his presentation can be found HERE.
  • Correlates of Stress in Palestinian Arabic (Niamh Kelly):
    Niamh discussed her study investigating the main indicators of lexical stress in Palestinian Arabic through acoustic analysis. After her presentation she opened the floor to members of the group for feedback and ideas on how to improve her methodology and analysis. More information on her presentation can be found HERE.
  • Pathological Speech Analysis (Shufei Duan):
    Shufei is a visiting scholar working with our group and is currently an Associate Professor at Taiyuan University of Technology in Shanxi Province, China. Shufei spoke to us about her project on the classification of imbalanced Chinese dysarthria data based on affective articulation. The project is building an emotional Chinese pronunciation dataset and an emotional pathological speech dataset, and conducts research on the processing of affective articulatory movement data for Chinese dysarthria. Her aim is to develop AI-assisted diagnosis and treatment so that diagnoses can be made remotely. Shufei’s work was of great interest to many group members and gave rise to several question-and-answer sessions. She has since attended some of our sessions in Semester 2, alongside her host Elly, to learn about the research occurring within our group.

A new subgroup with the lovely name PIG (Prosody Interest Group) has been set up this year for our members who have an interest in prosody and intonation. This group will involve practice talks, feedback on research ideas, and discussion of relevant papers.

Some members of our group have submitted abstracts to the International Congress of Phonetic Sciences (ICPhS). We wish them the best of luck in their submissions.  

We would also like to welcome three new members to our research group: Dr Cong Zhang, Niamh Kelly, and Hajar Moussa.

Effects of vowel and syllable position on laterals in bilingual speakers of English and Spanish

Date: 23/11/2022

Being interested in sound systems from the perspective of both production and perception, Niamh Kelly ran a project examining the production of /l/ sounds by bilingual speakers of English and Spanish from the El Paso region, to investigate the effects of language dominance on velarisation patterns. She also ran a pilot study looking at the production of the /z/ sound by a bilingual speaker across time. Here, she presented the outputs of this work to our group.

Part 1: A bilingual community on the US-Mexico border: what are they doing with their [l]?

Background information:

Transfer can occur in the productions of multilinguals, where one language influences the other and such effects can go in either direction between the L1 and L2. Sometimes, speakers are found to have productions that are intermediate between the two languages. In some regions, the whole community is bilingual, making it convenient to look at language transfer effects. 

Although similar to each other, the /l/ sounds in American English (AmE) and Spanish are not exactly matched. While the /l/ sound in Spanish is realised as a fronted (light/clear) /l/, in AmE it is more velarised overall, especially in codas.

The participants in this research lived in a city (El Paso) on the US-Mexico border, which is a bilingual community.

Research questions:

This research asks:

  1. To what extent do balanced bilingual speakers show transfer effects in laterals? That is, are there positional effects in just English or in both English and Spanish?
  2. What effects do vowel height and frontness/backness have on velarisation of laterals in the two languages?

Hypotheses:

Since these speakers are balanced bilinguals, they could be expected to keep their languages separate: English /l/ would be more velarised overall than Spanish /l/, and in English, coda /l/ would be more velarised than onset /l/, while in Spanish no such positional difference would occur.

Results:

From the analysis of the participants’ productions in both English and Spanish, the following results emerged:

  1. English and Spanish were significantly different in both positions. /l/ was more velarised in English than Spanish in both onset and coda position.
  2. /l/ was more velarised in codas than onsets in English while no such positional difference emerged in Spanish.

Next steps:

Research like this, and further work building on it, can help add to the description of non-mainstream varieties of English and of varieties used by multilinguals.

Part 2: A bilingual across time: what happened to his /z/? Acquisition of voicing in English /z/ by an L1 Norwegian speaker in a 25-year period.

Background information:

The English /s/ – /z/ contrast has been found to be difficult to acquire for L2 English speakers who do not have this contrast in their L1. Norwegian-accented English has a lack of voicing in /z/ since Norwegian does not have /z/.

Current study:

This study is a longitudinal study of the L2 English of the L1 Norwegian speaker Ole Gunnar Solskjær. Ten interviews from two time periods, 1996–8 and 2021, were examined, focusing on his English productions of /s/ and /z/ and how production patterns change over time. Three variables were coded: position in word (medial or final), preceding segment (voiced or voiceless), and morphemic status (morphemic, e.g., ‘goals’, vs stem, e.g., ‘please’).
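As a purely illustrative sketch (not the study’s own analysis code), the coding scheme above can be thought of as a table with one row per /s/ or /z/ token; the proportion of voiceless realisations per segment and timeframe is then the key descriptive figure reported in the results below. The toy rows and column names here are our own assumptions.

```python
# Illustrative sketch of the token coding described above (toy rows, not real data).
import pandas as pd

tokens = pd.DataFrame(
    [
        # segment, timeframe, position, preceding, morphemic status, voiceless?
        ("z", "early", "final",  "voiced",    "morphemic", True),
        ("z", "late",  "final",  "voiced",    "morphemic", False),
        ("s", "early", "medial", "voiceless", "stem",      True),
        ("z", "late",  "medial", "voiced",    "stem",      True),
        ("s", "late",  "final",  "voiceless", "morphemic", True),
    ],
    columns=["segment", "timeframe", "position", "preceding", "morphemic", "voiceless"],
)

# Proportion of voiceless realisations per segment and timeframe:
# the key comparison in the results reported below.
print(tokens.groupby(["segment", "timeframe"])["voiceless"].mean())
```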

Results:

  1. In the Early timeframe, 100% of /s/ tokens were voiceless and 93% of /z/ tokens were voiceless. In the Late timeframe, 98.5% of /s/ tokens were voiceless (no significant effect of timeframe) and 46% of /z/ tokens were voiceless (a significant effect of timeframe).
  2. Duration was longer when voiceless (supporting the auditory categorisation) but not affected by position in word.
  3. No difference based on morphemic status.

Discussion and next steps:

More exposure to and practice with the L2 has led to an increase in L2-like voicing productions. OGS is acquiring a new voicing contrast, but has not acquired it completely, as only about half of the /z/ tokens were voiced. More work can be done to look at other fricatives and also at the intermediate time frame.

General conclusion:

  1. Here we find transfer of L1 phonetic and phonological patterns to L2 at the individual level, which can continue even after years of exposure and use. 
  2. It also occurs on a larger scale when a community is bilingual.
  3. It is important for linguists to describe non-mainstream varieties. 

Nonword Learning Project

Date: 14/11/2022

Rory Turnbull gave us a talk on his research on what influences the phonological structure of the words in a language. 

The train of thought:

The talk started by narrowing down the research questions: from the big question of why languages are the way they are, to the more specific question of what influences language structure and makes languages have the words that they have and not other words. His current research asks: what influences the phonological structure of the words in a language? While the typical answer to this question is phonotactics, his response is that ‘some’ functional pressures may also affect phonological structure.

Prior work:

Rory’s prior work suggests that natural languages have unexpectedly smooth phonological networks, where each word is a node and a link exists between two words if they are phonological neighbours (i.e., they differ only by the deletion, insertion, or substitution of a single phoneme). This means that some words are alone in the network while others have loads of neighbours. Based on these findings, he proposed that ‘extreme’ words (unusually clumpy or unusually sparse in the lexicon) are harder to learn and harder to retain than non-extreme words.
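To make the network idea concrete, here is a minimal sketch, using a toy word list of our own rather than Rory’s materials, of how such a phonological network can be built: each word is a node, and an edge links two words whenever one can be turned into the other by a single phoneme deletion, insertion, or substitution.

```python
# Toy sketch of a phonological neighbourhood network.
# Words are represented as tuples of phoneme symbols; two words are neighbours
# if they differ by exactly one deletion, insertion, or substitution.

def is_neighbour(a, b):
    """True if phoneme strings a and b differ by exactly one edit."""
    if a == b or abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):                       # substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    return any(short == long_[:i] + long_[i + 1:]   # deletion / insertion
               for i in range(len(long_)))

lexicon = {
    "cat": ("k", "ae", "t"),
    "bat": ("b", "ae", "t"),
    "cab": ("k", "ae", "b"),
    "at":  ("ae", "t"),
    "dog": ("d", "o", "g"),
}

# Build the network as an adjacency list and count each word's neighbours.
network = {w: [v for v in lexicon
               if v != w and is_neighbour(lexicon[w], lexicon[v])]
           for w in lexicon}

for word, neighbours in network.items():
    print(f"{word}: {len(neighbours)} neighbour(s) -> {neighbours}")
```

On this toy lexicon, ‘cat’ is a well-connected node while ‘dog’ has no neighbours at all; in a real lexicon, the same neighbour counts feed into the clumpiness and sparseness measures mentioned above.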

Pilot planning:

A pilot study is planned which aims to test nonword learning by native British English participants. At the end of this session, our group provided feedback on the experimental design and recommendations for literature and related topics.

Recap Semester 2 2021/2022

From January 2022 to May 2022, our research group continued to be engaged in projects from Semester 1 and arranged new workshops to discuss topics of interest amongst our team members. The following is a summary of what we covered this semester:

  • Accent and Social Justice

Since the beginning of this academic year, we have focused on the theme ‘Accent and Social Justice’, reviewed several related articles, and had Melissa Baese-Berk from the University of Oregon share her and her colleagues’ recent research with us. We organised and held an interdisciplinary workshop on accent, communication, and social justice in March of this semester, which was very successful. We were honoured to have presenters from both within and outside our research group share their research and opinions. More information can be found in this blog post.

  • Many Speech Analyses

One of our main discussion topics in Semester 2 has been the Many Speech Analyses project we signed up for at the end of last semester. This project aims to compare the approaches different researchers take to answer the same research question using the same dataset. The general research question is: ‘Do speakers phonetically modulate utterances to signal atypical word combinations?’. We scheduled fortnightly meetings for this project. We started by reviewing other studies to help us plan a suitable analysis and decided to measure the timing of utterances to answer the research question. We imported the sound files into the MFA (Montreal Forced Aligner) for forced alignment, and the results were distributed to members for crossed hand-correction. Rory Turnbull, our project leader who is also a member of the P&P research group, guided us in extracting the timing of articulation of certain vowels. After analysing the dataset and submitting our report, we took a few weeks to review the reports from other researchers and research groups. Some peer analyses involved research methods or tools unfamiliar to us, allowing us to expand our knowledge outside our expertise. These included:

  • Forced alignment and inter-rater reliability in Praat

During a couple of weekly meetings, Caitlin Halfacre and Rory ran forced alignment in the Montreal Forced Aligner and demonstrated how to hand-correct the output in Praat, covering tier setting, labelling, calibration of the initial phone, and so on. Group members teamed up to help each other and troubleshoot problems together. Bruce Wang wrote Praat code to sample and measure the agreement of each TextGrid from the crossed hand-correction; a toy sketch of this kind of boundary-agreement check is given below. The inter-rater reliability among our group members turned out to be quite strong.
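For illustration, a boundary-agreement check can be sketched as follows. This is our own toy example rather than Bruce’s Praat script, and it assumes the phone boundary times from each labeller have already been read out of their hand-corrected TextGrids (for instance with a Praat script) into plain lists of times in seconds.

```python
# Toy sketch of an inter-rater boundary-agreement check between two labellers.
# Boundary times (in seconds) are assumed to have been extracted from each
# labeller's hand-corrected TextGrid beforehand.

def boundary_agreement(times_a, times_b, tolerance=0.02):
    """Proportion of labeller A's boundaries that labeller B placed
    within `tolerance` seconds (20 ms by default)."""
    matched = sum(
        any(abs(t_a - t_b) <= tolerance for t_b in times_b)
        for t_a in times_a
    )
    return matched / len(times_a)

labeller_a = [0.000, 0.135, 0.210, 0.342, 0.480, 0.655]
labeller_b = [0.000, 0.139, 0.215, 0.361, 0.482, 0.512]

print(f"Agreement within 20 ms: {boundary_agreement(labeller_a, labeller_b):.2f}")
```

With the toy times above, five of the six boundaries match within 20 ms, giving an agreement of 0.83.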

  • Praat Phonetic Analysis 

After checking the correctness and reliability of the phone alignment, Rory led two sessions demonstrating how to extract specific labels and measure the timing of utterances by coding in Praat.
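The original sessions used Praat scripting; as a rough Python analogue, the sketch below extracts vowel labels and durations from a force-aligned TextGrid. It assumes the third-party `textgrid` package and a phone tier named "phones"; the file name, tier name, and vowel label set are all placeholders to adapt to your own data.

```python
# Sketch of extracting vowel labels and durations from a force-aligned TextGrid.
# Assumes the third-party `textgrid` package (pip install textgrid) and a phone
# tier named "phones"; adjust the path, tier name, and vowel set as needed.
import textgrid

VOWELS = {"AA", "AE", "AH", "EH", "IY", "UW"}   # placeholder ARPABET vowel set

tg = textgrid.TextGrid.fromFile("speaker01.TextGrid")
phone_tier = tg.getFirst("phones")

for interval in phone_tier:
    label = interval.mark.strip().rstrip("012")  # drop ARPABET stress digits
    if label in VOWELS:
        duration_ms = (interval.maxTime - interval.minTime) * 1000
        print(f"{label}\t{interval.minTime:.3f}\t{duration_ms:.1f} ms")
```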

  • Digital Signal Processing (DSP)

When we reviewed other researchers’ reports, we found certain research methods that were unfamiliar to us, such as DSP. We used one session as an introduction to these techniques. During this session, we watched a video to recap the anatomy of sound perception, discussed the anatomy of the cochlea, and talked about the acoustic versus auditory differences between two tones that are 100 Hz apart, as well as the gammatone filterbank. However, without a well-established background in neurolinguistics, it was still difficult for us to fully understand what the results of one peer-reviewed report meant.
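One concrete way to see the acoustic-versus-auditory point about the two tones is to convert frequencies to the ERB-rate scale of Glasberg and Moore (1990), which approximates the spacing of auditory filters along the cochlea: the same 100 Hz step covers far more auditory ground at low frequencies than at high ones. The short sketch below illustrates this; the choice of example frequencies is ours.

```python
# The same 100 Hz step is a large auditory step at low frequencies but a tiny
# one at high frequencies. ERB-rate follows Glasberg & Moore (1990):
#   ERB-rate(f) = 21.4 * log10(0.00437 * f + 1)
import math

def erb_rate(f_hz):
    """Convert a frequency in Hz to ERB-rate."""
    return 21.4 * math.log10(0.00437 * f_hz + 1)

for low, high in [(100, 200), (4000, 4100)]:
    delta = erb_rate(high) - erb_rate(low)
    print(f"{low} Hz vs {high} Hz: {delta:.2f} ERBs apart")
```

The 100–200 Hz pair comes out roughly 2.5 ERBs apart, while the 4000–4100 Hz pair differs by only about 0.2 ERBs, which is essentially why equal acoustic spacing does not mean equal auditory spacing.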

To conclude, we successfully ran the ‘Accent and Social Justice’ workshop and completed the Many Speech Analyses project together this semester, and gained much research knowledge and relevant expertise from the experience. We expect to explore more exciting topics and themes in the future and to keep updating and publicising our work here.

Accent and Social Justice Workshop

Our research theme for 2021/22 has been “Accent and Social Justice”. We have read and reviewed literature on accent processing and perception, and discussed the prejudices towards certain accents and the injustices their speakers may experience.

In order to spread awareness about accent and social justice, and the research our group has undertaken, we have organised an interdisciplinary workshop on accent, communication and social justice, which will be held on 30/03/2022.

The workshop will consist of presentations from members of our research group and academics outside of the field of Phonetics and Phonology who have an interest and knowledge in our research theme. Topics which will be discussed include self-descriptions of UK-based English accents; constructing native speakerism in Chinese community schooling; and racist nativism in England’s education policy, to name a few. The abstracts for each presentation can be found here. There will be time for discussion after each presentation to give attendees the chance to ask questions, exchange ideas, and explore the topics further.

The workshop will begin at 9am on Wednesday 30th March. It will be a hybrid event meaning people can attend in person or via Zoom. For those attending in person, the workshop will be held in room G.21/22 of the Devonshire Building at Newcastle University. Lunch will begin at 12pm and refreshments will be provided. This will be another opportunity for attendees to mingle and discuss the topics explored. The workshop will end at 1pm. The full workshop programme can be viewed here.

If you are interested in attending our workshop, you can sign up using this link.

We look forward to seeing you and hope this workshop enables you to delve into rich discussion around a very important issue.

Melissa Baese-Berk’s Talk

Date: 06/12/2021

Prof. Baese-Berk and her colleagues and students are highly productive in research on speech processing as well as accent perception and adaptation. In her talk, she walked us through their new work on adaptation to unfamiliar speech and the perception of non-native speech (see Cheng et al., 2021).

The main issues examined in their studies include:

  1. The difficulties in communication brought about by linguistic properties of non-native speech, the language backgrounds of talkers and listeners, and certain cognitive factors (McLaughlin, Baese-Berk, Bent, Borrie & Van Engen, 2018)
  2. The conditions under which accent-general adaptation might occur (Afghani, Baese-Berk & Waddell, under review at the time of the talk)

Their main findings are:

  1. Listeners may make the most of different resources to facilitate their speech processing; some cognitive factors, like vocabulary and working memory, correlate with listening challenges; and environmental noise can degrade rhythm perception (McLaughlin et al., 2018).
  2. Incentives may be an answer for better performance in speech processing: incentivised listeners start processing better and learn more quickly than those who are not incentivised (Afghani, Baese-Berk, & Waddell, under review at the time of the talk).

Speech perception is more difficult when listening to:

  • Dysarthric speech
  • Speech-in-noise
  • Time-compressed speech
  • Synthetic speech

However, practice listening in these conditions may improve speech processing for listeners.

The issues to be looked at next:

  1. The role of memory in comprehension
  2. The similarities and differences between adaptation to a talker and adaptation to an accent
  3. The interaction between adaptation and physical and linguistic context

Reflections from our Research Group:

  • This is a very relevant topic to what is currently being discussed in our research group around accent and social justice. Our group is hosting an event in Spring 2022 which will discuss some of the topics addressed. 
  • It directed us to other literature surrounding the topic.
  • A good way to network with others interested in the topic.
  • I found it very interesting that incentivising participants can make a significant difference in how they process speech.

Melissa’s Twitter: @uospplab