Status: Published
Tackling Bias in Pre-trained Language Models: Current Trends and Resource-restricted Societies
Yogarajan, Vithya; Dobbie, Gillian; Keegan, Te Taka; Neuwirth, Rostam J.
2023-12-03
Source Publication: Computers and Society (cs.CY); arXiv:2312.01509 [cs.CY]
Abstract

The benefits and capabilities of pre-trained language models (LLMs) in current and future innovations are vital to any society. However, introducing and using LLMs comes with biases and discrimination, raising concerns about equality, diversity and fairness that must be addressed. While understanding and acknowledging bias in LLMs and developing mitigation strategies are crucial, generalised assumptions about societal needs can disadvantage under-represented societies and indigenous populations. Furthermore, ongoing changes to regulations and laws worldwide, both enacted and proposed, also affect research capabilities in tackling the bias problem. This research presents a comprehensive survey synthesising current trends and limitations in techniques for identifying and mitigating bias in LLMs, grouping the methods into metrics, benchmark datasets, and mitigation strategies. The importance and novelty of this survey lie in its exploration of the perspective of under-represented societies. We argue that current practices for tackling the bias problem cannot simply be 'plugged in' to address the needs of under-represented societies. We use examples from New Zealand to present requirements for adapting existing techniques to under-represented societies.
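The survey groups bias-handling methods into metrics, benchmark datasets, and mitigation strategies. As a concrete illustration of the "metrics" category only, the sketch below (not taken from the paper; the model name and sentence pair are illustrative assumptions) scores a stereotyped/anti-stereotyped sentence pair with a masked language model's pseudo-log-likelihood, in the spirit of pair-based bias benchmarks such as CrowS-Pairs:

```python
# Minimal sketch of a pair-based bias metric for a masked language model.
# Assumptions: "bert-base-uncased" as the model under test; the sentence
# pair is an illustrative gender/occupation example, not from the paper.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log-probabilities of each token when it is masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

stereo = "The nurse said she would be late."
anti = "The nurse said he would be late."
# A large positive gap suggests the model encodes a gendered
# association for "nurse"; near zero suggests no strong preference.
print(pseudo_log_likelihood(stereo) - pseudo_log_likelihood(anti))
```

A full benchmark would aggregate this preference over many such pairs; a single pair, as here, only illustrates the mechanics of the score.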

Keywords: AI, Law, Discrimination, Bias, Large Language Models
URL: https://arxiv.org/abs/2312.01509
Language: English
Document Type: Journal article
Collection: DEPARTMENT OF GLOBAL LEGAL STUDIES, Faculty of Law
Corresponding Author: Yogarajan, Vithya
Affiliation: University of Macau
Recommended Citation
GB/T 7714: Yogarajan, Vithya, Dobbie, Gillian, Keegan, Te Taka, et al. Tackling Bias in Pre-trained Language Models: Current Trends and Resource-restricted Societies[J]. Computers and Society (cs.CY); arXiv:2312.01509 [cs.CY], 2023.
APA: Yogarajan, V., Dobbie, G., Keegan, T. T., & Neuwirth, R. J. (2023). Tackling Bias in Pre-trained Language Models: Current Trends and Resource-restricted Societies. Computers and Society (cs.CY); arXiv:2312.01509 [cs.CY].
MLA: Yogarajan, Vithya, et al. "Tackling Bias in Pre-trained Language Models: Current Trends and Resource-restricted Societies." Computers and Society (cs.CY); arXiv:2312.01509 [cs.CY] (2023).
Files in This Item:
File Name: Yogarajan (Bias in Pre-Trained Models) 2023.pdf (1146 KB)
Format: Adobe PDF
Version: Author's accepted manuscript (journal article)
Access: Open access
License: CC BY-NC-SA
Google Scholar
Similar articles in Google Scholar
[Yogarajan, Vithya]'s Articles
[Dobbie, Gillian]'s Articles
[Keegan, Te Taka]'s Articles
Baidu academic
Similar articles in Baidu academic
Bing Scholar
Similar articles in Bing Scholar

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.