Bias, Subjectivity and Norm in Large Language Models
Abstract
This article reevaluates the concept of bias in Large Language Models, highlighting the inherent and varying nature of these biases and the complexity of adjusting them post hoc to meet legal and ethical standards. Acknowledging that biases reflect societal values, it argues for shifting the focus from the pursuit of bias-free models to greater transparency in filtering processes, tailored to specific use cases.