I suspect that the human graders were the biased ones, and that this automated test is more accurate. Schools frequently inflate test results when given the opportunity (especially when low results reflect poorly on the school).
How do students known to be fluent in English do on it? Do they pass reliably?
Edit: Here's a discussion of a similar phenomenon in the context of high-school graduation rates. Graduation rates regularly go up by a very large amount when standardized tests stop being required, but that's not because otherwise-qualified students were doing poorly on standardized tests.
It's possible for both things to be true. Human reviewers might be biased towards awarding higher scores, and the computer could be dog shit at scoring. I have no idea how this can meaningfully grade fluency. Fluency in a spoken language consists of vocabulary, grammar, and pronunciation.
I have seen plenty of very fluent people who speak with an extremely noticeable accent but are nonetheless perfectly comprehensible. Software is extremely likely to perform poorly at recognizing speech by non-native speakers and to fail individuals who are otherwise comprehensible. Because it won't even recognize the words, it's nearly entirely testing pronunciation, and then denying such students access to electives that would allow them to further their education.
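To make that concrete: if the score is derived from how well an automated recognizer matches the words a student says, accent-driven misrecognitions drag the score down even when vocabulary and grammar are flawless. Here's a minimal sketch of that failure mode (hypothetical, assuming a word-error-rate style scorer; we don't know the actual test's method):

```python
# Hypothetical sketch: a "fluency" score based on word error rate (WER),
# i.e. how closely an ASR transcript matches what was actually said.
# Not the real test's code -- an illustration of the failure mode.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# The student said a grammatical, well-formed sentence...
spoken = "the weather is beautiful today so we walked to the park"
# ...but the recognizer turns accented vowels into the wrong words.
recognized = "the whether is beautiful too day so we worked to the park"

score = 1.0 - word_error_rate(spoken, recognized)
print(f"'fluency' score: {score:.0%}")  # ~64%, despite flawless grammar and vocabulary
```

Three accent-shaped substitutions and one split word cost this hypothetical speaker roughly a third of the score, even though a human listener would have understood every word. The scorer is measuring the recognizer's ability to parse the accent, not the speaker's fluency.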
It's quite possible that you're right. I haven't been able to find any research that attempts to quantify how accurate the software is, and without that I can only speculate.
If I understand the article correctly, the system uses some kind of AI speech recognition to score how people speak. Talking to a computer is not a natural environment for people, and the scoring could easily be biased by accents. I doubt any automated scoring that isn't just multiple choice is accurate.
In my own experience as a fluent English speaker with a strong accent, modern voice-recognition systems have no problem with my accent. They're not perfect, but I expect they're more accurate than teachers, because teachers have motives other than accuracy.
My wife and her family (Hispanic; my wife is first generation) have a hell of a time getting Google to understand their requests, while it has no issue understanding mine, so I could see significant issues with the software misinterpreting students.
Interesting. A few people have told me that I enunciate more clearly than a native speaker, so if that's the case, my experience with speech-recognition systems won't be representative. That said, older speech-recognition systems did have trouble understanding me whereas newer ones don't, so I think there really has been improvement.
I tried to find data about how students fluent in English do on this test, but I wasn't able to. Comparing native English speakers to native Spanish speakers who have already learned English would be informative.
Huh. My wife's Filipino accent is pretty heavy and Google almost always understands her.