NOTE: This is the UNEDITED version of the article submitted to the Shenzhen Daily, which was published on August 14th, 2017.
Until recently, I was an English teacher at a middle school and taught many students at that level.
I believe that pronunciation is important, but I also appreciate that different students will have different ways of saying things, in the same way that Americans will say things differently from British people and Australians.
This is why I have always been concerned about the use of computers to judge the listening and speaking components of the English test of the zhongkao and gaokao exams. It is my firm belief that computers are only as good as the coding, or programming, that goes into them.
A friend asked me to have a look at the computer system that students use to practice their speaking and listening homework. For those who are not familiar, there is an app that students can use to record their listening and speaking homework; it gives them an instant score.
That particular day, I sat down with my friend and we looked at the program. It occurred to me that the programmers would have designated only certain accents and pronunciations as acceptable, and that other accents would result in a lower score. For the sections where students are required to create their own sentences, I suspected that specific key words would have to be triggered to earn a high score.
On that basis, I decided to test the system by recording my own voice, under my friend’s username, just to see what happened. She had already recorded her voice and got a good score. I created a nonsense story that any teacher would acknowledge was silly but still fell within the parameters of the pictures presented. Yet my score, as a native speaker, was lower than hers.
This is the inherent problem with having computers judging humans in this way – there are simply too many variables.
The reason I wrote this article is that I read about an Irish vet who wants to stay in my country of Australia. She is a native speaker of English and holds two degrees. She has been working as an equine, or horse, vet for two years as a skilled worker, but wanted to get permanent residency.
However, the Pearson Test of English (PTE) Academic determined that her oral English was not good enough, scoring her a 74, when a 79 is required by immigration authorities in Australia for permanent residency.
Pearson is the only provider, out of five, in Australia that uses voice recognition technology to test speaking ability, according to the Guardian. The fact that Pearson Asia Pacific had the gall to blame the immigration department, instead of committing to examining their software or technology, speaks to a wider problem.
I am certainly not a Luddite, and I am a big fan of technology on the whole. However, I believe that when it comes to making decisions that will affect people’s lives, humans are much better at it than machines.