Since the release of ChatGPT and the discussion it generated in academia, I have been reading and doing research on Artificial Intelligence in different fields, including my own (English teaching and materials development). This research is light years away from being finished, but I guess some important ideas need to be brought up and discussed.
It all started when I asked GPT to write a short CELTA/Delta poem, just a foolish experiment, and I was sure nothing would come out of it. Silly me: three stanzas of four lines each, all perfectly rhyming. I tested it with plagiarism tools: original work. Then I got worried and decided to dive in, which I have been doing for some months now, not only as a teacher educator but also as a member of different communities. The most pressing conclusion is that it needs to be debated more. A lot more, in fact. Here's the poem generated by ChatGPT:
CELTA and DELTA, two courses of fame
For teaching English, they're known by name
CELTA for starters, with methods to teach
To non-native speakers, it's within reach
DELTA for veterans, a step up in game
For those with experience, seeking new aims
Research and theory, it's what you'll learn
To advance your career, it's your next turn
CELTA and DELTA, a duo so bright
For teachers of English, a guiding light
With these courses, you'll surely grow
And become the best, in the classroom you'll glow
Bearing in mind that the idea here is to provoke discussion rather than provide answers: in general terms, my first impression was that AI had come to add to knowledge and to how it is shared. With the advance of different AI technologies, however, here are some questions that I feel must be addressed with a critical eye:
1. Who does it favor (financially, (geo)politically, socially, philosophically)?
2. What power does it have in different fields, especially those in which knowledge and information play a key role? In that case, where and how do we draw ethical lines?
3. Should there be a limit to the use of AI? Should everyone have access to it?
4. Should it be 'monetizable'? How far should big corporations be allowed to integrate the tool into their own products to boost profits?
5. Is it controllable at all?
I have some tentative answers to questions 1-4 above, some of which are very incipient, I have to say. However, the recent resignation of Geoffrey Hinton (the 'godfather' of AI) from Google raises a mega red flag as far as question 5 goes. Hinton states he resigned to be able to speak freely about the potential dangers of AI and argues that AI should not be scaled up further until we understand if and how it can be controlled. He also consoles himself with the idea that, if he hadn't come up with it, someone else would have. This seems to carry a great deal of barely contained remorse, like saying, "OK, it killed lots of people, but if I hadn't invented the atomic bomb, someone else would have." Well, we do not know that for sure. Besides, is that a valid argument for having invented it?
I don't mean to say AI is the next atomic bomb (although you may well come to that conclusion just by reading Hinton's interview in the NY Times, linked below, and have some Matrix/Skynet feelings). But I am sure not even Hinton (like the inventors of the atomic bomb) knew what was (and is) in store for all of us with this breakthrough. In the areas I work in most (English teaching, teacher training and materials development), this is still an embryo, but a powerful and fast-growing one. A huge number of professionals have been using ChatGPT, for instance, although I am not sure this use is thoroughly (and ethically?) thought through beforehand. So, after talking to several teachers, trainers, and materials developers, here are some of my concerns:
1. How can teachers help AI to work in favor of learner development?
2. Will AI come to replace, in any way, part of the teaching (if not all of it)?
3. If so, when it comes to teaching (ELT and Bilingualism), will it bring no change at all (just perks)? Will it change how the teaching happens (and I am not necessarily talking about methods and approaches) and demand a change of heart from professionals in the area (in the sense that they have to step even further into technology)? Or will it bring about a complete rupture in terms of who does the teaching and what is effectively taught?
4. When training professionals, should trainers be worried about how trainees use this tool? Does it mean there will be a certain massification of digitally produced information as opposed to that coming from theory and experience? Should there be worries about plagiarism? Will it work to make trainees' lives easier or to make them more knowledgeable? To what extent should AI-based work be accepted in teacher-training courses? How can we help trainees personalize AI-generated information in a way that it is still their own work?
5. Finally, when developing materials, what role should AI play in the generation of content (not only in terms of how trustworthy the information is, but also in terms of who, or 'what', should generate it)? How can editors, critical readers, proofreaders, and all the other players make sure this content is produced, read, analyzed, and changed accordingly? Does it have an impact in terms of copyright?
Once more, I do have some answers to the questions above, however superficial they may be at this time. But one thing I have learned since I started reading about and studying the topic is that these answers tend to change, if not in their basic nature, then in the way they overlap with other answers, generating yet more questions. This all smells like a point of no return, and I guess this is one of those moments when more people should be discussing the topic, tossing around ideas, debating the concept. Before the bomb goes off.
Read more:
#artificialintelligence #ai #teaching #teachertraining #materialsdevelopment #ethics #criticalthinking #teacherdevelopment #education
(picture from Pixabay.com)