Podcast on YouTube: Link to YouTube
Podcast on Spotify: Link to Spotify
Benvenuti, Bienvenue, Bienvenidos, Croeso and Welcome.
Hi, I’m Juliet. Join me on my language learning journey and discover my thoughts on different aspects of language learning with the A Language Learning Tale Podcast. Today I’m talking about…
When I Wouldn’t Use a Large Language Model
Recently, I talked about using ChatGPT as a conversational tool and how I thought that was probably an acceptable use of a Large Language Model. However, my thoughts on using it to learn grammar are somewhat different. Why? Isn’t it just the same as speaking? Well, no, it isn’t.
Now, I’m not talking about translating a phrase or a word using Google Translate or Reverso Context, or even Microsoft Word’s language tools. Yes, those involve grammar, and what we call AI to some extent, but you’re not asking them questions. That’s more like proofreading.
What I mean is something like, “Please explain to me how to use direct and indirect pronouns in Italian”.
Why?
Let’s start with the most obvious reason why I would be hesitant to use a Large Language Model for grammar instruction. Large language models lie. Quite frequently. This is also known as hallucinating. Did you hear about the case where someone in a law firm used ChatGPT to look up relevant cases and it completely made one up, which the employee took at face value? Yeah, not good. And there are many other examples.
Imagine if the Large Language Model made up a whole spiel about a grammar point and you took it as true, when it was nothing of the kind. Or even if it just made up one small element of what it told you. That could be embarrassing. You might not get caught out, because most people don’t double-check information. Just look at all the fake news that gets shared on social media. Actually, don’t. That’s a very bad idea.
But that’s not my biggest problem with it. That has to do with scanning people’s blogs, videos, whatever for this information and then gaily spitting it out to all who ask, without payment, without consent from the original creators. These models have scraped that information from a tonne of websites online without permission. Those creators spent countless hours of work creating detailed, specific and correct information about grammar.
But this is not the whole story. If people keep using Large Language Models to get their answers to these kinds of questions, rather than looking at the individual websites, the actual experts will eventually disappear.
Imagine that you’re a language teacher and you have a blog online. You have a tonne of information on it and you’ve been building it up for years. Until now, it’s been very popular and you’ve been able to sell courses and personal instruction. But you notice the number of new visitors to the blog falling dramatically, and it gets to the point where your business just isn’t viable anymore.
Obviously, this could happen at any time for a whole host of reasons. But one big reason could be that everyone is just asking a Large Language Model to use all that stolen data to give them the answer to their grammar query in seconds, without them having to search for it. You know that kind of thing is happening, whether it’s to do with languages or not. You may have done this yourself.
Imagine this happening to hundreds of thousands of blog creators across all sorts of topics. What happens if they all give up putting out useful, correct and relevant information, because there’s no longer any point?
The Large Language Models will start copying themselves. Why? Because there will be some enterprising people, I’m trying to be kind here, who start taking the info from the Large Language Model and putting it into their own blog posts, hoping for some ad revenue. And the Large Language Model, being the greedy info monster it is, might then ingest that, so it’s effectively being taught … refined by its own, not necessarily one hundred percent accurate, information.
Yes, this dystopian future of the Internet only being Large Language Model generated could happen, but not if we continue to support individual blogs and websites.
Now, I don’t have a problem with someone using a Large Language Model to ask for a list of websites that could help them with a grammar point, you know, like a regular search engine, but maybe a little quicker. Maybe more targeted. All I’m asking is that you think before you take the easy route. What kind of Internet do you want in the future? And yes, it is down to you. Make your choices wisely.
That’s all for this season of the A Language Learning Tale Podcast. I’ll be back in January. In the meantime, check out the A Language Learning Tale YouTube channel for additional, non-podcast content.
Ciao, salut, adiós, hwyl and bye for now.