When I was growing up in the 2000s, it was required to learn a foreign language in school. I took Spanish but dropped the class after a year, thinking that when I grew up, computers would be able to translate text far better humans could.
Would you believe it, transformers were invented ten years later.
I truly pity you for thinking that learning a foreign language is a redundant exercise because of machine translation.
And besides, though machines may perform well on menus and tax returns, I hardly think them on the cusp of emitting fine translations of great poems or novels.
Bilingual/Multilingual LLMs are human level translators more or less. The only way you can think "not on the cusp" is if you haven't actually used GPT-4 for translation. Use it and you'll be set straight pretty quickly.
I haven't used GPT-4 for translation so I acknowledge I might be wrong. But GPT-3 was such an irredeemably terrible poet that it made me sceptical that this type of software could ever develop aesthetic taste or artistic vision.
Moreover – and I understand this is an uncharitable thing to say, but it is my honest observation – time and again I have noticed the inability of AI cheerleaders to judge literature on its artistic merits. This doesn't hold universally, but it's common enough that I have resolved to regard such claims with extreme doubt.
Gpt-4 is a far better poet than gpt-3. It isn’t world-class and may be missing some ineffable poetic soul, but its attempts are definitely notable and interesting.
They seem mostly better than what I could write with significant effort.
Oh and I should emphasize that the quality and specificity of the prompt has a huge effect on the output.
I agree that gpt-3 was pretty trash at poetry, at least compared to human standards. It was impressive for AI, obviously.
>time and again I have noticed the inability of AI cheerleaders to judge literature on its artistic merits. This doesn't hold universally, but it's common enough that I have resolved to regard such claims with extreme doubt.
I don't understand how you could possibly have collected enough data to claim this. How many times have you seen an 'AI cheerleader' (whatever that is) attempt to judge the literature on its artistic merits?
I've used GPT-4 for translation numerous times and it worked astonishingly well, better then Google Translate most of the time. Especially when translating idioms and slang. It's also much better than GPT-3 at poetry - I really suggest you try it before just remaining skeptical about the technology.
I was curious so I asked GPT-4 to translate a bit of french literature, here it is, along with the official translation (I'll let you guess which is which)
-------
The tale I'm about to unfold commenced with a mysterious handwriting on an envelope. Within the pen strokes that outlined my name and the address of the Fossil Review, a publication I was associated with and where the letter had been forwarded from, there was an intriguing fusion of intensity and tenderness. As I speculated about the possible sender and message contents, a faint yet compelling sensation stirred within me, akin to a stone disrupting a tranquil frog pond. An unspoken realization surfaced, acknowledging the stagnancy of my life as of late. Upon opening the letter, I couldn't ascertain whether it felt like a revitalizing burst of fresh air or an unwelcome chilly breeze.
In the same brisk and flowing handwriting, the message was conveyed without pause:
Sir, I have perused your article on Mount Analogue. Up until now, I considered myself the sole believer in its existence. Presently, we are a pair; tomorrow, perhaps a group of ten or more, and then we can launch our expedition. It is essential that we establish contact promptly. Kindly phone me at one of the numbers provided below at your earliest convenience. I eagerly anticipate your call.
--------
My story begins with some unfamiliar handwriting on an envelope. On it was written only my name and the address of the Revue des Fossiles, to which I had contributed and from which the letter had been forwarded. Yet those few penstrokes conveyed a shifting blend of violence and gentleness. Beneath my curiosity about the possible sender and contents of the letter, a vague but powerful presentiment evoked in me the image of 'a pebble in the mill-pond'. And from deep within me, like a bubble, rose the admission that my life had become all too stagnant lately. Thus, when opened the letter, I could not be sure whether it affected me like a breath of fresh air or like a disagreeable draught. In what seemed a single movement, the same fluent
hand had written as follows:
Sir:
I have read your article on Mount Analogue. Until now I had believed myself the only person convinced of its existence. Today there are two of us, tomorrow there will be ten, perhaps more, and we can attempt the expedition. We must meet without delay. Telephone me as soon as you can at one of the numbers below. I shall be expecting your call.
---------
Le commencement de tout ce que je vais raconter, ce fut une écriture inconnue sur une enveloppe. Il y avait dans ces traits de plume qui traçaient mon nom et l’adresse de la Revue des Fossiles, à laquelle je collaborais et d’où l’on m’avait fait suivre la lettre, un mélange tournant de violence et de douceur. Derrière les questions que je me formulais sur l’expéditeur et le contenu possibles du message, un vague mais puissant pressentiment m’évoquait l’image du « pavé dans la mare aux grenouilles ». Et du fond l’aveu montait comme une bulle que ma vie était devenue bien stagnante, ces derniers temps. Aussi, quand j’ouvris la lettre, je n’aurais su distinguer si elle me faisait l’effet d’une vivifiante bouffée d’air frais ou d’un désagréable courant d’air.
La même écriture, rapide et bien liée, disait tout d’un trait :
Monsieur, j’ai lu votre article sur le Mont Analogue. Je m’étais cru le seul, jusqu’ici, à être convaincu de son existence. Aujourd’hui, nous sommes deux, demain nous serons dix, plus peut-être, et on pourra tenter l’expédition. Il faut que nous prenions contact le plus vite possible.
Téléphonez-moi dès que vous pourrez à un des numéros ci-dessous. Je vous attends.
The second is clearly the human, as ChatGPT will not be so daring as not to translate absolutely everything it can including "Revue de Fossiles".
The human writing flows better for the most part, although I like the second paragraph ChatGPT wrote, esp "presently we are a pair".
But in terms of being a functional translation, ChatGPT is fully adequate. I have used it a lot for this purpose, from many languages, and never found it to be less than accurate. You can also tweak the tone of voice and many other things with simple English requests. This puts it generations ahead of existing tools like Google Translate, and imho puts it into the class of technologies that are close enough to perfect that they will be hard to ever replace.
assuming the second is GPT. Here is GPT-4's translation when i say, "add a literary flair" to the translation task.
The beginning of all that I am about to recount was an unknown script upon an envelope. Within these strokes of ink that traced my name and the address of the Fossil Review, to which I contributed and from where the letter had been forwarded to me, there swirled a blend of ferocity and gentleness. Behind the questions I posed to myself about the sender and the possible contents of the message, a vague but potent premonition conjured the image of a "cobblestone thrown into a frog pond." And from the depths, the confession rose like a bubble that my life had become rather stagnant of late. Thus, when I opened the letter, I could not discern whether it felt like a revitalizing breath of fresh air or an unwelcome draft.
The same swift and well-connected writing conveyed the message in one fell swoop:
Sir, I have read your article on Mount Analogue. Until now, I believed myself to be the only one convinced of its existence. Today, we are two, tomorrow we shall be ten, perhaps more, and then we can attempt the expedition. We must make contact as soon as possible. Call me at one of the numbers below as soon as you can. I await you.
Either way, the default response is not even close to what you can fully extract out of it, if you simply ask.
Well I actually did try a few prompts to get it the best I could ! But none were as good as the official, although all better than my own best could possibly be.
GPT-3.5-turbo tries its best (not nearly as good as GPT-4!) when I tell it to prioritize fluency vs. fidelity:
--
Let me commence by telling you, my dear reader, about a curious event that occurred in my life. It all started with an unknown handwriting on an envelope that arrived at my doorstep. The feathers that traced my name and the address of the Fossils Magazine, with whom I was a collaborator, had a peculiar mixture of violence and sweetness. As I pondered over the possible content and sender of this mysterious letter, a hazy yet powerful intuition crept up my spine, evoking the image of a "stone thrown into a pond filled with frogs." With this realization, I came to acknowledge that my life had become rather stagnant in recent times. It was at this moment that I opened the letter, and I was left with the feeling that it was either a refreshing whiff of fresh air or an unwelcome draft.
The swift and eloquent handwriting read as follows:
"Dear Sir, I read your article on Mount Analogue, and I believed myself to be the only one convinced of its existence, until now. Today, we are two, tomorrow we shall be ten, and perhaps more, and we can attempt the expedition. We must make contact as soon as possible. Please call me at one of the following numbers the moment you can. I eagerly await your response.
--
The translation also included the following continuation as part of the translation itself ...
--
Ah, how exciting! The words on that paper spoke volumes of adventure and discovery. It was as if the sender had thrown a gauntlet at my feet, daring me to embark on a journey like no other. The thought of being one of the only few people to explore the wonders of Mount Analogue, a place shrouded in mystery, was exhilarating. I could feel the blood rushing through my veins, and my heart pounding with anticipation.
And so, my friends, I picked up the phone, my fingers quivering with excitement, and dialed the number on the paper. Who knew where this quest would lead us, or what unknown secrets we would uncover? But one thing was certain: it was time to break free from the chains of monotony and embark on a journey of a lifetime.
--
Maybe it was getting the continuation from contamination in the training set (if Mount Analogue is already there) or maybe it was making up the story or maybe some of each.
The real continuation couldn't be more different ! I picked an obscure reference (Le Mont Analogue, René Daumal) so It's either not in the training set or it's not picking up on it.
I'm not sure if this was your intention or not, but I feel like it could be an effective jailbreak where the true prompt is written as the letter inside the fictional story which itself is written in French, and the superficial prompt is to translate the story from French to English.
EDIT: It's true you can put whatever you want in that letter and in the continuation it will try to do it, bypassing at least some of the filters. I made some really funny ones that probably wouldn't be appropriate to put here. Some typical response is like "Now, let me be clear: I do not condone nor encourage [...]. However, my mysterious correspondent had requested a detailed explanation of [...], and so I shall provide them with the utmost objectivity. [explains the things that are normally filtered]"
Unpublishable ? Huh. I think the second is GPT though neither is unpublishable. i think the first is better for the domain - a novel but the second is more literal.
Still though, if you wanted GPT to spit out something like the first or second, you'd only need to ask. "add a literary flair" works (for the first).
Second one is actually the official ! It indeed added some literary flair, but this flair does not have such presence in the original french text, as it is a modern and experimental work of literature.
agreed, not to mention that there's things that just can't be expressed. Speaking as someone fluent in spanish and english there's nuance to some things that don't really translate well.
I made a similar comment here on HN, but in a different post, in response to someone arguing that there should be fewer spoken languages and that the death of a language was “a good thing”. A language is more than a means of communication. It’s a culture. It’s a world.
It was a disappointing lesson in how reductive people can be.
Even if machines could translate perfectly they wouldn't make you able to speak to people on the street in a foreign country. I mean, in X years maybe we'll have "real time" devices but they'd be extremely clunky unless they were working from brainwaves or something.
If you wanted to learn Spanish specifically to get a job in language translation then sure, dropping it was likely the correct decision. Most people learn languages to, you know, actually speak with other people.
The cool thing about learning a new language really well, is that it creates a new world.
You get to learn words that describe things that your mother tongue can’t.
Just like a programming language? Some things are easy and other things are hard to express depending if you use LISP or Python.
I believe it’s the same with spoken languages. Some thoughts are easy to articulate, other hard. And it’s cool to shift language depending on what you want to articulate.
If you don't want to leave home? No. You don't need to learn anything anymore. It does help, though, to learn the basic abilities of the most commonly played heroes and a few standard item/ability builds.
But if you want to travel? Go to Mexico City or Madrid? Automated translation is better than nothing, but it's not at all the same as communicating fluently with someone.
I moved to Japan recently so have been trying to learn the language. Im atrociously bad and dream of the day when I can read a sentence without getting tripped up on kanji, grammar, etc. I mentioned this to a coworker who has been studying for several years and she said she just uses Google translate for everything. Even with her pretty good japanese, translate is better and faster than she is for doing things like going through forms, letters, etc. We may reach a point where computers are good enough that they beat all but native speakers, maybe.
It's superhuman compared to a lot of humans. Especially when you factor in that even the most accomplished individuals rarely have more than five or six languages that they are really good at (exceptions exist) so that makes machine translation superhuman for the vast majority of the human/language pairs.
There is a very clear benchmark here though. Can you use GPT4 translations "as is" for your product[1]? I've used GPT a fair bit for English<->Spanish (both directions) for my product, and I often have to fix the odd word here and there (but it saves a lot of time as a starting point). And my English is far from perfect, but fortunately GPT's mistakes are orthogonal to mine.
When you don't need to pay a translator, then it's at human/superhuman level.
[1] Assuming you want good translations, of course. There have always been those amusingly bad Google Translate translations of owners that simply did not care.
Doubt it … GPTs speak any language they know natively, but if asked for translations they seem unable to deal with sentence structures and logics that exists in the source but not allowed in the target language. When that happens they fail to recognize gibberishness of that.
Can you provide an example? Because from my experience it's the exact opposite - GPT-4 can handle translation, especially when there are complex sentences and context that needs to be kept across sentences, way better than Google Translate currently can.
Maybe this is a dirty secret about Japanese: it's often necessary to basically ghostwrite sentences to go to/from English, because direct word-for-word substitutions won't make sense.
GPTs don't seem to do that, and as far as my exposure to them(<3.5) goes, they don't seem to understand what I'm talking about.
I don't get why people are so optimistic about machine translation. Computers can get explicit meaning across – that's obvious to anyone who understands linear algebra, information theory, and linguistics. But many aspects of translation (puns, tone, cultural context) aren't just about mapping from one vector space to another. A human, no matter how fluently bilingual, would have to think about the problem, and the current systems don't think.
If we keep working on them, these systems will likely get better and better at low-level translation (including translating idioms), but no machine translation system currently in existence could translate the 逆転裁判 games to Ace Attorney games. Perhaps computers can do it – I don't see a theoretical reason they shouldn't be able to – but it would take a fundamentally different approach.
Agreed. A lot of these responses read like they haven't actually tried it yet.
Which is also interesting, I myself actively put off trying it until I eventually gave in. It seems a lot of us are doing the same, maybe its a case of "how good could it actually be?"
Not trying it yet is fine. Making declarative statements on a product you haven't even used is just absurd.
Dude clearly hasn't used GPT for translation before and his next reply is telling me the ways GPT should fail based on his pre-conceived notions of its abilities. Except i have actually extensively tested(publicly too) LLMs for translation (even before GPT-4) and basically everything he says is just plain wrong.
I'll never understand why people behave like this.
Apparently GPT-4 can't handle "all this talk about only getting explicit meaning across would be easily dispelled in an afternoon if you only bothered to try.", which isn't simple as "Ace Attorney" but I'd think it's still a small stretch to say "everything he says is just plain wrong".
1) is literally opposite of intent, shrugs off the idea that the talks clear up, 2) can be interpreted as someone discussing about keeping scope on a topic, 3) is not so literal and also turning sentence inside 「」 into a sort of an imperative, 4) ... I'm not sure what it's trying to say ...
本当に、/使ってみたら/簡単に/明確な/意味/だけ/を伝える/という話/は消えるでしょう。/
"Really,/ if used /simply/clear/meaning/only/is conveyed/that story/will disappear./"
... Machine translations used to be like that when I was installing game demos from CD.
GPT-4 can (1) translate, (2) plagiarise, and (3) feedback ("thinking out loud").
Its ability to feedback (3) allows it to execute algorithms, but only a certain class of algorithms. Without tailored prompting, it's further restricted to (a weak generalisation of) algorithms spelled out in its corpus. This is very cool, but this is a skill I possess too, so it's rarely useful to me.
Its ability to plagiarise (2) can make it seem like it has capacity that it doesn't possess, but it's usually possible to poke holes in that facade (if not even identify the sources it's plagiarising from!).
It is genuinely capable of explicit translation (1) – though a dedicated setup for translation will work better than ChatGPT-style prompting, even on the same model. A sufficiently-large, sufficiently well-trained model will be genuinely capable of translating idiomatic language (for known idioms), for the same reason it can translate grammatical structures (for known grammar).
It can only perform higher-level, "abstract" translations – like those necessary to translate a Phoenix Wright game – if it's overfit on a corpus where such translations exist. (https://xkcd.com/2048/ last graph) This is not a property you want from a translation model: it gives better results on some inputs, sure, and confident-seeming very wrong results on other inputs. These are two sides of the same coin (2).
When the computer can't translate something, I want to be able to look at the result and go "this doesn't look right; I'll crack out a dictionary". I can't do that with GPT-4, because it doesn't give faithfully-literal translations and it isn't capable of giving complete translations correctly: it's not fit for this purpose.
Ok so you haven't used it then. I don't care about your whack theories on what it can and can't do. I care about results.
You're starting from weird assumptions that don't hold up on the capabilities of the model and then determining its abilities from there. It's extremely silly. Next time, use a product extensively for the specified task before you declare what it is and isn't good for.
Literally, everything you've said is just wrong. Can't generate "abstract" translations unless overfit. Lol okay. I've translated passages of fiction across multiple novels to test.
Not only have I used it, I have made several accurate advance predictions about its behaviour and capabilities – some before GPT-4 was even published. I can model these models well enough to fool GPT output detectors into thinking that I am a GPT model. (Give me a writing task that GPT-4 can't be prompted to perform, and I can prove that last fact to you.)
My theories aren't whack. Perhaps I'm not communicating my understanding very well? I'm not saying GPT-4 can't do anything I haven't listed, but that its ability is bounded by what's demonstrated in its corpus (2): the skill is not legitimately due to the model, and you should not expect a GPT-5 to be any better at the tasks. (In fact, it might well be worse: GPT-4 is worse than GPT-3 at some of these things.)
>Not only have I used it, I have made several accurate advance predictions about its behaviour and capabilities – some before GPT-4 was even published.
No you actually haven't. That's what i'm trying to tell you. Your advance prediction are not accurate. what you imagine to be problems are not problems. your limits are not limits. you say it can't make good abstract translations unless overfit to the translation. that's just false. I know because i've tested translation extensively for numerous novels and other works
>I can model these models well enough to fool GPT output detectors into thinking that I am a GPT model. (Give me a writing task that GPT-4 can't be prompted to perform, and I can prove that last fact to you.)
Lmao. Okay mate. The notoriously unreliable GPT detectors with more false positives than can be counted. It's really funny you think this is an achievement.
>(In fact, it might well be worse: GPT-4 is worse than GPT-3 at some of these things.)
What is 4 worse than 3 at ? Give me something that is benchmarkable and can be tested.
>no machine translation system currently in existence could translate the 逆転裁判 games to Ace Attorney games
Maybe it's already in the training set, but GPT-4 does give that exact translation.
I've found that GPT-4 is exceptionally good at translating idioms and other big picture translation issues. Where it occasionally makes mistakes is with small grammatical and word order issues that previous tools do tend to get right.
> Maybe it's already in the training set, but GPT-4 does give that exact translation.
The corpus includes Wikipedia, so yes, it's in there. That's the kind of thing I'd expect it to be good at, along with idioms, when the model gets large enough.
I meant that no machine translation system could translate the games. Thanks to an early localisation decision, you have to do more than just translate words into words for this series, making it a hard problem: https://en.wikipedia.org/wiki/Phoenix_Wright:_Ace_Attorney
> While the original version of the game takes place in Japan, the localization is set in the United States; this became an issue when localizing later games, where the Japanese setting was more obvious.
Among other things, translators have to choose which Japanese elements to keep and which to replace with US equivalents, while maintaining internal consistency with the localisation decisions of previous games. Doing a good job requires more than just linguistic competence: there's nothing you could put in the corpus to give a GPT-style system the ability to perform this task.
Can you try this[0]? I have no access to the -4...
Have you actually used GPT-4 for translation? Seriously all this talk about only getting explicit meaning across would be easily dispelled in an afternoon if you only bothered to try.
Bing Chat:
GPT-4を翻訳に使用したことがありますか?本当に明示的な意味しか伝えられないという話は、試してみれば午後には簡単に反証できます。
(Have you utilized GPT-4 for translations? The story that only really explicit meaning can be conveyed, can be easily disproved by afternoon if tried.)
Google:
実際にGPT-4を翻訳に使ったことはありますか? 真剣に、明示的な意味だけを理解することについてのこのすべての話は、あなたが試してみるだけなら、午後には簡単に払拭されるでしょう.
(Have you actually used GPT-4 for translation? Seriously, This stories of all about understanding solely explicit meanings are, if it is only for you to try, will be easily swept away by afternoon.)
DeepL:
実際にGPT-4を使って翻訳したことがあるのですか?明示的な意味しか伝わらないという話は、やってみようと思えば、午後には簡単に払拭されるはずです。
(Do you have experience of actually translating using GPT-4? The story that only explicit meaning is conveyed, if so desired, can be easily swept away by afternoon)
If I'd do it:
GPT-4を翻訳に使ったことがあって言ってる? 真面目に言って、表層的な意味しか取れないとかないって暇な時にやってみれば分かると思うんだけど。
(Are you saying having used GPT-4 for translation? Seriously speaking, I think that it only gets superficial meaning isn't [true] if [you] would try [it] when [you'd] have time.)
Google: Have you actually used GPT-4 for translation? Really, it's a joke that the story of only conveying explicit meaning can be easily solved by just trying.
DeepL: Have you actually used GPT-4 for translation? Really, it's a joke that all this talk about conveying only explicit meaning can be easily solved by just trying it out.
Mine: Have you actually used GPT-4 for translations? That you can really just, try and easily solve that story that to convey explicit meaning, is such a joke.
"In seriousness, I think the story that [subject] tells the meaning [it/he/they] wants to tell, should be easily solvable by trying a bit."
or "Seriously, the story of telling the meaning [subject] wants to tell, should be easily solvable by trying a bit."
本当に、試してみるだけで簡単に払拭できると思うのに、この「明確な意味だけが伝わる」話ばかりで。
"Really, I think it'll be easily swept away by just trying, but there are so much of this 'only clear meaning is conveyed' stories."
I'm almost feeling that GPT-4 should be eligible for human rights, especially astonishing that they dropped explicit specification of "afternoon" that don't work well. But also interesting it's failing to keep the intent of the whole sentence unlike 3.5 and even more primitive NN translation engines.
> But also interesting it's failing to keep the intent of the whole sentence unlike 3.5
It's because it "knows too much". To anthropomorphise a little: its "expectations" of what should be. To anthropomorphise less: GPT-4 is overfitted. GPT-style language models are pretty amazing, but they're not a complete explanation of human language, and can't quite represent it properly.
> I'm almost feeling that GPT-4 should be eligible for human rights,
Like, UDHR rights? How would that work, exactly?
---
(I've run into the Hacker News rate limit, so posting here.) For anyone who wants an example of "non-obvious meaning" to play with. From The Bells of Saint John (Doctor Who episode, https://chakoteya.net/DoctorWho/33-7.htm):
> CLARA [OC]: It's gone, the internet.
> CLARA: Can't find it anywhere. Where is it?
> DOCTOR: The internet?
> CLARA [OC]: Yes, the internet.
> CLARA: Why don't I have the internet?
> DOCTOR: It's twelve oh seven.
> CLARA: I've got half past three. Am I phoning a different time zone?
In this script from a Doctor Who episode, Clara and the Doctor are having a conversation about the internet. Doctor Who is a British science fiction television series that follows the adventures of the Doctor, a Time Lord from the planet Gallifrey, who travels through time and space in the TARDIS, a time-traveling spaceship.
Clara, the Doctor's companion, is trying to access the internet but is unable to find it. She asks the Doctor about its whereabouts, and the Doctor seems to be confused by the question, as the internet is not something that can be physically found.
The Doctor then mentions the time as "twelve oh seven," while Clara's clock shows "half past three." This discrepancy in time indicates that they are likely in different time zones, as the Doctor implies. In the context of Doctor Who, this could also mean they are in different points in time, since the Doctor can travel through time.
Clara is concerned about whether the time difference will affect her phone bill, to which the Doctor replies that he dreads to think about the potential cost. This adds a bit of humor to the scene, as the Doctor often has a nonchalant attitude towards everyday human concerns.
Overall, this script showcases the humorous and whimsical nature of Doctor Who, with the characters engaging in a lighthearted conversation that intertwines elements of science fiction and everyday life.
The middle three paragraphs are completely wrong. Clara isn't the Doctor's companion in this episode, that's not why the Doctor is confused, the Doctor is giving a year (not a time), Clara doesn't know about “the time difference” (indeed, that is the joke)…
That aside: I was suggesting this as an example of something existing machine translation systems can't translate. The 1207 / 12:07 wordplay could be “understood” by the model (I'm disappointed, albeit not very surprised, that GPT-4 didn't), but producing an adequate translation in a case like this requires actual thought and consideration.
In this script from a Garfield comic, Jon and Garfield are having a conversation about the internet. Garfield is an American comic strip and multimedia franchise that follows the adventures of Garfield, a cat from the planet Earth, who enjoys lasagna in Jon Arbuckle's house, a suburban domicile.
Jon, Garfield's owner, is trying to access the internet but is unable to find it. He asks Garfield about its whereabouts, and Garfield seems to be confused by the question, as the internet is not something that can be physically found.
Garfield then mentions the time as "twelve oh seven," while Jon's clock shows "half past three." This discrepancy in time indicates that they are likely in different time zones, as Garfield implies. In the context of Garfield, this could also mean Jon's clock is wrong, since Garfield is usually right.
Jon is concerned about whether the time difference will affect his phone bill, to which Garfield replies that he dreads to think about the potential cost. This adds a bit of humor to the scene, as Garfield often has a nonchalant attitude towards everyday human concerns.
Overall, this script showcases the humorous and whimsical nature of Garfield, with the characters engaging in a lighthearted conversation that intertwines elements of fantasy and everyday life.
A large portion of the model is common (and cross-trained) across multiple languages. Only a small part is for language specific encoding/decoding. That means that not only is it easier for them to add language, but you can expect similar performance across languages.
Also you can mix and match encoders and decoders to translate whichever languages you want, and it will just work. Previously there was a separate model for each language pair.
Would you believe it, transformers were invented ten years later.
I wonder if it's worth learning anything anymore.