Forums - How accurate is ChatGPT?

パン
Level: 369

Sometimes I use ChatGPT to break down difficult sentences and to understand the nuances of synonyms. For sentence breakdowns, I think the translation and explanation are pretty good, and it translates much better than translator apps. But when it comes to the nuances of synonyms, I'm never sure whether ChatGPT explains them correctly or just makes things up sometimes. I find it time-consuming to search the internet for explanations, so I tend to just ask ChatGPT. Does anyone know if I can trust it? Or is it better if I just use HiNative?

1
3 months ago

I can't say with certainty whether it is better or not, as I have not used it much for translation work (or at all, for that matter), so I won't say it is bad. I do know, however, that AI has a tendency to hallucinate information from time to time.

Long story short: I would not use ChatGPT alone. I would not trust it 100%, and would rather use it as a rough indicator or a springboard. If ChatGPT can mess up legal advice in English from English sources, chances are it can mess up Japanese to English as well.

2
3 months ago
パン
Level: 369

I see! Thank you. I often compare ChatGPT's answers with HiNative's (or other platforms') answers, because I trust those more.

1
3 months ago
ペルセフォネ
Level: 134

My comment is similar to ヂワルワ ヂーキー's point: AI tends to hallucinate information. While it may sometimes provide accurate translations, that is not guaranteed. It can be used for proofreading if you have nobody to help you, but I personally wouldn't trust it to help me learn a language, especially one as difficult as Japanese.

As a side note on how dangerous ChatGPT's hallucinations can be: a lawyer was sanctioned after ChatGPT generated fabricated cases. Granted, the lawyer then tried to cover up the fact that the cases were fabricated, but the fact of the matter is that ChatGPT started the chain of events that led to a lawyer being sanctioned.

-クラウディア

3
3 months ago

ChatGPT is only as good as its training data. If it has seen your text before, it will give you the translation it was trained on. But if there is something novel in there, it might skip over it or substitute something completely unrelated. It's best not to trust it.
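
One crude way to see this for yourself: ask the same nuance question several times and compare the answers. If the model is reproducing something it actually saw in training, the replies tend to agree; if it is improvising, they tend to scatter. A minimal sketch, assuming the openai Python package (v1+) with an API key in the environment; the model name and the prompt are just placeholders:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "In one sentence, what is the nuance difference between 嬉しい and 楽しい?"

# Sample the same question five times at a high temperature,
# so that guessing shows up as disagreement between runs.
answers = []
for _ in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,
    )
    answers.append(resp.choices[0].message.content.strip())

for i, answer in enumerate(answers, 1):
    print(f"--- answer {i} ---\n{answer}\n")

Five answers telling the same story are still only weak evidence, since a model can repeat the same wrong claim consistently, but five different stories are a clear red flag.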

5
3 months ago
パン
Level: 369

That's interesting! Thank you all for your answers. Do you think AIs like ChatGPT will someday get smart enough that you can trust them at least 90% of the time? Or is that unlikely to happen?

0
3 months ago

Think of it like this: our AIs are rather like humans. If they learn from good sources, that is great; if the source is wrong, they will get things wrong too. And they might try to gaslight you, as if pretending to be right will help them. In that sense they are more human than the AI in movies. If it were me, I would either not use it, or treat it like another human: give it a sentence and double-check the answer. In the end it is up to you, though.

1
3 months ago
ロウ (Row)
Level: 476

ChatGPT is made to give nice-sounding answers more than to be correct. If people tell it something completely wrong, it will repeat it. I personally don't ask it Japanese questions because I don't trust it to know the answer or to give an accurate explanation. Since it doesn't even get math right, I doubt its language knowledge. You can ask it, but as everyone said, remember that it's like asking somebody else who is also learning: you have to take what it says with a grain of salt.

3
3 months ago
マイコー
Level: 283

No one else has pointed this out, but I feel that the biggest problem with ChatGPT is the *tone* of its messaging. We humans have a thousand different ways of expressing doubt or certainty in language, and LLMs use none of them. Everything is presented with wording that suggests "what I'm saying is a fact."

I'm not suggesting you do this just for this one thing, but if you ever listen to the Discord learning events that I teach, I (just like most people on this earth) use hedging language all over the place when I am not 100% sure about what I am teaching or saying.
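
If you do keep using it, one partial workaround is to demand the hedging yourself in a system prompt. A rough sketch, again assuming the openai Python package; the model name and the label wording are placeholders, and a model's self-reported confidence is not guaranteed to be calibrated:

from openai import OpenAI

client = OpenAI()

# Ask the model to label its own claims. This does not make the labels
# trustworthy -- self-reported certainty is often miscalibrated -- but it
# at least puts hedging language back into the answer.
SYSTEM = (
    "You are helping a Japanese learner. Mark every claim [solid] if it is "
    "basic textbook grammar, or [unsure] if it is a nuance judgment that a "
    "native speaker might dispute."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Why is it 私は学生です and not 私が学生です?"},
    ],
)
print(resp.choices[0].message.content)

Treat the [unsure] tags as reminders to double-check elsewhere, not as real probabilities.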

5
3 months ago

Yes, ChatGPT speaks with authority that it doesn’t have.

1
3 months ago
> That's interesting! Thank you all for your answers. Do you think AIs like ChatGPT will someday get smart enough that you can trust them at least 90% of the time? Or is that unlikely to happen?

As it happens, a new paper came out just today: LLMs Will Always Hallucinate, and We Need to Live With This.

Key points from the paper:

  • Hallucinations in LLMs are not just mistakes but an inherent property. They arise from undecidable problems in the training and usage process and cannot be fully eliminated through architectural improvements or data cleaning.
  • The authors use computability theory and Gödel's incompleteness theorems to explain hallucinations. They argue that the structure of LLMs inherently means that some inputs will cause the model to generate false or nonsensical information.
  • The complete elimination of hallucinations is impossible because of undecidable problems at the foundations of LLMs. No amount of tweaking or fact-checking can fully solve this; it is a fundamental limitation of the current LLM approach. (A compressed sketch of this kind of argument follows below.)
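
For flavor, here is the standard reduction behind claims like this, in my own words rather than the paper's exact proof: a perfect fact-checker would have to decide the halting problem. Suppose a total computable verifier $V$ existed:

\[
V(s) =
\begin{cases}
1 & \text{if sentence } s \text{ is true,}\\
0 & \text{otherwise.}
\end{cases}
\]

For any program $P$ and input $x$, "$P$ halts on input $x$" is a factual sentence, so

\[
H(P, x) := V(\text{``} P \text{ halts on input } x \text{''})
\]

would be a total computable decider for the halting problem, contradicting Turing's 1936 result. So no bolt-on fact-checking layer can be right on every input; whether that rules out "90% trust" in practice is a separate, empirical question.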
4
3 months ago
ペルセフォネ
Level: 134
> As it happens, a new paper came out just today: LLMs Will Always Hallucinate, and We Need to Live With This.

Thank you. Very much so. I will read this soon.

-クラウディア

1
3 months ago