New Explanation for Why ChatGPT Seems Dumber: The World Has Been Changed by AI and No Longer Matches Its Training Data
There is a new academic explanation for why ChatGPT seems to be getting dumber. A study from the University of California, Santa Cruz, finds that large models perform significantly better on tasks whose datasets were released before the model's training data cutoff than on tasks released after it.
The paper attributes this to "task contamination": large models have already seen many examples of a task during training, which gives the false impression that the model can solve the task with genuine zero-shot or few-shot capability.
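To make the comparison concrete, here is a minimal sketch, in Python, of the kind of before/after-cutoff grouping the study describes. All dataset names, dates, and accuracy numbers below are hypothetical placeholders, not the study's actual data.

```python
# Group benchmark results by whether the dataset was released before or
# after the model's training cutoff, then compare average zero-shot
# accuracy. Every value here is an invented placeholder for illustration.
from datetime import date
from statistics import mean

TRAINING_CUTOFF = date(2021, 9, 1)  # hypothetical cutoff for some model

# (dataset name, public release date, measured zero-shot accuracy)
results = [
    ("benchmark_a", date(2019, 5, 1), 0.81),
    ("benchmark_b", date(2020, 11, 1), 0.78),
    ("benchmark_c", date(2022, 3, 1), 0.55),
    ("benchmark_d", date(2023, 1, 1), 0.52),
]

before = [acc for _, released, acc in results if released < TRAINING_CUTOFF]
after = [acc for _, released, acc in results if released >= TRAINING_CUTOFF]

print(f"mean accuracy, pre-cutoff datasets:  {mean(before):.2f}")
print(f"mean accuracy, post-cutoff datasets: {mean(after):.2f}")
# A large gap suggests that "zero-shot" scores on older benchmarks were
# inflated by task contamination rather than genuine generalization.
```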
Other researchers point to a complementary explanation: a deployed large model has frozen parameters, while people keep proposing new tasks, i.e. the input distribution keeps shifting. If the model cannot continually adapt to this shift, the result looks like a slow degradation of capability.
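This effect can be illustrated with a toy simulation: a "model" with frozen behavior is evaluated against a query stream in which the share of unseen tasks grows over time. The model, task IDs, and drift schedule below are invented for illustration and are not from the cited work.

```python
# Toy simulation: frozen model, drifting input distribution.
import random

random.seed(0)

# The frozen model answers correctly only for tasks seen before the cutoff.
known_tasks = set(range(100))

def frozen_model(task_id: int) -> bool:
    """Returns True if the frozen model handles the task correctly."""
    return task_id in known_tasks

# Over successive months, a growing share of queries are genuinely new
# tasks (IDs 100-199) that the model never saw.
for month, novel_share in enumerate([0.0, 0.2, 0.4, 0.6, 0.8], start=1):
    queries = [
        random.randrange(100, 200) if random.random() < novel_share
        else random.randrange(100)
        for _ in range(1000)
    ]
    accuracy = sum(frozen_model(q) for q in queries) / len(queries)
    print(f"month {month}: novel-task share={novel_share:.0%}, "
          f"accuracy={accuracy:.0%}")
# Measured accuracy falls as the distribution shifts toward unseen tasks,
# even though the model itself never changed.
```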
Taken together, the two explanations tell a similar story: people believe the AI is answering each question from scratch, when in fact it has seen most common tasks during training. As users pose more genuinely new questions over time, its performance falls off.