TIL: Perfect Fit

Oh boy…

Vincent Warmerdam koaning.io
2022-04-22

This TIL is mostly meant as a reminder of this tweet by Sebastian Raschka. Credit goes to him for sharing it; I just want to make sure that I don’t forget it.

Here’s the story: there’s a paper, and it comes with a pretty big claim.

We develop a closed-form equation to compute probably good optimal scale factors. Classification is performed at the CPU level orders of magnitude faster than other methods. We report results on AFHQ dataset, Four Shapes, MNIST and CIFAR10 achieving 100% accuracy on all tasks.

Here’s the thing, though: CIFAR10 and MNIST are known to contain label errors. So how is 100% accuracy even possible?
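To put a number on that: if even a small fraction of the test labels is wrong, a classifier that predicts the true class for every image still can’t reach 100% against those labels. Here’s a back-of-the-envelope sketch; the error rate in it is a made-up placeholder, not a measured figure.

```python
# Back-of-the-envelope: an accuracy ceiling under label errors.
# The error rate below is a hypothetical placeholder, not a measured value.
n_test = 10_000           # size of the CIFAR10 test set
label_error_rate = 0.005  # assume 0.5% of test labels are wrong (placeholder)

n_wrong = int(n_test * label_error_rate)
ceiling = (n_test - n_wrong) / n_test

print(f"{n_wrong} mislabeled test images")
print(f"ceiling for a classifier that is always right: {ceiling:.1%}")
# 50 mislabeled test images
# ceiling for a classifier that is always right: 99.5%
```

So a reported 100% doesn’t just mean the model is perfect; it means the model agrees with every wrong label too.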

It seems there’s a data leak in the accompanying code, which helps explain what happened.
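To be clear, the sketch below is a generic illustration of what leakage does to a score, not the actual bug in the paper’s code. If the test samples end up inside the data you fit on, a 1-nearest-neighbour classifier will “achieve” 100% accuracy simply by matching each test point to itself.

```python
# A minimal sketch of train/test leakage (not the paper's actual bug):
# letting test samples into the fit makes 1-NN trivially "perfect".
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Correct evaluation: fit on the train split, score on the test split.
clean = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(f"clean: {clean.score(X_test, y_test):.1%}")

# Leaky evaluation: the test set sneaks into the fit.
leaky = KNeighborsClassifier(n_neighbors=1).fit(
    np.vstack([X_train, X_test]), np.concatenate([y_train, y_test])
)
print(f"leaky: {leaky.score(X_test, y_test):.1%}")  # exactly 100%
```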

There’s also a thread on Reddit that discusses the paper, where it’s suggested that it was written in part by GPT-3. This excerpt in particular:

Ultimately, we do classification on CIFAR10 using truncated signatures of order 2, image size 32 × 32 and RGB color samples. We compute an element-wise mean representative of each class using 10 train samples per class and tune the weights using Definition 4 with a validation set of 100 train instances (without augmentation) per class. We then compute scores on the CIFAR10 test set of 10,000 samples and achieve 100% accuracy.
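To get a feel for what that recipe amounts to, here’s a rough sketch of an element-wise class mean classifier; it uses raw pixels on sklearn’s digits dataset instead of the paper’s truncated signatures, and it skips the tuned scale factors. With only 10 training samples per class, this kind of classifier lands nowhere near 100%.

```python
# A rough sketch of the recipe in the excerpt: an element-wise mean
# "representative" per class from 10 train samples, then nearest-mean
# scoring. Raw pixels stand in for the paper's truncated signatures.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classes = np.unique(y_train)
# Element-wise mean representative per class, from 10 samples each.
means = np.stack([X_train[y_train == c][:10].mean(axis=0) for c in classes])

# Score each test sample against each class mean; closest mean wins.
dists = np.linalg.norm(X_test[:, None, :] - means[None, :, :], axis=2)
preds = classes[dists.argmin(axis=1)]
print(f"nearest-mean accuracy: {(preds == y_test).mean():.1%}")
```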

I’ll be keeping an eye on this because it feels like there might be a story unraveling soon. Either way, it’s a good reminder not to blindly trust everything you find on arXiv.