barsoap,

That paper is yet to be peer reviewed or released.

Never doing either (releasing, as in submitting to a journal) isn't uncommon in maths, physics, and CS. Not to say that it won't be released, but it's not a proper standard to measure papers by.

I think you are jumping to conclusions with that statement. How much can you dilute the data before it breaks again?

Quoth:

If each linear model is instead fit to the generated targets of all the preceding linear models, i.e. data accumulate, then the test squared error has a finite upper bound, independent of the number of iterations. This suggests that data accumulation might be a robust solution for mitigating model collapse.

Emphasis on "finite upper bound, independent of the number of iterations" by doing nothing more than keeping the non-synthetic data around each time you ingest new synthetic data. This is an empirical study so of course it's not proof you'll have to wait for theorists to have their turn for that one, but it's darn convincing and should henceforth be the null hypothesis.

Btw, did you know that no one ever proved (or at least hadn't, last I checked) that reversing, determinising, reversing, and determinising a DFA again minimises it? Not proven, yet widely accepted as true. Crazy, isn't it? But wait, no, people actually proved it, on a napkin; it's just not interesting enough to write a paper about.
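
For the curious, here's a minimal sketch of that reverse-determinise-reverse-determinise trick (Brzozowski's construction) in Python. The DFA representation is just something I picked for illustration: a 5-tuple of states, alphabet, a transition dict mapping (state, symbol) to a state, a start state, and a set of accepting states.

```python
def reverse(states, alphabet, delta, start, accepting):
    """Reverse a DFA into an NFA: flip every edge, swap start and accept roles."""
    rdelta = {}
    for (s, a), t in delta.items():
        rdelta.setdefault((t, a), set()).add(s)
    return states, alphabet, rdelta, set(accepting), {start}


def determinize(states, alphabet, ndelta, starts, accepting):
    # Subset construction, keeping only the reachable subsets.
    # `states` is unused; it's accepted so the 5-tuples round-trip.
    start = frozenset(starts)
    seen, todo, delta = {start}, [start], {}
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(t for s in S for t in ndelta.get((s, a), set()))
            delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    accept = {S for S in seen if S & accepting}
    return seen, alphabet, delta, start, accept


def brzozowski(dfa):
    # reverse -> determinize -> reverse -> determinize; the result is minimal
    # among complete DFAs (the empty subset plays the role of the dead state).
    return determinize(*reverse(*determinize(*reverse(*dfa))))


# Example: a 3-state DFA for "strings over {a, b} ending in a";
# states 0 and 2 are equivalent, so the minimal DFA has 2 states.
dfa = (
    {0, 1, 2},
    {"a", "b"},
    {(0, "a"): 1, (0, "b"): 2, (1, "a"): 1, (1, "b"): 2,
     (2, "a"): 1, (2, "b"): 2},
    0,
    {1},
)
states, _, delta, start, accepting = brzozowski(dfa)
print(len(states))  # 2
```

The napkin-sized catch is the running time: both determinisations are subset constructions, so the worst case is exponential, which is why Hopcroft's algorithm is the usual choice in practice even though this one is far prettier.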
