AI cannot push the frontier forward

"People don't notice the big changes happening; incremental progress adds up to a lot over a long time."

Disagree on it being incremental. I need to write something about this.

My argument, in short: LLMs can synthesize anything "inside the frontier", e.g. writing a poem (there are enough examples to work with to make a novel poem) or implementing something that's well documented / has loads of examples. But they can't push the frontier forward, e.g. they can't synthesize a large codebase, because there are no examples to work from. That may change in future, but for it to change, hypothesis forming AND testing AND a self-iterating feedback mechanism would all have to be part of the agentic loop.

Self-researching agents: not a thing without the ability to synthesize beyond what's in the training data.

Our training data comes from our environment, and training is continuous.

There won't be an "AI takeover".
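To make the "hypothesis forming, testing, self-iterating feedback" loop concrete, here is a minimal toy sketch. Everything in it is an illustrative assumption, not a real system: the environment is a hidden integer, a hypothesis is a guess, and feedback narrows the search. Note the catch: the hypothesis space is fixed up front, which is exactly the "inside the frontier" limitation, since the loop can never invent a candidate it wasn't built to enumerate.

```python
def environment(guess, secret=137):
    """Oracle the agent can only probe by testing hypotheses.
    (Toy stand-in: 'secret' plays the role of ground truth.)"""
    if guess < secret:
        return "higher"
    if guess > secret:
        return "lower"
    return "correct"

def agentic_loop(lo=0, hi=1000):
    """Hypothesize -> test -> iterate until the environment confirms."""
    history = []
    while lo <= hi:
        hypothesis = (lo + hi) // 2          # 1. form a hypothesis
        feedback = environment(hypothesis)   # 2. test it against the environment
        history.append((hypothesis, feedback))
        if feedback == "correct":
            return hypothesis, history
        if feedback == "higher":
            lo = hypothesis + 1              # 3. feed the result back into
        else:                                #    the next hypothesis
            hi = hypothesis - 1
    return None, history

result, history = agentic_loop()
print(result, len(history))  # converges in at most ~10 probes of [0, 1000]
```

The loop converges fast because the search space was handed to it; a genuinely self-researching agent would have to generate the hypothesis space itself, beyond its training data, which is the missing piece argued above.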