There would still be a point to encouraging better predictions from public information through better modeling. We aren't always using the optimal models to predict. One example: LLMs are "just" predicting the next token given the public information of the tokens that came before, but they are considerably better at making that prediction than the models that came before them.
I've seen it do this too. I had it keeping a running tally over many turns, and occasionally it would say something like: "... bringing the total to 304.. 306, no 303. Haha, just kidding I know it's really 310," with the last number being the right one. I'm curious whether it's an organic behavior or a taught one. It could be self-learned through reinforcement learning, as a way to correct itself since it doesn't have access to a backspace key.
The default outputs are considerably shorter, even in thinking mode. Something that helped me get thinking mode back to an acceptable state was to switch to the Nerd personality and, in the traits customization setting, tell it to be complete and add extra relevant details. With those additions it compares favorably to o3 on my recent chat history and even improves on it in some cases. I prefer to scan a longer output than have the LLM guess what to omit. But I know many people have complained about verbosity, so I can understand why they may have moved to less verbiage.
It's likely that the commenter has read fewer than 5 million posts' worth of text, though. So perhaps this still points to a lack of diversity in content.
You got me wondering. Supposing the average post is 10 words and a typical page of text is 250 words, 5 million posts would be 200,000 pages, or only ~55 pages of text a day over the last 10 years. Which I don't think I manage, but over 20 years I am probably in that window.
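For anyone who wants to check the back-of-envelope estimate, here it is in a few lines (the 10-words-per-post and 250-words-per-page figures are the assumptions from the comment, not measured values):

```python
# Back-of-envelope: could one person have read 5 million posts' worth of text?
posts = 5_000_000        # posts the commenter would need to have read
words_per_post = 10      # assumed average post length
words_per_page = 250     # assumed typical page of text

total_pages = posts * words_per_post / words_per_page    # 200,000 pages
pages_per_day_10y = total_pages / (10 * 365)             # spread over 10 years
pages_per_day_20y = total_pages / (20 * 365)             # spread over 20 years

print(round(pages_per_day_10y), round(pages_per_day_20y))  # prints "55 27"
```

So roughly 55 pages a day over 10 years, or about 27 a day over 20 years, which matches the "probably in that window" intuition.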
I saw a very similar timely appeal here on Hacker News a few years ago and taught my son with this book at the age of 4. It has become my go-to comparison when prompting chatbots about what I want in teaching material for other subjects. I listened to the entire article posted here, and it makes me wonder: if schools are getting something as foundational as reading wrong, how can we trust their attention to research on anything else they're teaching? Don't get me wrong, I'm not going to pull my kid out of school, but I'll dig a little deeper into how well he's learning.

For math, we've been doing the Beast Academy books. It has gone... okay. I like that they approach problems from many different angles, which mirrors the many different ways math is hidden in our interactions with the world. For my younger son I've recently started Teaching Your Child... because of how well it went for his brother, but for math I may try something else to get a new data point.

Something that occurred to me while listening to the article: I wonder if certain skills are learned much faster with one-on-one instruction, like the book has you do. Our schools pretty much never teach that way, for efficiency reasons, though home schools often do. It may not be true for most subjects, though, or home school students would be so far ahead by college, and that's not the impression I have.