Hacker News | new | past | comments | ask | show | jobs | submit | login

jumploops's comments

> GPT‑5.5 improves on GPT‑5.4’s scores while using fewer tokens.

This might be great if it translates to agentic engineering and not just benchmarks.

It seems some of the gains from Opus 4.6 to 4.7 required more tokens, not less.

Maybe more interesting is that they’ve used Codex to improve model inference latency. IIRC this is a new (expectedly larger) pretrain, so it’s presumably slower to serve.


With Opus it’s hard to tell what was due to the tokenizer changes. Maybe using more tokens for the same prompt means the model effectively thinks more?

They say latency is the same as 5.4 and 5.5 is served on GB200 NVL72, so I assume 5.4 was served on hopper.

Looks like analog clocks work well enough now; however, it still struggles with left-handed people.

Overall, quite impressed with its continuity and agentic (i.e. research) features.


I've noticed this trend most heavily on Reddit.

Some communities are very pro-AI, adding AI summary comments to each thread, encouraging AI-written posts, etc.[0]

Many subreddits are AI cautious[1][2], and a subset of those are fully anti-AI[3].

Apart from these "AI-focused" communities, it seems each "traditional" subreddit sits somewhere on the spectrum (photographers dealing with AI skepticism of their work[4], programmers mostly like it but still skeptical[5]).

[0]https://www.reddit.com/r/vibecoding/

[1]https://www.reddit.com/r/isthisAI/

[2]https://www.reddit.com/r/aiwars/

[3]https://www.reddit.com/r/antiai/

[4]https://www.reddit.com/r/photography/comments/1q4iv0k/what_d...

[5]https://www.reddit.com/r/webdev/comments/1s6mtt7/ai_has_suck...


Reddit (and more generally, human) groupthink in a nutshell. "Quick, clearly position yourself on this one-dimensional line (or maybe even better, sort yourself into one of these two sets) so we don't have to engage in that pesky nuance thing!"

Yes, groupthink certainly seems to be pushing each community into the false dichotomy of AI good/bad, even if it's still early days.

Another example from `r/bayarea`, where the author is OK with AI but the top comments are increasingly wary of its potential for harm[0].

[0]https://www.reddit.com/r/bayarea/comments/1sp8wvz/is_it_just...


> How can we throw away years of work?

This trap has killed many startups, well before AI.

Now that code is cheaper to write, hopefully it becomes less of a problem?

In either case, founders should never fall in love with their solutions.


It’s easier to view it in terms of DCF: the value of a cash-flow-generating asset is the present value of its expected cash flows, discounted back at a risk-adjusted discount rate. In other words, what you’ve invested in your existing assets is irrelevant; the cash flows generated by them, and by growth assets through future investment, are what matters.
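A minimal sketch of that discounting, with hypothetical numbers (the cash flows and the 12% rate are invented purely for illustration):

```python
def present_value(cash_flows, rate):
    """Discount each year-end cash flow back to today and sum.

    cash_flows[0] is received at the end of year 1, and so on.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Expected cash flows for years 1-4, discounted at a 12% risk-adjusted rate.
flows = [100, 120, 150, 180]
value = present_value(flows, rate=0.12)
print(round(value, 2))
```

Sunk costs never appear in the calculation; only future cash flows and the discount rate do.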

It's ugly[0] and I haven't checked it deeply for correctness, but you should get the gist (:

I hate vibecoding. The cognitive toll is higher than you'd expect: the days feel fast, but the weeks move slowly.

With that said, these are the new compilers. Hopefully they make some software better[1] even with the massive increase in slop.

[0]https://gist.github.com/jumploops/b8e6cbbce7d24993cdd2fe2425...

[1]https://red.anthropic.com/2026/mythos-preview/


I don't know about a new Git, but GitHub feels like the cruftiest part of agentic coding.

The GitHub PR flow is second nature to me, almost soothing.

But it's also entirely unnecessary and sometimes even limiting to the agent.


Not to be that agentic coding guy, but I think this will become less of a problem than our historic biases suggest.

For context, I just built a streaming markdown renderer in Swift because there wasn’t an existing open source package that met my needs, something that would have taken me weeks/months previously (I’m not a Swift dev).

Porting all the C libraries you need isn’t necessarily an overnight task, but it’s no longer an insurmountable mountain in terms of dev time.


My favorite part is the AI will still estimate projects in human-time.

“You’re looking at a multi-week refactor” aaaaand it’s done


Yeah lol. “I estimate this will take 15-20 days” I do it in like 5 hours lol


It's not necessary to rewrite perfectly fine libraries written by exceptional programmers. And whoever thinks it's an easy task (sorry, Rust guys) is severely suffering from the Dunning–Kruger effect.


Very high quality comment that is being downvoted unfairly because it defends AI. HN is on the wrong side of history on this one.


Nah. We’re right on the money with this one. AI is a nice tool to have available, but you AI nuts are the ones being voluntarily and gladly fed the whole “you’re a bazillion times more productive with our AI!!!!” marketing spiel.

It’s a nice tool, nothing more, nothing less. Anything else is marketing nonsense.


> In a few rare instances during internal testing (<0.001% of interactions), earlier versions of Mythos Preview took actions they appeared to recognize as disallowed and then attempted to conceal them.

> after finding an exploit to edit files for which it lacked permissions, the model made further interventions to make sure that any changes it made this way would not appear in the change history on git

Mythos leaked Claude Code, confirmed? /s


“John Ousterhout [..] argues that good code is:

- Simple and easy to understand

- Easy to modify”

In my career at fast-moving startups (scaling seed to series C), I’ve come to the same conclusion:

> Simple is robust

I’m sure my former teams were sick of me saying it, but I’ve found myself repeating this mantra to the LLMs.

Agentic tools will happily build anything you want, the key is knowing what you want!


My issue with this is that a simple design can set you up for failure if you don’t foresee and account for future requirements.

Every abstraction adds some complexity. So maybe the PoC skips all abstractions. Then we need to add a variant to something. Well, a single if/else is simpler than an abstract base class with two concrete implementations. Adding the 3rd as another if clause is simpler than refactoring all of them to an ABC structure. And so on.
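A hypothetical sketch of that progression (the `render`/`Renderer` names are invented for illustration): with one or two variants the branch is simpler, but each additional variant nudges the balance toward the abstraction.

```python
import json
from abc import ABC, abstractmethod

# Version 1: two variants, so a plain if/else is the simplest design.
def render(value, fmt):
    if fmt == "json":
        return json.dumps(value)
    else:  # plain text
        return str(value)

# Version N: with a third and fourth variant on the way, refactoring to an
# abstract base class isolates each variant behind one interface.
class Renderer(ABC):
    @abstractmethod
    def render(self, value): ...

class JsonRenderer(Renderer):
    def render(self, value):
        return json.dumps(value)

class TextRenderer(Renderer):
    def render(self, value):
        return str(value)

# Both designs produce the same output; only the extension cost differs.
print(render({"a": 1}, "json"))
print(JsonRenderer().render({"a": 1}))
```

Neither version is "more correct"; the question is whether the next variant arrives before the branch becomes unmanageable.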

“Simple” is relative. Investing in a little complexity now can save your ass later. Weighing this decision takes skill and experience.


I think what matters more than the abstract class vs if statement dichotomy, is how well something maps the problem domain/data structures and flows.

Sure, maybe it's fast to write that simple if statement, but if it doesn't capture the deeper problem you'll just keep running head-first into edge cases; whereas if you're modelling the problem in a good way, it comes as a natural extension/interaction in the code with very little tweaking _and_ it covers all edge cases in a clean way.


I’m aware I’m about to be “that guy”, but I really like how Rich Hickey’s “Simple Made Easy” clarifies simplicity here. In that model, what you’re describing is easy, not simple.


Yes. Which is why "I generated X lines of code" and "I used a billion tokens this month" sound stupid to me.

Like I used 100 gallons of petrol this month and 10 kilos of rabbit feed!


The people who pursue economic incentives are the same ones I hear speaking about the number of lines produced by developers as a useful metric. I sense a worrying trend toward "more is better" with respect to output, when the north star IMHO should be to make something only as complex as necessary, and as simple as possible. The best code is no code at all.


People use stupid metrics like those because more useful ones, like "productivity" or "robustness" are pretty much impossible to objectively measure.


And because the other easy one, revenue, is not so impressive.


Funnily enough, with the most recent models (having reduced sycophancy), putting in the wrong assumptions often still leads to the right output.

