Hacker News | irthomasthomas's comments

Can you actually get caffeine-free coffee? I thought most decaf brands were only about 50% less.

You're right that the caffeine isn't entirely removed, but it's supposedly more than 90% removed.

(I'm even seeing the number 97% mentioned a lot online.)


The bean is only about 2% caffeine to begin with, so cheap decaf can absolutely end up "half caf" even if the label says decaf.

That fraction is going to depend a lot on the definition and the reference. I believe the 97% is the US standard for how much of the natural caffeine in green beans must be removed. You will note how this can be manipulated by using a more caffeine-abundant variety. EU standards are more sensible, stated in terms of caffeine content in the final product.

Either way, commercial decaf processes and normal brewing methods will yield something like 5-10mg of caffeine in a "decaf" dose of coffee, which is an order of magnitude less than usual.
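The numbers in this thread are easy to sanity-check. A quick sketch, assuming (this figure is not from the thread) that a regular cup of drip coffee carries roughly 95 mg of caffeine:

```python
# Back-of-the-envelope check of the figures discussed above.
# Assumption (not from the thread): a regular cup of drip coffee
# has roughly 95 mg of caffeine.
regular_cup_mg = 95
removal_fraction = 0.97  # the US-standard removal figure mentioned above

decaf_cup_mg = regular_cup_mg * (1 - removal_fraction)
print(f"estimated decaf dose: {decaf_cup_mg:.1f} mg")

# The thread's 5-10 mg figure is indeed about an order of magnitude
# below a regular cup:
print(f"ratio vs. a mid-range 7.5 mg decaf dose: {regular_cup_mg / 7.5:.1f}x")
```

Both results land in the same ballpark as the 5-10 mg and "order of magnitude less" claims above.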


This was something I worried about after OpenAI started building apps as well as models. Now all of the labs make no secret of the fact that they are going after the whole software industry. It's going to be hard to maintain functioning, fair markets unless governments step in.

Try llm-consortium with --judging-method rank

It beats opus-4.7, but it looks like open models actually have the lead here.

> Think Step By Step

What is this, 2023?

I feel like this was generated by a model tapping into 2023 notions of prompt engineering.


I like chutes because they always use the full weights, and prompts are protected inside a TEE (trusted execution environment).

10T? Impossible! They told us the training run was under 10^26 flops.

Beats opus 4.6! They missed claiming the frontier by a few days.

While I'm skeptical of any "beats Opus" claims (many have been made, none turned out to be true), I still think it's insane that we can now run close-to-SotA models locally on ~$100k worth of hardware, for a small team, and be 100% sure that the data stays local. Should be a no-brainer for teams that work in areas where privacy matters.

Even the smaller quantized models which can run on consumer hardware pack in an almost unfathomable amount of knowledge. I don't think I expected to be able to run a 'local Google' in my lifetime before the LLM boom.

I'm extremely curious how these models learn to pack a lossily-compressed representation of the entire Internet (more or less) into a few hundred billion parameters. like, what's the ontology?

I think this one only needs about 600GB of VRAM, so it could fit on two Mac Studios with 512GB each. That would have cost (albeit no longer available) something like $20k or less.

Yeah, but that's personal use at best; not much agentic anything is happening on that hardware. Macs are great for small models at small-to-medium context lengths, but at >64k (very common with agentic usage) they struggle and slow down a lot.

The ~$100k hardware is suitable for multi-user, small-team usage. That's what you'd use for actual work in reasonable timeframes. For personal use, sure, Macs could work.


True, but I think for local models, we are mostly considering personal usage.

You could run it with SSD offload; earlier experiments with Kimi 2.5 on M5 hardware had it running at 2 tok/s. K2.6 has a similar number of total and active parameters.

Yeah... I would definitely call 2 tok/s unusable. For simple chats, I'd want at least 15 tok/s. For agentic coding (which this model is advertised for), I'd want good prefill performance as well.
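To make the usability gap concrete, here is the wall-clock time to stream a reply at the rates mentioned in this thread (the 500-token reply length is an assumed, typical-ish size, not from the thread):

```python
# Why 2 tok/s feels unusable: time to stream a reply of a given
# length. 500 tokens is an assumed, typical-ish reply size.
response_tokens = 500

for tok_per_s in (2, 15, 50):
    seconds = response_tokens / tok_per_s
    print(f"{tok_per_s:>3} tok/s -> {seconds:6.1f} s ({seconds / 60:.1f} min)")
# At 2 tok/s a 500-token reply takes over four minutes, and that is
# before counting prefill time on a long agentic context.
```

And prefill is the bigger problem for agentic use: a 64k-token context has to be processed before the first output token even appears.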

That's just throwing money away. The performance at large context would have been unusable, especially if you need to serve more than a single person.

Opus 4.7 is clearly a sidegrade meant to help Anthropic manage cost, so I'd say they may well have the frontier if it actually beats 4.6.

Could be right. I just noticed my feed is missing the usual flood of posts demoing the new hotness on 3D modeling, game design, and SVG drawings of animals on vehicles.

It doesn't beat Opus 4.6, no way, don't be fooled by benchmarks.

It is pretty obvious from the token speed that today's Opus is the size Sonnet or Haiku was a few versions ago. So Mythos is likely what would have been called Opus. They don't tell us the size, but they did confirm the training run for Mythos was under the 10^26 FLOPS reporting requirement.

In an alternate universe, Opus 4.7 is Sonnet 5, and Mythos is released as Opus. Can you imagine how much praise would be heaped on Anthropic if Opus 4.7 were less than half the price it is now?


The link you are commenting on shows data from actual prompts from real users, and the COST of the average prompt increased 37%. I do not think synthetic benchmarks are a rebuttal to real usage data.

The cost of the input tokens, not the reasoning or output.

Agree though that benchmarks aren't very helpful w.r.t. estimating real world performance or costs.

What we'd need are people giving the same real world tasks to 4.6 and 4.7 and measuring time, quality and costs.


Thanks, that wasn't clear, because it mentioned conversations, but it is only measuring the input tokens. So it's just measuring the difference in the tokenizer.
