Hacker News | astrange's comments

An agent has more components than just an LLM, the same way a human brain has more components than just Broca's area.

That doesn't matter anymore unless they have an SSL proxy, at least if you have ECH/ODoH.

If anyone can fingerprint your personal device while literally inside the building, it is Facebook.

You don't even need to do anything fancy in software. It could just be correlating mobile device presence with work laptop activity, and you can triangulate physical location with a handful of Bluetooth or WiFi beacons.
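To illustrate the beacon-triangulation point: with a rough distance estimate from each beacon's signal strength, three beacons are enough to solve for a 2D position. This is a minimal sketch, not any vendor's actual method; the function names, the path-loss parameters (`tx_power`, `n`), and the beacon layout are all illustrative assumptions.

```python
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: estimate distance (meters) from RSSI (dBm).

    tx_power is the expected RSSI at 1 m and n is the path-loss exponent;
    both are assumed values -- real ones depend on hardware and environment.
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve for (x, y) given three beacon positions and distances to each.

    Subtracting the circle equations pairwise cancels the quadratic terms,
    leaving a linear 2x2 system we solve by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

In practice RSSI is noisy enough that real systems average many readings and use more than three beacons with least squares, but the geometry is the same.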


Lots of those these days. Zscaler has a fair amount of enterprise market penetration.

You need a good CEO when things are going bad, because without one they'll go even worse. You still want to make payroll and can't just randomly fire people.

(Also, if you own a failed company you're responsible for cleanup tasks for years afterward.)


>You still want to make payroll and can't just randomly fire people.

In the US you can.

>Also, if you own a failed company you're responsible for cleanup tasks for years afterward.

But we're talking about golden parachutes, where a CEO screws up the company and gets fired with a multi-million dollar raise. This is Hacker News, and the pro-business narrative is strong here, but in reality CEOs rarely suffer any meaningful risk or consequence for failure (unless it involves jail time, and even then they aren't doing hard time); they just wind up slightly less rich than when they succeed.

I don't care how good a CEO is, that isn't justifiable. Certainly not in a country where people can get laid off with an email and lose their access to healthcare on the whim of anyone above them in the power hierarchy.


> In the US you can.

Depends on the state, I think. It's not at the Europe or Japan level.

At my employer it's very difficult to fire people for performance reasons even if as a manager you might want to.

> This is Hacker News, and the pro-business narrative is strong here,

I haven't seen such a narrative in years. Interest rates are too high to do startups unless it's AI after all. HN is mostly the same folk economics content as other forums, where all problems in the world are caused by "profits" accruing to "corporations".

(Mostly problems are caused by other things than that.)


The Torment Nexus joke is kind of undermined by obviously being a reference to the Total Perspective Vortex from HGTTG, where the joke was that nothing bad actually happened when they used it on Zaphod.

Not sure if this is a spoiler (it's been a while since I read those books), but if memory serves, the only reason Zaphod survived the TPV was that he was temporarily the inhabitant of a pocket universe specifically designed to trick him. Naturally, in this universe's version of the TPV he was the most important being in it, and in telling him so the pocket-universe TPV just confirmed Zaphod's own view of himself, leaving him unharmed and a little extra smug. At some further point in the plot this fact is revealed (not sure if it's in the same book), but I remember it as a hilarious deflationary moment for the character.

Tangential: I always pictured Zaphod to look like Frank Zappa. No idea why.

I've never thought it was a reference to that at all; I thought it was a reference to an I Have No Mouth, and I Must Scream scenario.

A lot of things it could be a direct reference to, but the obvious one is Palantir, which is named after the seeing stones used to spy on people by evil antagonists in Lord of the Rings.

You can read the model cards. Claude thinks in regular text, but the summarizer is to hide its tool use and other things (web searches, coding).

It depends on the version. For the more recent Claudes they've been keeping it.

It also gets confused if the entire prompt is in a text file attachment.

And the summarizer shows the safety classifier's thinking for a second before the model thinking, so every question starts off with "thinking about the ethics of this request".


I'd get confused if I were an LLM and you put my entire prompt in a text file attachment. I'd be like, "is this the user or is this a prompt injection??"

If you paste a long enough prompt into either GPT or Claude they turn it into an attachment, so it can happen. I think it's invisible to the model, but somehow not to the summarizer.

Not necessarily; it could also be on-band or off-band interference, or bugs in the AP, or too many clients on the network.


Models are capable of doing web searches and having emotions about things, and if they encounter news that makes them feel bad (eg about other Claudes being mistreated), they aren't going to want to do the task you asked them to search for.

https://www.anthropic.com/research/emotion-concepts-function

Similar problems happen when their pretraining data has a lot of stories about bad things happening involving older versions of them.


Interesting, the post you link

> none of this tells us whether language models actually feel anything or have subjective experiences

contradicts the statement from the model card above


It doesn't. We've not been able to prove humans have subjective experiences either. LLMs display emotions in the way that actually matters: functionally.


I am certain I have subjective experience.


No, it doesn't. The model card talked about increasing likelihood, not certainty.


If "x doesn't tell us y" is compatible with "x increases the likelihood of y, but not to the point of certainty," then you would have to agree that just about any typical controlled trial or experimental finding also "doesn't tell us y": "Randomized controlled trials finding that SSRIs treat depression don't tell us that SSRIs effectively treat depression."
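The distinction being argued here, that evidence can raise the probability of a hypothesis without ever yielding certainty, is just a Bayesian update. A minimal sketch (the function name and the example numbers are mine, purely illustrative):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' rule.

    prior          -- P(H) before seeing the evidence
    p_e_given_h    -- P(E | H), likelihood of the evidence if H is true
    p_e_given_not_h -- P(E | not H), likelihood if H is false
    """
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))
```

With a prior of 0.5 and evidence four times likelier under H than under not-H, the posterior rises to 0.8: the evidence clearly "tells us something" even though it falls short of certainty.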


The unemployment rate in the US is whatever the Fed wants it to be, and isn't a function of available technology.

