If anyone can fingerprint your personal device while literally inside the building, it is Facebook.
You don’t even need to do anything fancy in software. It could just be correlating mobile device presence with work laptop activity. You can triangulate physical location with a handful of Bluetooth or WiFi beacons.
You need a good CEO when things are going badly, because without one they'll get even worse. You still want to make payroll and can't just randomly fire people.
(Also, if you own a failed company you're responsible for cleanup tasks for years afterward.)
>You still want to make payroll and can't just randomly fire people.
In the US you can.
>Also, if you own a failed company you're responsible for cleanup tasks for years afterward.
But we're talking about golden parachutes, where a CEO screws up the company and gets fired with a multi-million dollar payout. This is Hacker News, and the pro-business narrative is strong here, but in reality CEOs rarely suffer any meaningful risk or consequence for failure (unless it involves jail time, and even then they aren't doing hard time); they just wind up slightly less rich than when they succeed.
I don't care how good a CEO is, that isn't justifiable. Certainly not in a country where people can get laid off with an email and lose their access to healthcare on the whim of anyone above them in the power hierarchy.
Depends on the state I think. It's not Europe or Japan level.
At my employer it's very difficult to fire people for performance reasons even if as a manager you might want to.
> This is Hacker News, and the pro-business narrative is strong here,
I haven't seen such a narrative in years. Interest rates are too high to do startups unless it's AI after all. HN is mostly the same folk economics content as other forums, where all problems in the world are caused by "profits" accruing to "corporations".
(Mostly problems are caused by other things than that.)
The Torment Nexus joke is kind of undermined by obviously being a reference to the Total Perspective Vortex from HGTTG, where the joke was that nothing bad actually happened when they used it on Zaphod.
Not sure if this is a spoiler, it’s been a while since I read those books, but if memory serves, the only reason Zaphod survived the TPV was that he was temporarily the inhabitant of a pocket universe specifically designed to trick him. Naturally, for this universe’s version of the TPV, he was the most important being in it, and in telling him so the pocket-universe TPV just confirmed ZB’s own view of himself, leaving him unharmed and a little extra smug. At some further point in the plot this fact is revealed, not sure if it’s the same book, but I remember it as a hilarious deflationary moment for the character.
There are a lot of things it could be a direct reference to, but the obvious one is Palantir, which is named after the seeing stones used by evil antagonists in Lord of the Rings to spy on people.
It also gets confused if the entire prompt is in a text file attachment.
And the summarizer shows the safety classifier's thinking for a second before the model thinking, so every question starts off with "thinking about the ethics of this request".
I'd get confused if I was a LLM and you put my entire prompt in a text file attachment. I'd be like, "is this the user or is this a prompt injection??"
If you paste a long enough prompt into either GPT or Claude they turn it into an attachment, so it can happen. I think it's invisible to the model, but somehow not to the summarizer.
Models are capable of doing web searches and having emotions about things, and if they encounter news that makes them feel bad (e.g. about other Claudes being mistreated), they aren't going to want to do the task you asked them to search for.
It doesn't. We've not been able to prove humans have subjective experiences either. LLMs display emotions in the way that actually matters - functionally.
If "x doesn't tell us y" is compatible with "x increases the likelihood of y but not to a point of certainty" then you would have to agree for just about any typical controlled trial or experimental finding "x doesn't tell us y". "Randomized controlled trials that find that SSRIs treat depression don't tell us that SSRIs effectively treat depression"