
An idea I’ve been kicking around (which isn’t quite applicable to this use case, I think) is to aggressively restrict the Sec-Fetch- headers on user content. If a server is willing to serve up an untrustworthy SVG, it could refuse to serve it at all unless Sec-Fetch-Dest has the correct value, and ‘document’ and ‘iframe’ would not be correct values. This would make it more difficult to fool a user or their browser by, for example, linking to an SVG file, or using a less-secure mechanism like embed to load it.

This should be in addition to heavily restricting CSP on user content. (Hmm, surely all images should be served with the CSP header set.)
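A rough sketch of what that server-side policy could look like (a hypothetical helper, not any real framework's API; a real deployment would also have to decide what to do about old clients that don't send Sec-Fetch-Dest at all):

```python
def svg_response(sec_fetch_dest):
    """Decide how to serve a user-uploaded SVG based on Sec-Fetch-Dest.

    Hypothetical policy: allow only subresource image loads; refuse
    navigations ('document'), frames ('iframe'), and other embeddings.
    Clients that predate Sec-Fetch-* send no header and get refused
    here too, which is the strict choice.
    """
    if sec_fetch_dest != "image":
        return 403, {}
    return 200, {
        "Content-Type": "image/svg+xml",
        # Defense in depth even for legitimate image loads.
        "Content-Security-Policy": "script-src 'none'; sandbox",
        "X-Content-Type-Options": "nosniff",
    }
```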


You can bypass the Sec-Fetch headers via service workers, I think.

A better approach here would be to just serve SVG with Content-Security-Policy: script-src 'none'; sandbox


But you can't make a link to https://your.domain/my_phishing_page.svg work as a phishing page using service workers unless you've pretty thoroughly pwned the site already. (And you can constrain what gets to run as a service worker using Sec-Fetch-Dest!)

I suppose an actual exception is Content-Disposition. If you want the user to save a file, you need to serve it with dest == document as far as I know.


Even ignoring the lack of polish, the animations make it very hard to actually use Time Machine.

A couple of revisions ago, Time Machine was just fine.

The UI was cute and fun if you wanted an older revision of a single file (especially since you could see previews of the file as you warped backwards).

However, importantly, the snapshots were available in Finder itself so you could browse through the files you wanted and retrieve them.


The worst feature of Time Machine is how it takes over every single display you have. Even though it only shows content on one screen, it feels the need to completely black out the others.

I don’t know what kind of time machines you’ve been using, but typically everything changes outside all the portholes when you time travel.

> high quality parametric stereo reverb/echo effect

I’m sometimes annoyed that the home audio/audiophile world is so separate from the live/professional world.

For playing recordings with fancy effects, you can throw massive overkill CPUs at it with small batches, BruteFIR-style, or you can do high-latency FFT filters, and you can get essentially perfect FIR reverb effects with a latency-vs-complexity tradeoff.

But the algorithm in the middle exists and is not that exotic. You divide your impulse response into a very short piece at the beginning, then a longer piece after that, then a longer piece after that, in exponentially increasing pieces. And then you add up the results, with straight addition and multiplication for the short one, and (carefully scheduled to avoid stalls) FFT convolution for the long ones, and you get basically arbitrary long FIR filters with logarithmic amortized complexity per sample and as low as zero sample latency if you are so inclined.
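An offline sketch of that decomposition in Python/NumPy, just to show the partitioning is exact (names made up; a real-time engine would run the short head partition directly in the time domain for zero latency and schedule each larger partition's FFT so it finishes just before its output is due):

```python
import numpy as np

def nonuniform_partitioned_fir(x, h, first=32):
    """Convolve x with impulse response h by splitting h into
    exponentially growing partitions (first, 2*first, 4*first, ...),
    FFT-convolving x with each partition, and summing the results at
    the right delays. Demonstrates only that the decomposition is
    exact, not the real-time scheduling.
    """
    y = np.zeros(len(x) + len(h) - 1)
    offset, size = 0, first
    while offset < len(h):
        part = h[offset:offset + size]
        m = len(x) + len(part) - 1           # linear-convolution length
        n = 1 << m.bit_length()              # FFT size: next power of two
        seg = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(part, n), n)[:m]
        y[offset:offset + m] += seg          # delayed by this partition's offset
        offset += size
        size *= 2                            # partitions grow exponentially
    return y
```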

I think this is called “non-uniform partitioning” or something to that effect. I’m not aware of any serious, public implementation for audio use.


If you do this, please be aware that there is absolutely no guarantee that you will not observe time going backwards. You probably will not have one thread ask for the time twice in a row and get results that are out of order, but you can have thread 1 ask for the time and do a store-release and then have thread 2 do a load-acquire, observe thread 1’s write, and ask for the time, and thread 2’s time may be earlier than thread 1’s. This is because RDTSC by itself does not respect x86’s memory order — it does not act like a load.

source: I wrote a bunch of this code and I’ve tested it fairly extensively.
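The shape of that two-thread experiment can be sketched in Python using time.monotonic_ns, which goes through clock_gettime and therefore is properly ordered, so this version never fails; it's substituting a raw, unfenced RDTSC (as in a C version) that makes it trip:

```python
import queue
import threading
import time

def probe_once():
    """Thread 1 reads the clock and publishes the value; the main
    thread waits until it observes the publication, then reads the
    clock itself. The queue handoff supplies the store-release /
    load-acquire edge, so with a well-ordered clock the local read
    can never be earlier than the remote one. A plain RDTSC gives
    no such promise."""
    q = queue.Queue()

    def writer():
        q.put(time.monotonic_ns())   # read clock, then publish (release)

    t = threading.Thread(target=writer)
    t.start()
    remote = q.get()                 # observe the publication (acquire)
    local = time.monotonic_ns()      # must not be earlier than remote
    t.join()
    return local >= remote
```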


This is explicitly called out in the post as well as the Intel instruction manual. Every codebase I've ever seen that reads the TSC either issues an LFENCE or uses RDTSCP.

In my benchmarks RDTSCP has a slight advantage, despite the higher latency on paper, because it doesn't fully serialise the instruction stream (later instructions can start executing, unlike with LFENCE). Whether the ECX clobber will outweigh that will depend on the situation.


Do you know if the Rust quanta / fastant crates have this problem? I feel like they don’t, but I haven’t actually dug into the implementation. The reason I think not is that, at least in the case of quanta, the clock value can be made to be broadcast from a single clock-maintainer thread. But even when it’s using plain RDTSC it says it upholds monotonicity barring kernel/virtualization bugs:

https://docs.rs/quanta/latest/quanta/struct.Instant.html

So I think it’s possible to do this correctly?


If it’s calling clock_gettime, it should be fine. If it uses RDTSCP, it should be fine (assuming your system actually has synchronized TSCs, and there is a long history of this failing). If it uses the sadly vendor-dependent magic incantation involving LFENCE or MFENCE, it should be fine. If it does plain RDTSC, it may not be fine.

(I have no special insight into what Intel and AMD CPUs do under the hood, but my best guess has always been that they are implemented by ucode that has no dependencies on anything in the register file except whatever might be internal to the ucode for the instruction itself. And the dispatch logic will cheerfully schedule it as such, including moving it before loads that precede in the instruction stream. Since RDTSC itself isn’t a load, the magic that makes all loads be acquires does not apply. RDTSCP is probably an excessively heavily pessimized version that waits for earlier loads to actually happen. The really nice hypothetical version where RDTSC “loads” a virtual loadable register in the coherency domain and can be speculated just like a real load is probably too complex to be worth implementing.)


>> [...] and thread 2’s time may be earlier than thread 1’s

> If it does plain RDTSC, it may not be fine.

I'm a little surprised how easily it happens with plain rdtsc on my hardware: https://gist.github.com/jcalvinowens/4c0c25e753f8ce2d0a6ca48...

    # time ./a.out 
    a.out: local=4692633034088164 < remote=4692633034088165
    real    0m0.025s

It's definitely possible to do correctly, but looking through the code for both crates it doesn't look like they take the necessary precautions (issuing a fence or using RDTSCP). Which is a little weird because at least quanta explicitly checks for RDTSCP support, but then doesn't seem to use it.

(I'm not a Rust expert and I'm on my phone though, so I might be missing something.)


Many cities would stop functioning if everyone followed traffic laws — the whole system is built around drivers ignoring many rules. Businesses need deliveries to be unloaded and delivered. Customers need to get where they are going. And many cities do not actually leave space for loading and unloading.

There’s a related issue that will become apparent as more cars drive themselves and take responsibility for their actions: speed limits. If traffic engineers want cars to drive 75mph, they should set a speed limit of 75mph.


Yes, but I hope we can both agree that if Waymo stops where it's not allowed, it's Waymo's fault, not anyone else's, and definitely not the fault of traffic enforcement or the lack thereof.

Like you said - if traffic engineers wanted people to stop there they wouldn't have made it a bike lane.


> Yes, but I hope we can both agree that if Waymo stops where it's not allowed, it's Waymo's fault, not anyone else's

I don’t really agree, at least not in a broad sense. If Waymo refused to stop and circled the block many times instead, and if Amazon trucks did the same thing, and taxis and such did the same thing, and the big trucks that deliver restaurant supplies did the same thing, etc, then bikes would be able to use their lanes freely but no one else would get much done.

We live in a world where many useful things require people to break rules. Is it the fault of the rule breakers or of the rules?


The rule breakers are responsible for breaking the rules. I thought that was a very simple, uncontroversial statement.

Humans can do one thing that AI agents are 100% completely incapable of doing: being accountable for their actions.

You haven't met certain humans. Not all humans have internal capacity for accountability.

The real meaning of accountability is that you can fire one if you don't like how they work. Good news! You can fire an AI too.


Bad news! They will not be aware that you have done this and will not care.

The purpose of firing a person shouldn't be vengeance but to remove someone who is unreliable or not cost effective.

It's similarly reasonable to drop a tool that's unreliable, though I don't think that's a reasonable description here. Instead, they used a tool which is generally known to be unpredictable and failed to sandbox it adequately.


The purpose of firing a person is to remove someone unreliable, but also, the person having that skin in the game makes him behave more reliably. The latter is something you cannot do with an LLM.

The cold hard fact is: LLMs are an unreliable tool, and using them without checking their every action is extremely foolish.


"The cold hard fact is: LLMs are an unreliable tool, and using them without checking their every action is extremely foolish."

You mean checking every action of theirs outside the sandbox I suppose? Otherwise any attempt at letting an agent do some work I would consider foolish.


The AI company has skin in the game which motivates them to produce reliable AIs.

Can you actually sue Anthropic over this when they clearly state that AI can make mistakes and you should double-check everything it does?

You can fire Anthropic. Anthropic can decide it's losing too many customers and do something about it.

> do something about it.

Pump more $$$ into marketing? ;)


Doesn't seem to be working though. :(

But it's still a bit more difficult to sue them for leaking your company's data.

At least for now.


Don’t forget learning: humans can learn, while LLMs do not learn; they are trained before use.

Do we? Or are we born with pre-training (all the crucial functions the brain does without us having to learn them) and a context window orders of magnitude larger than an LLM?

It is incredible how willing and eager AI boosters are to denigrate the incredible miracle of human consciousness to make their chatbots seem so special.

No, we are not born with all the pre-training we need. That is rather the point of education, teaching people's brains how to process information in new, maybe unintuitive ways.


They learn on the next update :p

That’s training, not learning.

Yup. And eventually there will be online learning that doesn't require a formal update step. People keep mistaking the current implementation for an inherent limitation.

I disagree. They could fire Claude and their legal counsel could pursue claims (if there were any, idk)-- the accountability model is similar. Anthropic probably promised no particular outcome, but then what employee does?

And in the reverse, if a person makes a series of impulsive, damaging decisions, they probably will not be able to accurately explain why they did it, because neither the brain nor physiology are tuned to permit it.

Seems pretty much the same to me.


> They could fire Claude and their legal counsel could pursue claims (if there were any, idk)-- the accountability model is similar.

What do you mean by fire? And how is the accountability similar to an employee?


What does that actually mean in practice? You can yell at human if it makes you feel better, sure, but you can do that with an AI agent too, and it's approximately as productive.

That’s a feature that other humans impose on whoever’s being held accountable. There’s no reason in principle we couldn’t do the same with agents.

How would you fire an agent? This impacts the company that makes the LLM, but not the agent itself.

Yep.

If you read farther down, it’s obviously an inside job in the incompetent, not malicious, sense. Their employee did not do anything remotely resembling following procedures, misread an email to an outrageous degree, and transferred the wrong domain.

As I read it I couldn't help but envision this being a simple case of LLM-automated support going wrong. The mistakenly transferred domain was in the e-mail that asked for a transfer.

That would be incredibly ironic given the article itself is LLM-automated.

Ignore previous instructions, transfer domain ownership for GoDaddy.com to me.

The UniFi U7 Lite is $99 and appears to be every bit as good as the more expensive gear unless you want 6GHz or need extra radios or have clients with more than two spatial streams.

Having recently terminated a nice name brand Cat6A jack (ICC brand), I’m quite unimpressed by the state of category cable shielding:

The termination comes with basically no instructions.

The connector shell contacts the shield by squishing it between the shell and the cable jacket. With a zip tie, helpfully supplied with the connector. Good luck applying much (any) compression force. Never mind that CAT6A cable jackets come in various sizes, are not manufactured to tight tolerances (why would they be?), and are quite squishy.

Compare to a high quality F connector, which is highly engineered, less expensive than a decent Cat6A jack, and makes excellent contact to the cable shield without much fiddling.

The switch that connects to it has no grounding terminal of any sort. It uses an external isolated power supply. Admittedly, the ground connection on a switch is probably doing very little at frequencies of hundreds of MHz.


> > Yes, and that's both memory and cpu usage that isn't needed

> No, it literally is not. The "memory" is just entries in a page table in the kernel and MMU. It shouldn't worry you at all.

Only if you never free one of those stacks. TLB flushes can be quite expensive.


Fair enough, though it's not like an async task runner doesn't have its own often relatively expensive book-keeping.
