SpicyLemonZest's comments

If you have the capability and discipline to maintain a T-bill ladder, you've gotta understand that you're not the person general financial advice is targeted at. To be deliberately vague: I've lost nontrivial amounts of money simply because I got distracted while doing a critical financial task and only remembered to get back to it months later. I think I can safely speculate that the story in the source article would never happen to you, because you could easily locate the right account numbers if you found yourself locked out of a financial webapp.

I appreciate the vote of confidence but to be clear I just set it up on auto-reinvest for 24 months. There’s an initial setup every two years but the rest is mindless.

Maybe you’re right though. I maintain a non-trivial amount of data in my password manager to ensure I always have a centralized place to begin the hunt for information.
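For anyone who hasn't set one up: the mechanics are simple enough to sketch. Here's a minimal example in Python, with all amounts, terms, and rung counts hypothetical rather than a recommendation:

    # Sketch of a T-bill ladder: split a lump sum across staggered
    # maturities so one rung matures every interval, then let
    # auto-reinvest roll each maturing rung into a fresh bill.
    # All figures below are hypothetical.

    LUMP_SUM = 24_000   # dollars to ladder
    RUNGS = 4           # concurrent bills
    TERM_WEEKS = 26     # each rung is a 26-week bill

    per_rung = LUMP_SUM / RUNGS
    stagger = TERM_WEEKS // RUNGS   # weeks between purchases

    # Initial setup: one purchase every `stagger` weeks.
    for i in range(RUNGS):
        print(f"week {i * stagger:2d}: buy ${per_rung:,.0f} of {TERM_WEEKS}-week bills")

    # From week TERM_WEEKS onward, a rung matures every `stagger` weeks;
    # with auto-reinvest on, the broker rolls each one over untouched.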


It's true that Trump was generally a reasonable person in 2011, albeit brash and certainly someone I had many disagreements with. Here's a post he made about Iran a few days ago, reproduced in full to ensure I'm not cherrypicking sentences:

> For those people, fewer in number now than ever before, that are reading The Failing New York Times, or watching Fake News CNN, that think that I am “anxious” to end the War (if you would even call it that!) with Iran, please be advised that I am possibly the least pressured person ever to be in this position. I have all the time in the World, but Iran doesn’t — The clock is ticking! The reason some of the Media is doing so poorly with Subscribers and Viewers is because they no longer have credibility. Iran’s Navy is lying at the bottom of the Sea, their Air Force is demolished, their Anti Aircraft and Radar Weaponry is gone, their leaders are no longer with us, the Blockade is airtight and strong and, from there, it only gets worse — Time is not on their side! A Deal will only be made when it’s appropriate and good for the United States of America, our Allies and, in fact, the rest of the World. President DONALD J. TRUMP

To me this seems pretty stupid. If someone wrote this message in a social media argument I would block them.


I reject the premise. The President is not a king, he isn't presumptively allowed to fire anyone he'd like. The statute establishing the National Science Board (https://www.law.cornell.edu/uscode/text/42/1863) does not give him any such power, so he doesn't have it.

NSB members are executive officers, the statute is silent on removal, and Article II makes presidential removal power the default. Silence means he can fire them.

Article II says no such thing. Humphrey's Executor established a useful compromise between "the Constitution is silent on removal" and "come on, is it really impossible to fire a postmaster?", but Trump has chosen to defect from that compromise so I no longer feel bound to accept it. Until he reinstates all independent agency heads he's purported to fire, I don't accept any removals he performs without explicit authority as legitimate.

If a court overturns or reinterprets that, then it is the law. America is a common law country, not a civil law country. The process of litigation and court precedent is how laws work in a common law country, so I don't see how your framing of the situation is really all that valid.

Common law countries also benefit from common interpretations of laws, which leads to stability both for the citizens who live in that country and for the businesses that operate there. “Calvinball” is technically possible in all common law countries, as long as you move fast enough that the courts can't catch up. So while you are technically correct, I do not want to live in the version of America where your technical correctness is tested to the limit of its bend-but-not-break strength.

"Freedumb" is not as benign as you think it is either. If you're aspiring to meaningful commentary rather than social media upvotes, I would advise avoiding both.

In my opinion the people who are deliberately destroying science and education should be prosecuted for literal textbook treason, not merely mocked.

These are the "fuck your feelings" people whose feelings you're worrying about.


It's the feelings of uninformed people who don't yet know what's going on that I'm worrying about. Saying things like "libruls" and "freedumb" makes it harder to build the coalition which we'll need to prosecute the perpetrators.

Eh, they’re probably too busy molesting their younger siblings to take notice or offense.

> who are deliberately destroying science...

But this is subjective. What you call "science" might be pseudoscience to someone else. As an example, a decade or so back, following and trusting peer-reviewed research was "scientific", but even back then I thought it was a stupid, unscientific thing to do. Today the problems with the peer review process are pretty widely acknowledged, but back then I would have been considered unscientific for not fully trusting peer-reviewed research. People also used to say things like "Science is settled" and "Trust the experts", which is the most unscientific thing that one could possibly say.

So since there are a lot of unscientific things being called "science" these days, I think this is very subjective.


We're just telling you what the President says! It's his explicit position, which he repeats constantly, that everyone in the government should be personally loyal to him and never do anything he doesn't want them to do. If you'd prefer for his side not to be full of toadies, you'll have to take it up with him; he's making a conscious decision to do things that way.


> Executive branch shouldn't be beholden to the Executive?

No. It should be beholden to the law. And sometimes the law creates independent agencies because that’s the only way to administer a complex, free society.


It's really a quite boring take. The executive branch is not the President's personal property; he's a temporary custodian of it on behalf of the American people, and he has a duty to faithfully execute the laws Congress passed. He has no legitimate power to randomly smash things just because he'd personally like for them to be different.

People acted like this constantly; the 70s are incredibly sanitized in the popular imagination. A government strike force murdered four student protesters at Kent State in 1970, and Boston had 40 race riots between 1974 and 1976.

I think frontier AI research should be outlawed until such time as there's a broad consensus on how society ought to deal with it. This would have to be coordinated internationally to be effective, but I think that would be achievable if the US sent a credible signal by forcibly shutting down any one of the major labs.

Even supposing we could somehow get the political will to do this, how would you write such a law? What counts as “AI frontier research”? How would you write a regulation around that that isn’t trivial to bypass without banning general computing itself?

As I said in a sibling comment, we're fortunate that training modern AIs requires large quantities of specialized compute. We just have to restrict GPU sales and outlaw GPU farms. I don't deny that it would be a seismic, controversial change, but I don't think it's terribly hard to implement if we can reach a consensus that we want to implement it.

This is never going to happen. If something can be done, it will be done.

> If something can be done, it will be done.

What does this mean? It's obviously false on its face.


It means that if something is physically possible, someone will be doing it, regardless of legal, moral, or social barriers. False on its face? Not that long ago, global public opinion was mortified at the news that newborn twins in China had been genetically modified. I am old enough to remember the outrage in the late 90s as the world watched the first cloned sheep grow up, get sick, and die. It was possible to do, so someone had done it.

The point is that with the use of law, morality, and social pressure, we can moderate the frequency and scale of some phenomena, but we cannot stop them. I think this idea is what prevents some bans: "If the Chinese can do it, and we stop ourselves from doing it, they will gain an advantage and we would lose." Substitute "the Chinese" with whoever is the opponent at any given point in time and you have a rather plausible explanation for why things were the way they were.


There were historical worries about whether a ban would be feasible, but frontier AI research as we understand it today requires large amounts of specialized compute. Even if we couldn't or wouldn't destroy the chips, we could imprison anyone who tries to start a large training run, the same way we imprison anyone who tries to buy enriched uranium.

Yes, that is true, but it's not my point. I am not saying it'd be impossible to find the people who are doing it. My point is that there will always be a group of people who'd be willing to do potentially dangerous things as long as those things are possible and are believed to provide some sort of advantage. For that reason, those people will either be in decision-making positions themselves or have a good enough offer for those who are. Speaking of uranium: I don't think AI is anything like it (although the AI industry propaganda really wants us to believe that), but even there we have examples of countries that pursued nuclear weapons both successfully and unsuccessfully, as well as countries that could have them but choose not to. So the ban itself isn't necessarily the main point here.

All three of these problems are thoroughly solved by widely available tools.

They are? Is your LLM ready to run your organization without further input from you or anyone? Do you realize that "memory" requires eating your hilariously small context window?

Have you not had a discussion with Opus where it insists it is correct about something it is objectively wrong about for several turns?


That seems like an unreasonably high standard. I like to think that I have memory, agency, and self awareness, but I'm not ready to run my organization without further input from anyone.

> Do you realize that "memory" requires eating your hilariously small context window?

I do! LLMs are structured differently than humans, so the component we call "memory" corresponds to what humans call "short-term memory"; practical long-term memory for an LLM looks much more like what a human would call "let me write this down". But you can, and commercially available systems do, load it into context on demand when it's needed for some problem or another.
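To make that concrete, here's a toy version of the "write it down, load it on demand" pattern. The store and the keyword-overlap scoring are stand-ins of my own, not any particular vendor's API; real systems typically score with embeddings, but the shape of the loop is the same:

    # Toy external "long-term memory": notes live outside the context
    # window, and only the relevant ones are loaded back into the
    # prompt. Keyword overlap stands in for the embedding search a
    # real system would use; everything here is illustrative.

    memory_store = []

    def remember(note):
        """Write a note down outside the context window."""
        memory_store.append(note)

    def recall(query, k=3):
        """Load the k most relevant notes back into context on demand."""
        q_words = set(query.lower().split())
        scored = [(len(q_words & set(n.lower().split())), n) for n in memory_store]
        hits = [n for score, n in sorted(scored, reverse=True) if score > 0]
        return hits[:k]

    remember("User prefers 26-week T-bills on auto-reinvest.")
    remember("User's org does quarterly planning in March.")

    # Only the matching note spends any context budget:
    print("\n".join(recall("t-bills for the user")))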


> memory, agency, and self awareness

The LLM only currently has the illusion of these things. Hence the bubble.

I know that you (or anyone else), as a human being, don't merely have the illusion of these things.

This is not like the car replacing the horse for transportation. The LLM as-is cannot fundamentally replace the person. They require the agency of a human to take turns at all, and even more so to enact change in the world.

Your LLM does not actively engage in the world because it does not experience anything. It only responds to queries. We can do a lot with that, but it's not intelligence. It can't say, "Oh hey SpicyLemonZest, I was thinking and had an idea the other day," because it has nothing between each query.


The "alignment problem" as traditionally understood assumed a different path to AI development, where the best AIs wouldn't primarily operate on a substrate of human language. If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem, and it seems no less likely today than it did in 2023.

> If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem

That's massively moving the goalposts on what counts as "an existential problem." The original framing was not economic dislocation but actual existence, i.e. existential. This new framing is a retreat to a way-of-life argument.

And I'm still calling baloney! The "AI will kill us all" argument backfired on Altman et al, so now we have an "it'll take over all the jobs" pitch. But it's all smoke and mirrors for investors. We have no good reason to expect current AI methods will lead to an AGI that can not only do most human labour, but do so at a competitive cost.


I don't understand how you can consider the AI industry to be in any sense retreating from prior claims. The existential problem remains an active near-future risk; you're hearing a lot about the jobs problem because it's already here, now, today. Do you not remember how much less capable AI systems were in 2023, and how implausible it seemed that they could become as good as they are now without new theoretical breakthroughs?

They didn't expect anything else and aren't surprised. "The X industry is discovering..." is one of those stock phrases that people just kinda deploy willy-nilly; the article contains no argument that anyone in the AI industry didn't know or didn't expect this.

Presumably these companies on the verge of an IPO don’t want the public to hate them or their product. It wasn’t exactly a calculated maneuver; they made a decision to leverage fear-based marketing and it backfired.

It wasn't any sort of maneuver! It's what they genuinely believe! Both OpenAI and Anthropic have been telling people about the existential risk of powerful AI since the very day they were founded; OpenAI has been at it since 2015, 8 years before they had any meaningful product to market.

Sam Altman still says, after being the victim of anti-AI violence, that "the fear and anxiety about AI is justified" and "it will not all go well".

People simply refuse to believe that AI companies are serious about this, and get twisted into knots trying to understand why AI companies would choose this messaging under the premise that they can't be serious.

