Hacker News | simianwords's comments

Wishful thinking and you sound flat out scared.

Making companies rich is how society functions. It’s not zero sum. LLMs aren’t going anywhere.

In 5 years, LLMs will still be here, and their usage will have increased, not decreased. You aren’t able to face this fact, so instead you hide behind slurs like “shills” and “astroturfers”.


> Wishful thinking and you sound flat out scared.

The spin doctor's favorite move: Make your opponents look like they're irrational.


If you think so: let’s have a bet. Do you think LLM usage will increase or decrease in 5 years? You can make me look stupid in 5 years.

Why wait?

Bitcoin "increased" over multiple five year horizons. That doesn't mean it solves a problem or actually has a concrete use case beyond gambling, which is precisely the point of the article - that you appear to have missed.

>That doesn't mean it solves a problem or actually has a concrete use case beyond gambling

The ability to move funds without government authorization serves as a valuable hedge to many.


The asset is too volatile to do that while guaranteeing preservation of value.

No matter. Miss me with these 3-month-old LLM accounts shilling the latest tech snake oil.


You're absolutely right it's volatile. But people are happy to kick some money into that asset and let it bounce up and down. Something is better than nothing for this use case. If they ever want to cash out under non-rushed circumstances, well it's volatile so just wait 6-12mo for a reasonable high.

Lots of economies and industries are volatile too, or subject to volatile politics.


> You're absolutely right

This got a smirk out of me.


What you describe is tantamount to gambling.

>What you describe is tantamount to gambling.

Lolwut? What are you talking about?

Parking some money in bitcoin in case you have government problems is no different from shoving money into land or gold in case you have stock market problems. The return is beside the point; it could be crap for all you care. The point is at least all your assets won't go to shit at the same time. It's damn near the opposite of gambling.


People who point out the immaturity of arguments like this and the simplicity of the worldview that espouses them aren’t “spin doctors”. The author has very clearly not thought deeply about what the real issues are: authoritarian governments with surveillance (in 3-5 years, a powerful LLM literally watching every single camera and signal feed on earth will be cheap enough that it will be done), alignment (the technical and non-technical aspects of this), etc. You can choose from a huge list of real problems and do what adults do, which is: see the writing on the wall, remember that the world doesn’t operate with the naïveté of an undergrad who has just stumbled upon a new topic and forms all of their opinions from poorly written newspaper headlines, roll up their sleeves, and contribute real solutions.

Anyone who thinks any force on earth can stop this train is just really out of touch. It’s coming so make the best of it and contribute to making this ship land as softly as we can.

> As Lopatto points out: “Normal people aren’t running around like chickens with their heads cut off, trying to automate every single part of their lives.” Their biggest exposure to AI is using a tool like ChatGPT as a more verbose Google, or perhaps occasionally formatting an event itinerary. This is cool, and even useful, but at the moment it is probably less positively impactful in their lives than, say, the arrival of the iPod in the early 2000s.

If this had been written in 2022 it would be half coherent. To write that paragraph with a straight face today requires an impressive amount of ignorance.


Is the movement to pause or halt datacenter construction filled with naïve children or sleeve-roller-uppers?

Naive children, unless they are simply trying to push better regulations and/or pressuring the builders to meet certain criteria that the law may not currently enforce. Data center construction is far from some evil thing; it’s just that, done poorly, it can really screw people over. It should easily be a win-win.

> Generative AI has no shortage of ways that it might, with care, be shaped into genuinely useful products, but this shaping needs to actually happen before the hyper-scalers earn the right to continually harass the psyche of billions of people with breathless pronouncements. Most people don’t care that GPT 5.5, released late last week, underperformed Opus 4.7 on SWE-Bench Pro. They want the AI companies to let them know when they have a product that will actually and notably improve their lives, and until then, they want these companies to leave them alone and try their best not to crash the economy.

I don’t want to be snarky or dismissive without giving it thought, but this line of thinking is childish and shows a lack of understanding of how products enter the market. Companies that invent products generally cannot predict all the uses. That’s what the free market exists for - push your product, see how people use it, get feedback, and improve. I find that beautiful.

Does the author want planned RCTs to understand how they might be useful, and to slowly release them only after every dimension of utility is confirmed? This reeks of a Euro-pilled mindset - a perverse, parental kind of mindset where the state decides how products may be used.

Silicon Valley doesn’t work like this. They release products, and the second- and third-order effects cannot be predicted. Who knew NVIDIA chips might be used for LLMs when the company first started producing them for gaming?

The author (I assume) is feeling some latent anxiety over the extreme speed at which things are progressing. They are grieving by finding reasons to stall this progress, falsely claiming that people don’t care about LLMs until there is confirmed proof that LLMs can improve their lives.

But this is fundamentally a childish mindset, and flat out wrong. People do care - there are always enthusiasts who are top of the game and who the broader public follow.


> I don’t want to be snarky or dismissive without giving it thought, but this line of thinking is childish and shows a lack of understanding of how products enter the market.

I don’t want to be snarky, but your line of thinking is childish and shows a lack of understanding of how products should enter the market.

> Silicon Valley doesn’t work like this. They release products, and the second- and third-order effects cannot be predicted.

Unpredictability is a very good argument for slowing down data center construction and AI deployments until the tech can pay for itself.

Right now, all the "order effects" are heavily subsidized with no end in sight, and they needlessly drain too much precious capital out of the rest of the economy. When the "effects" come slower, we'll have a better chance of evaluating them without exposing the economy to the excessive drain and risk it's subjected to now.

If you remove the hype and exaggerations, the whole show looks like just another market bubble for extracting money via losing ventures in a negative-sum game.


Yes. As if there is some dichotomy where the evil out of touch Silicon Valley are making a terrible misjudgement of what society “wants”. They have built a powerful technology with enormous economic and geopolitical implications and the author thinks they shouldn’t have because “people didn’t ask for this”. People don’t ask for anything they don’t understand: electricity, the internet, the steam engine — all of them were strange and new and scary and “no one” (in terms of people on the street) asked for them.

what? the post is literally titled "Amateur armed with ChatGPT solves an Erdős problem". stop spreading FUD about unaffordability

They used ChatGPT Pro to solve it. Over 50% of people in the world couldn't afford ChatGPT Pro ($200/mo) even if they spent more than half of their income on it. [1]

What was that about "spreading FUD about unaffordability"?

[1] https://ourworldindata.org/grapher/share-living-with-less-th...
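The affordability claim above can be checked with quick back-of-envelope arithmetic (a sketch; the $200/mo price comes from the comment, and the per-day income threshold is derived, not taken from the linked chart):

```python
# Back-of-envelope check of the affordability claim above.
# ChatGPT Pro costs $200/month (figure from the comment).
monthly_price = 200
annual_cost = monthly_price * 12  # $2,400/year

# Spending "more than half of their income on it" implies an income
# of at least twice the annual cost.
required_income = annual_cost * 2          # $4,800/year
required_per_day = required_income / 365   # roughly $13/day

print(f"Annual cost: ${annual_cost}")
print(f"Income needed at 50% of income: ${required_income}/yr (~${required_per_day:.2f}/day)")
```

That ~$13/day figure is the kind of threshold the Our World in Data poverty charts report shares of the world population under, which is how the "over 50%" claim is being made.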


They didn't buy ChatGPT Pro themselves. You could've done the same as the students in the article and gotten a free subscription if you were interested in this instead of trolling.

> You could've done the same

Please show me the steps to get a $200 subscription for free that works 100% of the time regardless of who you are. I'm listening.


ChatGPT flattened the difference between top .0001 percentile mathematician and an amateur. This is the definition of making intelligence more available.

You are exaggerating the situation by essentially claiming that since some people can’t afford 200 dollars, ChatGPT is not democratising intelligence. It’s a bit strange to claim this because, according to you, it only becomes affordable when the maximal number of people can afford it. It’s a bit childish.

Directionally it is democratising. Are more people able to afford higher level intelligence? Yes.


> ChatGPT flattened the difference between top .0001 percentile mathematician and an amateur

It flattened the difference between a top epsilon percentile mathematician and an amateur with money. It didn't flatten the difference between an amateur with a little money and an amateur with a lot of money. It widened it. That's the part I'm scared about.

You are shrugging this off because it currently isn't that expensive. But we're talking about the massively subsidized price here, which is bound to get orders of magnitude higher when the bubble pops. Models are also likely to get much better. If it gets to a point where the only way to obtain exceptionally high intelligence is with an exceptionally high net worth and vice versa, how is that going to democratize anything?


What you are saying is similar to saying "computers and the internet don't democratise intelligence and access to information because some supercomputers exist". It's pedantic and frankly childish.

This is the most pedantic argument ever.

"All men are created equal" is obviously not literally saying all humans are 100% equal. Just like how "ChatGPT equalizes intelligence" is not saying ChatGPT equalizes the intelligence of all humans to a level of 100%.

I'm not going to spell out what I meant by "ChatGPT equalizes intelligence". You can likely figure it out for yourself, because the problem doesn't have anything to do with your reading comprehension. The problem is more akin to self-delusion: you don't want to face reality, so you interpret the statement from the most absurdist angle possible.

The admins at HN actually noticed this tendency among people and encoded it into the rules: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."


It is not “absurdist” to call out a baseless claim that doesn’t take into account over half of humanity, a percentage that will grow even further once investor money inevitably runs out. If your response to that is to wave away more than 4 billion people, then you’re not even trying to look like you care about reality, you’re just trying to make yourself feel better with some made-up nonsense.

You seem to be under the misconception that you somehow “own” ChatGPT or are entitled to the insight it provides. You don’t and you aren’t. You are at the mercy of trillion-dollar private companies that owe you nothing. Their products’ intelligence is not your intelligence. Whatever profits you’re seeing from it, it’s currently losing them money. And when that changes, so will your image of them as benefactors of humanity who make intelligence available to all.


> It is not “absurdist” to call out a baseless claim that doesn’t take into account over half of humanity, a percentage that will grow even further once investor money inevitably runs out.

I love the confidence behind this claim. You can run open models on your laptop today that compare to the best models from 2 years ago. But sure, spread your FUD about investor money running out.


It is fucking absurdist and pedantic when I hear this drivel coming out of the mouth of a hypocrite. You’re already part of the privileged few. Every single thing you do, from drinking clean water to writing your bullshit on the Internet, is the result of technology being distributed among the top percentage, exactly as you argue against. And as a recipient of such benefits you should have the intelligence to see that even that much matters. Why don’t you raise your shit against the assholes who are really making things unequal: Internet service providers and their astronomical fees, which don’t equalize the world enough for homeless people to have access to the internet. That’s society’s real problem according to your genius logic... so stop your tirade against AI, as there are bigger fish to fry.

> You seem to be under the misconception that you somehow “own” ChatGPT or are entitled to the insight it provides.

Right now, for the price of a new car, I can definitely get enough hardware to run a local LLM of ChatGPT quality at home. And this is just the status quo. The demand for this technology and the trajectory of price improvements point to a future where you can run one for the price of a new computer. Wake up.

But who the fuck cares? The point is that AI is equalizing intelligence, and you’re just throwing in tangents and side branches to try to disentangle the obvious general truth, which I will repeat: AI is fucking equalizing intelligence, and if you don’t agree, you’re absurd.


You open with an insult directed at the HN community. Then you call me names. Then you lecture me about HN guidelines. Then you post this.

Flagging because this kind of language has no place on HN.


Oh, if you're so butthurt by this, go ahead and call me names if you want. Hypocrite is not really that much of an insult, and it's true. You called someone absurd as well.

> Then you lecture me about HN guidelines.

Not a lecture. An example of how it's a well-known issue. I'm obviously not a rule follower myself, and your content is not really fit for HN either. Once you flag, the entire conversation is over. I don't really care, but if I were you I'd rather end the argument by being right instead of running away and tattling to the authorities. Up to you.

Maybe the admins come in and block the convo, delete it, and/or ban me. Who knows. I don't care. The fact of the matter is... I'm right, and you know it. Everything I said here is true, and you're ending it this way because you can't face it.


<meta> You're incredibly rude but at the same time.... 100% right. On first reading it was quite off-putting, but your conclusions are solid. Emotions take over rationality, and people - just like thinking models - reverse-engineer a logical-sounding explanation for their actions, they don't "expose" their internal chain of thought.

Maybe the models are closer to us than we're comfortable to admit.


How can you ask this question on a post titled "Amateur armed with ChatGPT solves an Erdős problem"? Are you looking for some randomised control trial? omg

We just look at comments from AI boosters and it is self-evident that no intelligence is being equalized.


Idk, going out on a limb and guessing the folks who hang out on erdosproblems.com aren’t run-of-the-mill dumbasses. The prompt, if you look at it, is actually quite clever. Not as clever as the proof. But far from the equalization OP posits.

Why be such an absolutist?

How about I caveat it the way you want:

AI equalizes intelligence in the sense that it closes the gap. Not perfectly, not infinitely, but directionally. The distribution compresses. The floor rises faster than the ceiling, so people who used to be far apart end up operating much closer together.

You can already see it in the Erdős example. The person who wrote that prompt wasn’t some random idiot. It took real cleverness to even set it up that way. But the fact that they could get that far, with assistance, is exactly the point. The distance between “amateur” and “expert” shrinks when the tool fills in large parts of the path.

Now extend that forward. Today it’s one clever person, one problem, one careful interaction. As the tooling improves, that same pattern scales. Better reasoning, better search, better guidance. The amount of lift the tool provides increases, which means the gap continues to narrow.

All the supposed “counterpoints” people bring up are already implied in the claim. “Equalize” here obviously means moving closer to equality. Is it NOT obvious that LLMs don't actually equalize intelligence to a level of 100%? Do I actually need to spell that out? If there was nothing at stake, I wouldn't need to.

But instead people latch onto the most absurd version possible, knock that down, and act like they’ve said something meaningful. It’s the same mindset as that guy demanding a formal paper or citation for an observation you can see unfolding in real time. Not because it’s unclear, but because engaging with the actual claim is uncomfortable. It’s easier to distort it into something extreme and dismiss it than to admit the gap is closing.
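The "floor rises faster than the ceiling" claim above can be sketched as a toy model (my own illustration, not the commenter's math: assume the tool lifts each person a fixed fraction of the way toward a shared capability ceiling, so weaker users gain more in absolute terms):

```python
# Toy model of compression: everyone is lifted a fixed fraction of the
# remaining gap to a shared capability ceiling.
ceiling = 100
lift = 0.6  # fraction of the gap to the ceiling the tool closes (assumed)

skills = [20, 50, 90]  # amateur, intermediate, expert (arbitrary units)
with_tool = [s + lift * (ceiling - s) for s in skills]

spread_before = max(skills) - min(skills)       # 70
spread_after = max(with_tool) - min(with_tool)  # 28

print(with_tool)
print(spread_before, spread_after)
```

Under these assumptions the amateur gains 48 points while the expert gains 6, so the spread shrinks even though everyone improves: the distribution compresses without reaching full equality.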


I’ll agree the top of the stack may have compressed downwards. But that leaves open the possibilities that (a) the ceiling has risen and (b) the floor isn’t really moving, inasmuch as productively engaging with any tool required baseline intelligence.

More pointedly, I don’t think anyone who opposes AI does so because they want to remain the smart kid in the room.

> If there was nothing at stake, I wouldn't need to

You’re on HN buddy. If you measure stakes by how pedantically you’re challenged, everything will rise to existential terms.


When I said "stake", I meant HN is especially vulnerable because the stake is the HN community's identity as programmers. Consistently on HN you see articles on IQ voted up. People take pride in their intelligence and programming skills here... and AI is dismantling that identity piece by piece.

It's more than being the smart kid in the room. The future points to a place where programming is just a one-hour tutorial on how to tell AI to do it for you. What happens to you if your entire identity and career was built on being a programmer, as many people's here are? THAT is what is at stake.


Directionally it is correct - an amateur wouldn't have been able to do this without ChatGPT. You can't expect maximal democratisation.

God, do people not read my posts? I wrote this: "It also exposes their ACTUAL intelligence which is to say most of HN is not too smart."

These types of people need citations for the time of day. They don't know how to debate or discuss in abstract terms. Reality freezes over if no scientific papers exist on the topic.


> God, do people not read my posts?

I don't know, man, maybe you haven't gotten equalized yet, but the things you say are not very smart. Getting angry about it isn't a good argument either.


Bro nobody is angry. You’re misreading everything. That line you quoted is more of a slight annoyance.

You probably need to look at your own reading comprehension skills before you comment on my intelligence.


> These types of people need citations for the time of day. They don't know how to debate or discuss in abstract terms. Reality freezes over if no scientific papers exist on the topic.

Oh man you have captured the exact emotion I had. These people need randomised control trials to prove any inane thing lmaoo. Reddit brained I tell you


why? I think both generally track. And I won't appreciate category errors such as calling it similar to smoking

>I think both generally track

Well, they simply don't. Everybody is forced to use it at my (>100k employee) company. They are tracking it - if you don't use it every single day, you get flagged. Tons of people hate this.

I use it constantly. Everybody I work with uses it constantly. We use it voluntarily (we are the software folks). And we would all give it up in a heartbeat if there was a way to destroy it.

I hear this sentiment constantly online and from friends at other large companies. You are not observing the general reality.


That's your experience, perhaps. I use it extensively in my work, and some in my personal life. I also think it's a net negative for society, will likely be highly destructive, and I would be happy to give up my use of it in exchange for its elimination.

Yep, anyone familiar with the Red Queen's race will recognize the importance of staying on top of AI advancements.

But that position is orthogonal to whether they like having to participate in the race to begin with.


How many of those billion users are using it because they keep being told to become proficient with AI or risk being left behind? I guess a decent enough slice, as the FUD campaign is getting more persistent everyday.

> I think they should be nuked from orbit

This kind of hyperbolic stance amongst the broad public is concerning.


It should definitely be concerning to the makers of genAI!

Like I was telling someone else, what we've made here is the ultimate double-edged sword. Use it right and there's great glory. Use it wrong and you're a lifeless, empty husk. In this case, though, people get the far greater dopamine hit from using it wrong.

Algorithmic social media is like this and we already see people rotting out on the infinite scroll. And genAI makes social media look like 70s weed. The question is: on the whole, which side of that double-edged blade is going to be doing the slicing?


Maybe you should consider that they have good reasons to feel that way? That the "broad public" isn't necessarily wrong, stupid, and ignorant on every single topic?

The FUD about LLMs will never get old. The way I know and trust LLMs is the same way a manager trusts their reports to do good work.

For most tasks, the complexity/time required to verify a task is << the time required to do the task itself. Sure there can be hallucinations on the graph that the LLM made. But LLMs are hallucinating much less than before. And the time to verify is much lower than the time required for a human to do the task.

I wrote a post detailing this argument https://simianwords.bearblog.dev/the-generation-vs-verificat...
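The generation-vs-verification asymmetry argued above can be illustrated with a toy example (my own sketch, not from the linked post): producing an answer can cost far more than checking a proposed one, the classic case being factoring versus multiplication.

```python
import time

def generate(n):
    """Find a nontrivial factorization of n by trial division
    (the slow 'do the task' step)."""
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            return p, n // p
    return None

def verify(n, p, q):
    """Check a proposed factorization with one multiplication
    (the cheap 'review the work' step)."""
    return p * q == n and 1 < p < n

n = 999_983 * 1_000_003  # product of two primes

t0 = time.perf_counter()
factors = generate(n)
gen_time = time.perf_counter() - t0

t0 = time.perf_counter()
ok = verify(n, *factors)
ver_time = time.perf_counter() - t0

print(f"generation: {gen_time:.4f}s, verification: {ver_time:.8f}s, valid: {ok}")
```

Verification here is a single multiplication while generation loops nearly a million times, which is the shape of the manager analogy: you can cheaply check work you could not cheaply produce.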


FUD? You are missing the point entirely, and so does your blog post.

Are LLMs a good dictionary of synonyms? Perhaps, but is it relevant? Not at all.

Are you biased when a solution is presented to you? Yes, like all humans.

Is it damaging when said solution is brain-dead? Obviously.

Are you failing to understand that most (if not all) of a manager's work is human-centric and, as such, cannot be applied to a non-human? Obviously...

You trust a machine's intent. Joke's on you: it has no intent at all, and it will break the "trust" you pour into it without even realizing it.

You say that the LLM does a better job than you. Perhaps that says it all?


Are you asking yourself questions and answering them without seeing my point? Yes

Now that people know what ai slop is, they start having higher expectations from prose because vacuous articles like these might be misconstrued as slop

It's a shame you feel that way as articles like this were extremely common 30 years ago in the Saturday and Sunday papers. And I do miss them.

I wonder if it's a generational thing, where now every essay must focus on one idea instead of taking a meandering path of curiosity to the author's final point.


I agree with you that this is a Parade-level essay, and I think those were always basically fluff: enough to give a sense that something had been learned about two or three seeming-disparate topics from a great distance, but not enough to actually satisfy curiosity about any of them. In the 80s as a pre-teen, those were the first part of the Sunday paper I claimed, but they never did more than whet.

I don't think it's even about staying on one topic. It's just that the topics were too broad for such a short essay, even had it been relatively dense, and then to that problem the author added long phrases where a short phrase would do, as if trying to pad the length to some predetermined mark.


Technology and productivity have reduced more death than any redistribution

What a hilariously moronic comment. In what way is this true? When the cotton gin was invented, slavery absolutely exploded and productivity rose like several 100,000%.

Was this meaningful for humanity? Keeping 20% of the population in bondage because we didn't want to upset the productivity gains of slavers in the south? All progress IS progress after all right?

Government welfare programs have done more to decrease poverty than anything else in the history of human existence. Also one doesn't have to go back in time 200+ years either to see the massive failures of neoliberal economics. Who thinks their lives are better because they have slightly faster phone while they continue to not afford healthcare, can't educate themselves, provide for children, or own homes.

You think 50% of the population seeing their lives materially decrease is meaningful? Good grief, do you honestly care more about trinkets than children? Actually don't answer that for the sake of your soul.


Literally nobody is talking about slavery here, dude. Did you post in the wrong thread?

Neoliberal? I bet you can't even define what that means, other than "something I don't like that I say to signal my virtue".


I'm a bit worried for politics in the next 10-20 years because the upper class believes in conspiracy theories like this.

All your welfare and redistribution were only possible because technology created the wealth that could be redistributed later. I would suggest researching this with a calm mind.

> Was this meaningful for humanity? Keeping 20% of the population in bondage because we didn't want to upset the productivity gains of slavers in the south? All progress IS progress after all right?

Slavery was actually leading to less overall productivity, abolishing it was crucial to make everyone richer and not just the slaves.

Apart from that, technology (and not redistribution) mostly resulted in much higher life expectancy and much lower poverty levels especially in developing countries like India and China.

You can ask your favourite LLM agent the primary reason the world achieved emancipation from poverty (hint it was because of economic activity and technology)


Who said redistribution?

This money could be invested in universal healthcare, or into AI research for medicine. But hey, I guess replacing developers and generating slop is more beneficial to our society.


Yes, replacing developers is better for sustained growth and reduced poverty than your pet projects.

Cloud did get cheaper. What are you saying?

I just ran a quick GPT check - EC2 prices have gone down by more than 80% after accounting for performance and inflation over the last 20 years.
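An 80% decline over 20 years can be restated as an annualized rate with simple compound-decay arithmetic (a sketch; the 80% figure is the comment's claim, not independently verified):

```python
# Convert a claimed ~80% effective price drop over 20 years into an
# equivalent annual rate of decline: (1 - r)^20 = 0.20.
total_remaining = 0.20  # 20% of the original effective price remains
years = 20

annual_factor = total_remaining ** (1 / years)
annual_decline = 1 - annual_factor

print(f"Equivalent annual price decline: {annual_decline:.1%}")
```

Under that assumption, an 80% drop over 20 years works out to roughly a 7.7% effective price decline per year.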

