Hacker News | rmwaite's comments

This matches my experience as well. Some of my earliest rsync experiences were with the Cygwin version, and I remember scratching my head and wondering why people raved about this tool that ran so slowly. Imagine my surprise when I tried it on Linux. Night and day!

In my case, once I set the limit to 0 minutes and refreshed the home tab I don't see Shorts recommendations at all. There is still a Shorts tab at the bottom that tells me I have no remaining time (and allows me to trivially override it, sigh). But otherwise this seems to have cleaned up the "feed" that I see in the app of anything Shorts related (for now, at least).


IP addresses were always meant to be globally reachable. Of course, NAT has corrupted this - which is why NAT is a scourge.


And so are firewalls?


Firewalls are a choice that the end user makes.

Non-routed prefixes are a limitation imposed by the ISP that the user can't do anything about.


I think AI can add a lot of functionality, but on the margins, making things “work better”. I think AI as a focal point, where it is /the/ feature, is a mistake for most things. But making code completion work better or suggestions more accurate? Those are things that are largely invisible UI-wise.


I’ve always seen the hubris as an essential component of doing things “you didn’t know you couldn’t do.” A lot of great ideas are discounted as impossible and it takes hubris to fly in the face of that perceived impossibility. I reckon most of the time it doesn’t work out and the pessimism was warranted—but those times it does work out make up for it.


To be honest, if you’re using a tool that stores things as trees and blobs and almost every part of its functionality is influenced by that fact, then you just need to understand trees and blobs. This is like trying to teach someone how to interact with the file system and they are like “whoa whoa whoa, directories? Files? I don’t have time to understand this, I just want to organize my documents.” Actually I take that back, it isn’t /like/ that, it is /exactly/ that.
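The model really is that small. Here's a toy sketch of content-addressed blobs and trees (the helper names are made up; this is not git's actual code, just the idea behind it):

```python
import hashlib

def store(obj_db, kind, data):
    # Content-addressed storage: the key is the hash of type + content.
    # This is essentially what git does with blobs and trees.
    payload = f"{kind} {len(data)}\0".encode() + data
    oid = hashlib.sha1(payload).hexdigest()
    obj_db[oid] = (kind, data)
    return oid

db = {}
# A blob is just a file's contents.
blob_id = store(db, "blob", b"hello world\n")
# A tree is just a directory listing that names blobs (and other trees).
tree_id = store(db, "tree", f"100644 hello.txt {blob_id}".encode())
```

Same content always yields the same id, which is why git can deduplicate and why nearly every command is ultimately shuffling these hashes around.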


I see your point but … trees and blobs are an implementation detail that I shouldn’t need to know. This is different from files and directories (at least directories) in your example. What I want to know is that I have a graph and am moving references around - I don’t need to know how it’s stored.

The git mental model is more complex than CVS, but strangely enough the docs almost invariably refer to the internal implementation details, which shouldn’t be needed to work with it.

I remember when git appeared - the internet was full of guides called ‘git finally explained’, and they all started by explaining the plumbing and the implementation. I think this has stuck, and it does not make things easy to understand.

Please note I say all this having been using git for close to 20 years, being familiar with the git codebase, and understanding it very well.

I just think the documentation and UI work very hard at making it difficult to understand.
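The graph-and-references view really does fit in a few lines. A sketch with made-up names (not git internals, just the mental model):

```python
# A commit DAG: each commit points at its parent(s).
commits = {
    "a1": {"parents": []},
    "b2": {"parents": ["a1"]},
    "c3": {"parents": ["b2"]},
}

# Refs are just movable labels pointing into the graph.
refs = {"main": "c3"}
HEAD = "main"

# "git reset --hard b2" is conceptually just moving a label:
refs[HEAD] = "b2"

# "git branch topic c3" just creates a new label:
refs["topic"] = "c3"
```

Nothing here requires knowing how objects are stored on disk, which is exactly the point: the docs could lead with this and leave blobs and packfiles for a later chapter.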


Thanks for this, I can’t believe this never occurred to me to try to do.


If you read carefully you will see that they never said AI has a theory of mind.


Then what do we do? lol.


We understand the meaning that we wish to convey and then intelligently choose the best method that we have at our disposal to communicate that.

LLMs find the most likely next word based on their billions of previously scanned word combinations and contexts. It's an entirely different process.
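For a concrete picture of that “most likely next word” loop, here is a toy illustration using simple bigram counts in place of a trained neural network (an LLM's model is vastly more sophisticated, but the decoding step is the same shape):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which: a crude stand-in for a trained model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Greedy decoding: pick the single most likely continuation.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" follows "the" twice, "mat" only once
```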


How does this intelligence work? Can you explain how 'meaning' is expressed in neurons, or whatever it is that makes up consciousness?

I don't think we know. Or if we have theories, the error bars are massive.

>LLMs find the most likely next word based on its billions of previously scanned word combinations and contexts. It's an entirely different process.

How is that different than using one's learned vocabulary?


How do you know we understand and LLMs don't? To an outsider they look the same. Indeed, that is the point of solipsism.


Because unlike a human brain, we can actually read the whitepaper on how the process works.

They do not "think", they "language", i.e. large language model.


What is thinking and why do you think that LLM ingesting content is not also reading? Clearly they're absorbing some sort of information from text content, aka reading.


I think you don't understand how LLMs work. They run on math; the only parallel between an LLM and a human is the output.


Are you saying we don't run on math? How much do you know of how the brain functions?

This sort of Socratic questioning shows that no one can truly answer these questions, because no one actually knows how the human mind works, or how to distinguish or even define intelligence.


So do neurons.

