Hacker News | nonethewiser's comments

Thanks for sharing. Maybe not as common as you think. I'd never heard that before.

It's perfectionism.

I always thought perfectionism meant extremely high achievement (at too great a cost). But it can also mean quitting without any progress because you can't accept anything less than perfect (which may or may not be achievable). Perfectionism can be someone procrastinating on a large task.


Do you think this guy's chances of getting away with it would have increased if he solicited favors from powerful people?

A pardon costs ~$1M; you just need to steal more than that and you're golden.

It's posturing. And a very predictable narrative. Of course the DOJ did the right thing here. But how can we frame it so that the DOJ=bad?

It's not surprising at all. Kids in sovereign nations are banned from all sorts of things. There is no governing body over sovereign nations that can simply ban land mines. You're talking about countries promising to reduce a common war power. The Ottawa Treaty is a treaty (i.e. mutually agreed-upon rules which can be exited) and not all countries have signed it.

Speak for yourself. I want it banned for kids.

Do you think this is about kids? It's about online identity and government surveillance and control.

Even if you think it is about kids, then take responsibility into your own hands: be a parent and prevent your kids from using it. Or do you just want to tell other parents to raise their kids the way you want? Then tell them that; don't hide behind a fascist police and justice system to force online ID for adults.


>Even if you think it is about kids, then take responsibility into your own hands, be a parent and prevent your kids from using it.

Common but bad argument. You've misunderstood what the age verification control is for. It's to hold online services accountable for illegally providing services to minors. A parent being negligent doesn't mean Facebook should not be held responsible for breaking the law.


I can't believe that we've managed to escalate past seeing parents as negligent for letting their kids walk home from school or play outside. Is this the new normal? You are negligent if you let your kids ... talk to people online? I uh, am outraged. What if kids start having thoughts their parents don't approve of?

Further, facebook users could choose to use platforms that don't exploit their users. By allowing facebook to benefit from the network effect, they are responsible for kids wanting to be on the platform. They give facebook power, and then facebook uses that power to exploit children. Yet facebook's adult users don't even see the need to defend themselves. To take responsibility.

Some of these laws affect mastodon, so these laws are not a regulation of facebook. What exploitive features of mastodon deserve such a ban? Are children addicted to mastodon's default chronological feed? It seems like it would benefit facebook to establish a regulatory moat that smaller non-ad-driven competitors don't have the resources to comply with. It certainly doesn't seem to have affected their stock.

Also there is reasonable suspicion that meta lobbied for similar laws: https://tboteproject.com/git/hekate/attestation-findings

So much for holding facebook accountable.

Oh, also: https://xkcd.com/743/


> illegally providing services to minors

> breaking the law

what law?


How so? We already have digital ID in Norway. How does providing that information to American corporations further Norway's surveillance goals?

You shouldn't be all or nothing here. To ignore the effect on teens is to be blatantly ignorant of social science itself. To ignore the implications of surveillance is to be ignorant of government surveillance. There is no value at either extreme.

Hacker News' feed is algorithmic

It's just waaaaaay easier to distribute a web app

For some things a desktop app is required (more system access) or offers some competitive UX advantage (although this reason is shrinking all the time). Short of that, users are going to choose web 95% of the time.


This points to our failure as an industry to design a universal app engine that isn't a browser.

Counterpoint: is the web browser not already fulfilling the "universal app engine" need? It can already run on most end user devices where people do most other things. IoT/Edge devices don't count here, but these days most of their data is just being sent back to a server which is accessible via some web interface.

Ignoring the fragmentation of course; although that seems to be getting less and less each year (so long as you ignore Safari).


>Counterpoint: is the web browser not already fulfilling the "universal app engine" need?

Counter-counterpoint: Maybe it's time to require professional engineer certification before a software product can be shipped in a way that can be monetized. It's to filter devs from the industry who look at browsers today and go "Yeah, this is a good universal app engine."


This was cathartic to read, thank you

Yes. But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language).

The impact on people's time, money and on the environment are proportional.


> But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language)

Does it? Have you compared a web app written in a sufficiently low level language with a desktop app?


Yes. I can run entire 3D games... ten of them in the memory footprint of your average browser. Even fairly decent-looking ones, not just your Doom or Quake!

And if we're talking about simple GUI apps, you can run them in 10 megabytes or maybe even less. It's cheating a bit as the OS libraries are already loaded - but they're loaded anyway if you use the browser too, so it's not like you can shave off of that.


I believe Firefox uses separate processes per tab, and most of them are over 100MB per page. And that's understandable when you know that each page is the equivalent of a game engine with its own attached editor.

A desktop app may consume more, but it's heavily focused on one thing, so a photo editor doesn't need to bring in a whole sound subsystem and a live programming system.


I think a browser is an inverted universal engine. The underlying tech is solid, but on top of it sits the DOM and scripting, and then apps have to build on top of that mess. In my opinion, it would be much better for web apps and the DOM to be sibling implementations using the same engine, not hierarchically related. You wouldn’t use Excel as a foundation to make software, even though you could.

Maybe useful higher-level elements like layout, typography, etc. could be shared as frameworks.


You are thinking along the same lines as me. The fact that the first thing to be standardized was HTML made it a fait accompli that everything had to be built on top of it, since that "guaranteed" <insert grain of salt> cross vendor compatibility.

There are many alternate histories where a different base application layer (app engine) could have been designed for the web (the platform)


We have failed to design a universal app engine…except for the one that dwarfs every other kind of software development for every kind of device in the world.

Can a single webpage address & use more than 4GB of RAM nowadays? I was filling 16GB of RAM with a single Ableton Live session in 2011.

Via electron I’m sure it could. In the main browser it’s probably best to cap usage to avoid having buggy pages consume everything. Anything heavy like a video editor you’d rather install as an electron app for deeper system access and such.

But JS arrays are limited to 4GB: an Array's length is defined as a 32-bit unsigned int, so it tops out at 2^32 - 1 elements.
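The 32-bit length cap is easy to demonstrate from any modern engine's REPL; a minimal sketch (assuming Node.js or a browser console, and relying only on the spec behavior that an invalid array length throws a RangeError):

```javascript
// Array length is a 32-bit unsigned integer per the ECMAScript spec,
// so the largest legal length is 2**32 - 1.
const a = [];
a.length = 2 ** 32 - 1;   // fine: just sets the length property (sparse array)
console.log(a.length === 4294967295); // true

// One past the limit throws a RangeError ("Invalid array length"):
let threw = false;
try {
  a.length = 2 ** 32;     // 4294967296 is not a valid array length
} catch (e) {
  threw = e instanceof RangeError;
}
console.log(threw); // true
```

Note this caps the *element count*; how many bytes that corresponds to depends on the element type (a Uint8Array at that length would be ~4GB).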

How about a webpage shouldn't ever address & use even 4GB of RAM! :O

But that's the thing: if I'm doing audio and buying 128GB of RAM for the sake of making music with my sample libraries, loading hundreds of parallel tracks, and scrubbing through them without lag or audio clicks, I absolutely want to be able to load them to play with them.

No. We did, it is the browser.

"The Browser" has turned out to be a pretty terrible application API, IMO. First, which browser? They are all (and have been) slightly different in infuriating ways going all the way back to IE6 and prior. Also, a lot of compromises were made while organically evolving what was supposed to be "a system for displaying and linking between text pages" into a cross-platform application and system API. The web's HTML/CSS roots are a heavy ball and chain for applications to carry around.

It would have been great if browsers remained lightweight html/image/hyperlink displayers, and something separate emerged as an actual cross-platform API, but history is what it is.


They're not that different, and it's a pretty good platform and pretty easy to program for. That's why it won.

It didn't win. It just survived long enough. The web is a terrible platform. I haven't ever shipped a line of "web code" for money and I plan to keep it that way until I retire. What a miserable way to make a living.

Perhaps you're taking the npm/react/vercel world to be the entire web? I agree that that stuff is a scourge. But you can still just write html and Javascript and serve it from a static site, I wrote an outline in https://incoherency.co.uk/blog/stories/web-programs.html which I frequently link to coding agents when they are going astray.

I wouldn't say that react is what's wrong with the web. I would say that the web is what's wrong with react.

When I was a kid I was running websites with active forums and a real domain name, and I did it with vBulletin and my brain. Someone bought the domain name and website off of me, haven't touched web tech since. I did use Wt at an old job once, but the "website" was local to 1 machine and there were no security concerns.

I envy your pure soul. I am one of many who has had, at times, been coerced through financial strain to write some front end code. All I ask for is, when the time comes, you try to remember me for who I was and not the thing I became.

Look at caniuse: if you see green boxes on all the current-version browsers, then you are good to go. If not, wait until the feature is more widely supported.

Steam is pretty close.

Remember Flash? The big tech companies felt a threat to their walled gardens. They formed an unholy alliance to stamp out flash with a sprinkle of fake news labeling it a security threat.

Remember LiveScript and early web browsers? It was almost cancelled by big tech because Java was supposed to be the cross-platform system. The web and JavaScript just BARELY escaped a big-tech smackdown. They stroked the ego of big tech by renaming it to JavaScript to honor Java. Licked some boots, promised a very mediocre, non-threatening UI experience in the browser, and big tech allowed it to exist. Then the whole world started using the web/JavaScript. It caught fire before big tech could extinguish it. Java itself got labeled a security threat by Apple/Microsoft for threatening the walled gardens, but that's another story.

You may not like browsers but they are the ONLY thing big tech can't extinguish, due to ubiquity. Achieving ubiquity is not easy, not even possible for new contenders. Pray to GOD every day and thank her for giving us the web browser as a feasible cross-platform GUI.

Web browser UI available on all devices is not a failure, it's a miracle.

To top it all off, HTML/CSS/Javascript is a pretty good system. The box model of CSS is great for a cross platform design. Things need to work on a massive TV or small screen phone. The open text-based nature is great for catering to screen readers to help the visually impaired.

The latest Wizbang GPU powered UI framework probably forgot about the blind. The latest Wizbang is probably stuck in the days of absolute positioning and non-declarative layouts. And with x,y(z) coords. It may be great for the next-gen 4-D video game, but sucks for general purpose use.


As I recall, Flash and Java weren't so much security issues themselves, but rather the poorly designed gaping hole they used to enter the browser sandbox being impossible to lock down. If something like WASM existed at the time to make it possible for them to run fully inside the sandbox, I bet they'd still be around today. People really did like Macromedia/Adobe tools for web dev, and the death of Flash was only possible to overcome its popularity because of just how bad those security holes were. I miss Flash, but I really don't miss drive-by toolbar and adware installation, which went away when those holes were closed.

Flash had quite a lot of quite severe CVEs; how many of those do you suppose were "fake news" connived by conspiracy (paranoid style in politics, much?) as opposed to Flash being a pile of rusted dongs as far as security goes? A lot of software from that era was a pile of rusted dongs, bloated browsers included. Flash was also the first broken website I ever came across, for some restaurant I never ended up going to. If they can't show their menu in text, oh well.

> design a universal app engine

You've reminded me of the XKCD comic about standards: https://xkcd.com/927/

Do you really want a universal app engine? If you don't have a good reason for ignoring platform guidelines (as many games do), then don't. The best applications on any platform are the ones that embrace the platform's conventions and quirks.

I get why businesses will settle for mediocre, but for personal projects why would you? Pick the platform you use and make the best application you can. If you must have cross-platform support, then decouple your UI and pick the right language and libraries for each platform (SwiftUI on Mac, GTK for Linux, etc...).


Platforms and app engines are orthogonal concerns. I agree that platform guidelines are worth preserving, and the web as a platform solves it by hijacking the rectangle that the native platform yields to it. Any app engine could do the same thing.

Please, for the love of all that is holy, not GTK.

>or offers some competitive UX advantage (although this reason is shrinking all the time).

As a user, a properly implemented desktop interface will always beat the web. By properly, I mean obeying the shortcut keys and conventions of the desktop world: alt+letter assignments for boxes and functions, Tab moving between elements, pressing PageUp/PageDown while in a text entry area for a chat window scrolling the chat history above and not the text entry area (looking at you, SimpleX), etc.

Sorry, not sorry. Web interface is interface-smell, and I avoid it as much as possible. Give me a TUI before a webpage.


> its just waaaaaay easier to distribute a web app

Let's also remember that it's infinitely easier to keep a native app operational, since there's no web server to set up or maintain.


No DNS, no DDOS, no network plane, no kubernetes, no required data egress, no cryptographic vulnerabilities, no surveillance of activity... It's almost like the push for everything to go through the web was like a psyop so everything we did and when was logged somewhere. No, no, that's not right.

>Would you trust AI generated mesh firmware?

This is also a loaded question. The only specifics they've offered are that he simply used Claude Code. Um... OK? Do the tests pass? Did his changes add any security flaws? Regressions that were untested?


>We have always been wary of AI generated code, but felt everyone is free to do what they want and experiment, etc. But, one of our own, Andy Kirby, decided to branch out and extensively use Claude Code, and has decided to aggressively take over all of the components of the MeshCore ecosystem: standalone devices, mobile app, web flasher and web config tools.

>And, he’s kept that small detail a secret - that it’s all majority vibe coded.

Without any more context, I am highly suspicious of this framing.

1) Someone "taking over" the ecosystem seems like an entirely different issue. How is this possible? Does it mean he's publishing things and people want to use them?

2) Is the code bad? It sounds like they had no idea he was using AI. That seems to imply there was nothing wrong with the code as-is. Why not judge it on its merits?

>The team didn’t feel it was our place to protest, until we recently discovered that Andy applied for the MeshCore Trademark (on the 29th March, according to filings) and didn’t tell any of us.

Taking this at face value, this is indeed hostile and bad.

But no, I'm not going to get outraged that someone is simply using Claude Code.


> Is the code bad? It sounds like they had no idea he was using AI. That seems to imply there was nothing wrong with the code as-is. Why not judge it on its merits?

Anyone that has used AI at all knows this isn't how it works. AI is extremely good at producing plausible-but-wrong outputs. It's literally optimised for plausibility, which happens to coincide with correctness a lot of the time. When it doesn't you get code that seems good and is therefore very difficult to judge on its merits.

With human written code it's a lot easier to tell if it's good or not.

There are exceptions to this - usually if you have some kind of oracle like that security work that used AddressSanitizer to verify security bugs, or if you're cloning a project you can easily compare the behaviour to the original project. Most of the time you don't have that luxury though.


It's also easy to overwhelm reviewers with far more code than they can possibly review. And it's also the hardest stuff to review where the code at surface level looks totally fine, but takes long hours of actual testing to make sure it works.

If the code is bad, why not lambast him for pushing bad code? If he's pushing straight to main, why have such shitty controls? If it got through review, what's the problem? AI in and of itself is not bad. They never substantiate beyond saying he used Claude Code a lot.

Just read the code. There is nothing keeping people from reading the code.

As far as I know the meshcore app and meshos are closed source

I can't find the source code either for the official MeshCore app or for MeshOS -- where can I read the code?

It's so weird right? I keep hearing people say it's open source but like... where's the code then? I've tried to find it. I can find stuff for core components, but they lock features behind a delay-wall in the app. If it was open source that stuff would be gone immediately.

https://github.com/meshcore-dev/MeshCore

The vibecoder was on MeshOS, which indeed is not open source


That's the board firmware, not the official MeshCore Android or iOS app.

I admit it was a super meta-funny response, because everyone I see that's into meshcore thinks it's open source because of that repository and posts it after a 30 second google search when questioned.

Then you ask them to actually find a source file for the apps and they go quiet. It's wild how much it looks like open source from a casual glance.


Do folks not write tests and review their own code (AI generated or not)?

Also, citation needed:

> With human written code it's a lot easier to tell if it's good or not


Agreed. I use meshcore and have multiple repeaters set up. I don't care about people using AI-assisted coding, but I think it should be disclosed, especially if it's closed source.

Now the trademark takeover seems crazy, especially given Andy hasn't contributed to the github project, only personal for-profit add-ons.

I do also think that the meshcore core team have "tacked on" and tried to enforce a stronger narrative with their anti-AI-coding bias.


It wasn't ai assisted coding, it was vibe coding from someone with no real coding background. A communication protocol can't be vibe coded, how do you enforce security if the person is unable to understand what the tool created?

Especially when they try to hide that they were using those tools in the first place


>It wasn't ai assisted coding, it was vibe coding from someone with no real coding background.

Can you elaborate on this?

I'd agree that is problematic. But that's not what the article said. The article just said he was using Claude Code a lot. It's ridiculous to conflate those two things. That's what I have an issue with.


> only personal for profit add ons

In that context it is quite logical to take a trademark out once the project is mature enough so you can profit off other people's work.

Considering their user base does not like the hidden vibe coded idea I don't think this is bias but a sane rationalisation.


There’s a lot of framing in how questions are asked. I’m going to bet asking the community “Would you like more features if they’re made using AI assistance?” is going to get wildly different results.

"AI assistance" isn't really an honest representation of the claim of what happened is, though.

"I wrote an iPhone app, so now I have the right to trademark 'Apple'."

Disagree: I applaud them for doing this. Anyone that says they've reviewed the 1000 lines of slop any AI has spit out is simply lying to everyone, and even potentially themselves, and has never done a single extensive code review in their life. Reading 1000 lines of text is one thing; reading and analyzing the complexity implications and edge cases in code - no chance. The "I've reviewed the slop" response is the reason why 0-days and leaks are happening more than ever: because no one really reads the code, cause "I vibe-coded it". An extensive and comprehensive code review may take days, and no slopper has ever done that. I'll get a 100-line PR and going over it can easily take hours, especially when something looks wrong and I need to test it. And it's a good reason why I'd never trust the "You are absolutely correct, apologies for the oversight, here's a revised version:"

>Disagree: I applaud them for doing this. Anyone that says they've reviewed the 1000 lines of slop any AI has spit out

You're assuming the thing in question - that it's just AI slop. They don't offer any insight on that - they merely say he used Claude Code a lot. It shows a real lack of understanding of AI tools to conflate those two things.


No, it shows responsibility and critical thinking: I don't care if it's slaupecode, alanturingai++ code or whatever other slop machine anyone uses. It's the fact that no one, and I mean NO ONE, is capable of code reviewing that much crap, and everyone that has written any code that was meant to last knows it: pouring out a ton of crap code, even as a human, is the easy part once you learn the basics. The hard part is analyzing the problem, considering and addressing edge cases, exploits, memory leaks, resources, constraints and limitations - this is what a programmer is fundamentally responsible for, and this is the only skill that matters. The code is just an abstraction, which was never the issue. Sloppers wash their hands with "look at how good it works" and "look at how beautiful the code is", and it's all fun and games until a production system crashes because, as I said, no slopper or even an actual programmer has the ability to review a 6000-line merge request, so they hit "lgtm". Or better yet, outsource the problem to another slop machine. Anyone who believes that "AI tools" are in any shape or form viable for production clearly does not understand a fundamental concept called propagation of uncertainty[1].

"Sooo productive", sure, buddy, and when it breaks you have no idea why it broke and ultimately you don't know the codebase. Instructions.md containing:

```
* make no mistakes
* don't hallucinate
```

Is the clear-cut evidence that we are speed running towards a technological apocalypse. Ironically, I'm in favour of it: if the house has to burn down for things to get better and people to get back to using their brains, I'm all for it.

[1] https://en.wikipedia.org/wiki/Propagation_of_uncertainty

