Hacker News: mh-'s comments

Wow, thanks for sharing this. Like the parent commenter, I have an increasing number of cheap devices like this. I wonder if anyone sells an "enclosed" version of this product. This won't survive 5 minutes in my house, haha.

A quick Google search turned up one, but it's $17 (!) and from a site I've never heard of and can't vouch for, so I'm not linking it here.

I'm really surprised there aren't a number of these all over Amazon. Or if there are, they're using different keywords to describe them, so I can't find them.


Adafruit sells enclosures for them, too.

Putting commentary about AI media aside:

How many iterations (arrangements and recordings) do you think a typical Billboard pop song goes through before it's ready for a final mix and mastering?

Go find a YouTube video of someone doing this work; it's kind of mind-blowing. Given how expensive studio time is, you start to understand why it costs so much for a popular artist to produce a polished album.


If I were a PM at Google trying to connect those two products, the far more obvious approach is the end of the creation pipeline having an "Upload to YouTube Music" button.

YouTube already hosts significant factories of programmatically generated music; just look for creators posting 3-hour or 10-hour mixes, or livestreams. For YouTube creators, and also for Photos editors (that's everyone with an unrestricted Workspace account), they provide a menu of "royalty-free" background music you can attach to a montage or your own videos, to avoid awkward silences or set the mood to vaguely, sorta, what you were hoping for.

So it's an evolutionary step in my view, rather than a revolution.


I feel like I make this comment a lot, but: how do you know?

You recognize obvious slop as slop. But this is a survivorship-bias-like phenomenon: you have no idea what goes unnoticed.

If you're talking about stuff posted to communities that are self-selecting for AI art submissions, that's another kind of fallacy.


Because good things are few and far between and it's pretty easy to discern provenance out of band in almost all of those cases.

I think this wasn't done earlier because the Suno (etc.) models couldn't output stems.

They could attempt messy stem splitting like all of the other tools have done for a few years now, but those aren't really usable in a production setting beyond small samples you were already going to chop/distort.


I agree, the tech was likely not there at the time.

I'm not sure it's there yet either, but imo your UX vision for it is the correct one, so if the tech still isn't quite there, it's just a matter of time. The AI-powered DAW is, imo, where this will eventually end up.


I think you've got a great app idea on your hands.

By the time I finish writing this comment (yours is 10 minutes old), someone will probably have vibe-coded one.

Also feels like an easy feature for someone like Suno to add, to help subscription retention.

But something like NotebookLM emphasizing subtle mnemonic devices set to music...


I believe they were drawing a parallel to oil commoditization, but that's as far as I got.

True, but when your cache configuration has exactly two TTLs and modalities, I don't think it's off base to expect them to test what happens in the cache hit/miss scenarios for each of those.
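The point is that the case matrix here is tiny. A minimal sketch in Python of exhaustively enumerating it (the TTL values, function names, and "hit/miss" behavior are all hypothetical, not from any real caching API):

```python
import itertools

# Hypothetical: two TTL tiers and two cache outcomes form a four-case matrix.
TTLS = (300, 3600)          # e.g. a short and a long TTL, in seconds
OUTCOMES = ("hit", "miss")

def expected_behavior(ttl, outcome):
    """Stand-in for the system under test: on a hit we serve the cached
    value; on a miss we recompute and store with the given TTL."""
    action = "serve_cached" if outcome == "hit" else "recompute_and_store"
    return (action, ttl)

# Exhaustively cover every TTL x hit/miss combination -- only 4 cases.
cases = list(itertools.product(TTLS, OUTCOMES))
assert len(cases) == 4
for ttl, outcome in cases:
    action, stored_ttl = expected_behavior(ttl, outcome)
    assert stored_ttl == ttl
```

Four cases is small enough that there's no excuse not to parametrize a test over all of them.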

(I write this as someone who likes Claude Code, if that matters.)


Overhiring wasn't a single decision.

Every low-effort, thought-free comment like this further discourages people from engaging here on submissions about their employer.

Please don't.

