Hacker News | new | past | comments | ask | show | jobs | submit | nxobject's comments

In this case, what would an internal floppy drive mean? The last Macs with floppy drives (Old World G3s, I think) used a custom Apple controller, integrated into the chipset, with a bespoke 20-pin cable.

Even on the Old World G3s, Mac OS X never had floppy drive support; there was a driver someone had ported from BSD that you could install.

A majority-conservative Supreme Court is on an originalism kick, so we're very much stuck in "when the sacred texts were written" territory.

Only when it's the way they want to rule.

I never expected SU to come up in HN! Unfortunately, it wouldn't be the best reference...

Did something happen with SU?

Oh, no – I meant Spinel and her tragic past.

Or even just a compiler to C piggybacking off <objc/runtime.h>; I think Apple still spends a lot of effort making even dynamic class definition work fast. I haven't touched Cocoa/Foundation in a while, but I think (emphasis on think) a lot of proxy patterns in Apple frameworks still need this functionality.

This is slightly more niche, but I know it was pretty popular with users of AutoCAD as well.

But I would imagine that, for DTP, rasterizing PostScript on the printer would make things a lot easier.


Especially for a nation-state that's already hoovering up data broker products.

More details on the leaked information from El Reg, especially now that the British government has (laudably) been more transparent: https://www.theregister.com/2026/04/23/500k_biobank_voluntee...

"The charity did not specify the types of data that were included, but Murray stated in the Commons that several markers were included in the listings:

- Gender

- Age

- Month and year of birth

- Assessment center data

- Attendance dates

- Socioeconomic status

- Lifestyle habits

- Measures from biological samples related to haematology, biology, and chemistry

- Sleep, diet, work environment, mental health, and health outcomes data."


I'm curious – in which context? I've worked on NIH-funded grants in academic medical centers, across the research lifecycle, and I've seen both how stringently data management plans are vetted and how annual IRB certification drills the basics into even the oldest tech-phobic investigators.

That being said, I may be as pessimistic as you are: I don't think people right now grasp that current standards for deidentification may no longer be enough, or how easy, automated deanonymization changes everything. Unfortunately, cuts to federal science agencies mean that I doubt any well-informed guidance will come soon.


As a biostatistician who's touched epidemiological studies, I'd argue that losing the trust of participants and the public is one of the biggest threats to the viability of the whole research enterprise, and it's reckless to jeopardize that as well. Conversely, this dataset will be mined for at least 30-50 years - there is an effectively infinite number of questions that can be asked of this data. Given that timescale, I think a little delay here is acceptable.

I like their idea of an audit log of analysis runs -- beyond transparency, I'm sure it'll help future researchers know how much iteration is needed to work with the messiness of medical records...

I'm also amused (in a good way) by the fact that SAS isn't supported as an analysis platform...


It's certainly an interesting idea; I remember he was on a few podcasts talking about it. I might submit it here to see if it gets some conversation going.
