1. AI-generated charting.
2. The existence of a reliable record of the visit.
I am skeptical of the first in some cases (e.g., bias), but strongly in favor of the second.
My father is 80 and has Parkinson’s. He routinely leaves appointments unsure of what the doctor said, what changed, or what he is supposed to do next. Even when I attend with him, we sometimes disagree afterward about what exactly was recommended.
This happens with pediatric appointments too. My wife and I occasionally remember instructions differently: medication timing, symptoms to watch for, when to call back, whether something was “normal” or needed follow-up.
That is a care quality problem, not just a convenience problem.
The risks are real: privacy, consent, retention, training use, liability, and automation bias. But those argue for strict controls, not for a blanket refusal. Make it opt-in, give the patient access, prohibit training without explicit consent, keep retention short, and require clear auditability.
I do not want opaque AI quietly rewriting the medical record. But I also do not think “everyone relies on memory after a stressful 12-minute appointment” is some gold standard we should preserve.
Yes. Recording was great when I had a major surgery last year and had a bazillion questions for the surgeon. But I don't always remember to do it. My parents definitely don't even think about it.
Surprised to see SWE-Bench Pro show only a slight improvement (57.7% -> 58.6%) while Opus 4.7 hit 64.3%. I wonder what Anthropic is doing to achieve higher scores on this - and also what makes this benchmark particularly hard compared to Terminal Bench (where 5.5 seemed to make a big jump).
There's an asterisk right below that table stating that:
> *Anthropic reported signs of memorization on a subset of problems
And Anthropic's Opus 4.7 release page also states:
> SWE-bench Verified, Pro, and Multilingual: Our memorization screens flag a subset of problems in these SWE-bench evals. Excluding any problems that show signs of memorization, Opus 4.7’s margin of improvement over Opus 4.6 holds.
I’m working on https://chess67.com, software for running over-the-board chess (clubs, coaches, tournaments).
I started this after volunteering at my kid’s tournaments and seeing how fragmented things are:
• registrations in Google Forms
• payments via Venmo/Zelle
• pairings in SwissSys/WinTD
• communication across email and text
Chess67 aims to unify that:
• coaches can sell lessons and manage scheduling and payments
• clubs can run events and communicate with players
• tournaments can handle registrations, with pairing and USCF submission in progress
Still early. The main challenge is not building features but matching existing workflows, especially Swiss pairings, which are more nuanced than they look.
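To give a sense of where the nuance comes from, here is a deliberately naive one-round Swiss pairing sketch in Python: sort players by score, then greedily pair each player with the highest-ranked opponent they have not already faced. This is an illustration only, not Chess67's actual pairing logic; real engines like SwissSys/WinTD also handle color allocation, score-group floats, and bye assignment, which is exactly where the greedy approach breaks down.

```python
def pair_round(standings, past_games):
    """One naive Swiss round.

    standings: list of player ids, already ordered best score first.
    past_games: set of frozenset({a, b}) pairs that have already played.
    Returns (pairings, bye).
    """
    pool, pairings = list(standings), []
    while len(pool) > 1:
        a = pool.pop(0)  # highest-ranked unpaired player
        for i, b in enumerate(pool):
            if frozenset((a, b)) not in past_games:
                pairings.append((a, b))
                pool.pop(i)
                break
        else:
            # Everyone left has already played `a`: forced rematch.
            # (Real pairing rules would instead backtrack and reshuffle.)
            pairings.append((a, pool.pop(0)))
    bye = pool[0] if pool else None  # odd player count gets a bye
    return pairings, bye
```

The greedy version can paint itself into a corner that a backtracking pairing engine would avoid, which is part of why "more nuanced than they look" is an understatement.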
Lichess has been an absolutely fantastic platform. As a chess enthusiast and as an engineer on a chess website some others and I are building (shameless plug, https://chess67.com), it is the only platform I have worked with where so much is so easily accessible through its APIs.
Their OAuth requires no special app registration and no OAuth secrets - the only platform I have seen that does that.
I do wonder how this opens up the ability for people to integrate Lichess’ player pool into their own apps.
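For context, secret-less OAuth like this is usually the PKCE flow (RFC 7636): instead of a pre-registered client secret, the client generates a one-time verifier and sends its hash as a challenge, then proves possession of the verifier when exchanging the authorization code. A minimal Python sketch of the verifier/challenge step (the authorization and token endpoints themselves are provider-specific and not shown):

```python
import base64
import hashlib
import secrets


def pkce_pair() -> tuple[str, str]:
    """Generate a PKCE (code_verifier, code_challenge) pair.

    The verifier is 32 random bytes, base64url-encoded without padding;
    the challenge is the base64url-encoded SHA-256 of the verifier
    (the "S256" challenge method from RFC 7636).
    """
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

Because anyone can generate these locally, no central app registry is needed; the trade-off is that the provider cannot distinguish client identities the way a secret-based flow can.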
Yeah that’s a really interesting point. The openness is mainly what makes Lichess so powerful for organizers and developers.
chess67 looks interesting from my perspective as a coach and club organizer, especially for running tournaments and gaining exposure for my coaching and events.
But I do wonder where the boundary is long term. If more tools start tapping into the player pool, there’s probably a balance between staying open and preventing people from just free riding on the Lichess ecosystem.
Either way, it’s pretty unique. You don’t really see that level of accessibility elsewhere in the chess world.
Didn't we just hear predictions about this from Geoffrey a few years ago that turned out to be false? I could have sworn I heard Jensen talk about how the inverse has happened.
Don't we have more radiologists than we did five years ago?
I submitted my puzzle game Pathology (https://thinky.gg) for ARC Prize 3. Sad that I didn't hear back from the committee.
It is a simple game with simple rules that automated solvers find incredibly difficult compared to humans at a certain level of play. Solutions are easy to validate but hard to find.
I've seen the alternative where you make a tiny JPEG file (discarding the huffman and quantization tables), and use that as the placeholder. Just glue the header and tables back on, and let the browser handle JPEG decoding and stretching. It's not as small as 16 bytes, but the code for handling it is fast and simple.
The trick of using common huffman and quantization tables for multiple images has been done for a long time, notably Flash used it to make embedded JPEGs smaller (for when they were saved at the same quality level).
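A rough sketch of the split step in Python, under the assumption that all thumbnails were encoded with identical settings so their headers (DQT/DHT tables and so on) are byte-identical: walk the JPEG marker segments until the Start of Scan (SOS, 0xFFDA) segment ends, and cut there. Everything before the cut is the shared header to discard per image; everything after is the tiny per-image scan data to store. (This ignores standalone markers like restart intervals inside the header, which a production version would need to handle.)

```python
def split_jpeg(data: bytes) -> tuple[bytes, bytes]:
    """Split a baseline JPEG into (header, scan).

    `header` runs from SOI through the end of the SOS segment;
    `scan` is the entropy-coded data that follows. Reassembly is
    simply `header + scan`.
    """
    i = 2  # skip the SOI marker (FF D8), which has no length field
    while i < len(data) - 1:
        if data[i] != 0xFF:
            raise ValueError("expected marker at offset %d" % i)
        marker = data[i + 1]
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")  # includes its own 2 bytes
        if marker == 0xDA:  # SOS: scan data starts right after this segment
            cut = i + 2 + seg_len
            return data[:cut], data[cut:]
        i += 2 + seg_len
    raise ValueError("no SOS marker found")
```

The reconstruction side is then a one-liner (shared header plus stored scan bytes), and the browser's own JPEG decoder does the decoding and upscaling, which is the appeal of the approach.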
Chess67 is a platform for chess coaches, clubs and tournament organizers to manage their operations in one place. It handles registrations, payments, scheduling, rosters, lessons, memberships, and tournament files (TRF/DBF) while cutting out the usual mix of spreadsheets and scattered tools. I’m focused on solving the practical workflow problems coaches deal with every day and making it easier for local chess communities to run events smoothly.