azhenley's comments | Hacker News

I wish there was a writeup about the emulator. I did find the documentation for the language: https://spectre-docs.pages.dev

Over 900 commits and 400k loc to Spectre in less than 3 weeks has me thinking this is all AI.


I'm not really big on blogging, but I'll write a summary of what I did, since this post seems to have gained some attention.

This is a rewrite of an emulator I wrote in Nim called Cemu; you can find the original over at https://gitlab.com/navid-m/cemu. It adds several features to the original version, including CPU speed changing with the Y and H keys and a better control mapping, since the CHIP-8 ASDF controls were cumbersome for game ROMs like Space Invaders. It was also done as practice to test the language on more practical applications involving external C libraries, in this case SDL2.
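For anyone curious, here is a minimal sketch of roughly what that speed control amounts to, written in C with SDL2 (key handling only; the variable name and step size are illustrative, not the actual Cemu/Spectre source):

    /* Illustrative only: adjust emulated CPU speed with Y/H.
       The main loop would run cycles_per_frame CHIP-8 instructions
       per 60 Hz frame before drawing. */
    #include <SDL2/SDL.h>

    static int cycles_per_frame = 10;   /* hypothetical default speed */

    static void handle_key(SDL_Keycode key) {
        if (key == SDLK_y && cycles_per_frame < 1000)
            cycles_per_frame += 5;      /* speed up */
        if (key == SDLK_h && cycles_per_frame > 5)
            cycles_per_frame -= 5;      /* slow down */
    }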

The rewrite was a good stress test for a few areas of the compiler I hadn’t exercised before, like FFI ergonomics, global handling and tooling ergonomics. Most of the core emulator logic stayed fairly close to the original, though the surrounding infrastructure (input handling, rendering loop, and timing) is cleaner and more robust now.

Overall, it’s still a fairly small project, but it served its purpose well in validating that the language can handle something more substantial, while also exposing a few rough edges that need smoothing out.

I'll address the AI claim too, since you added that sneakily after I responded. I don't know where you got the number of lines from, but chances are you're taking into account the sxc.ssa file, which is just the auto-generated QBE SSA bootstrap file from the compiler, used by the install.sh script so that people can install the compiler from source without needing a prebuilt binary for their platform. No, it's clearly not AI, and if you're hellbent on saying it is, I challenge you to write any Spectre code with AI; the AI will be useless.


I'm not sure where the grandparent got 400k lines from. Perhaps the total number of lines modified? I'm seeing about 115k lines, with 88k from sxc.ssa as you mentioned. Still, 27k lines of code in <3 weeks is a lot, so it isn't terribly surprising that GP would assume AI in this day and age.

On that topic, I don't think GP was suggesting that you wrote the CHIP-8 emulator via AI, but rather that the Spectre compiler (the original Rust version) was written or augmented with AI.

Looking at the very first commits, right after you added the initial source code, there are a bunch of commits with many lines changed within the span of several minutes. Assuming it's not AI, I'm curious about that. Do you really just write code that quickly? Or was this a project not originally tracked by git, and you made a bunch of commits to set up some form of history for your recent changes?


His Spectre language is fully bootstrapped and written in Spectre, so his suggestion that you try to get an AI to write Spectre stands for both the CHIP-8 emulator and the language itself. He is very obviously a prolific programmer with a very keen focus.

Unfortunately we’re in the phase where even if you write things yourself, be it prose with em dashes or code with velocity, you’re given a demerit. And, if you are using AI, the work is still treated as less valuable, even if it brings value.

> And, if you are using AI, the work is still treated as less valuable, even if it brings value.

This is true; some people do react this way. But I've noticed it's far more pronounced when people try to hide the AI usage for code.

AI prose is always looked down on; I feel dirty after having read it.


Haters have always been hating.

There's nothing new under the sun.

They just have different buzzwords to hate.


I'm curious, did you write a new project in a whole new language because you grew tired of Nim somehow?

Asking as someone considering Nim as a C/C++ replacement.


I spoke with a bunch of profs about how they were assessing students in the age of AI:

https://austinhenley.com/blog/aihomework.html


Reminds me of Paper Website from the Tiny Projects series, discussed back in 2021.

https://daily.tinyprojects.dev/paper_website

https://news.ycombinator.com/item?id=29550812


I actually quite like this idea, especially if you could have an automated ingest system. It could be a good way to let isolated places have a voice online, even if it isn't necessarily very high speed. It's almost like http-over-post or something. You could even have a comments section, and post the comments to the website author

It was started 7 years ago.


I was trying to compare the two. At first glance, MonoGame has far more stars and recent commits. Or is it just in maintenance mode?


FNA is for porting/supporting XNA games with minimal changes.

MonoGame is trying to evolve XNA in small ways.

For a new project I would pick MonoGame.


I believe FNA is trying to be more loyal to the original XNA, while MonoGame tends to introduce new features.

I've been happy with MonoGame when I used it in the past. I'm pretty sure Celeste was made with FNA.


You might be mistaken; the MonoGame GitHub README cites Celeste as an example made with it.


Ah, weird. Did a bit of searching, and it looks like maybe it targeted multiple frameworks with the XNA API, including XNA itself.

https://www.pcgamingwiki.com/wiki/Celeste

https://celeste.ink/wiki/Version_history


Several games used to target MonoGame for consoles but XNA for PC, and later FNA for PC.

MonoGame on PC used to be somewhat buggy in my hobbyist experience.


Oh interesting. I never hit any walls personally but I guess I didn't push that hard.


XNA was originally designed for Xbox Live Arcade indie titles.


It was a convention to denote a variation or version. Not sure how the trend started though.


A few of my games from 2009 are on here! Very cool.


You deduct the expenses you paid, not the income you hoped to earn.


> we use 256-bit integers in our hot paths and go up to 564 bits for certain edge cases.

Why 564 bits? That’s 70.5 bytes.


I wonder if it was 5*64 bits that got mangled in editing. If 256 bits is sufficient for most of their code, I could see there being corner cases that need a few more bits but moving to 512 bits would be overkill.


Maybe it's a typo for 512. I'm not even sure how you would achieve 564 in this context.


It's actually not a typo. Our "real" internal code starts with integer bounds on the inputs (say 2^26) and then computes, for each subexpression, how many bits are actually needed to represent it exactly. That can even lead to fractional bits (as in "a + b + c"). The generated code then rounds up to the next 64-bit multiple.
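To make the fractional-bits point concrete, here's a back-of-envelope version in C (the 2^26 input bound is from above; the arithmetic is purely illustrative, not our actual generated code):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* a, b, c each bounded by 2^26, so a + b + c <= 3 * 2^26 */
        double bound = 3.0 * (double)(1u << 26);
        double bits  = log2(bound);            /* ~27.58 "fractional" bits */
        int words    = (int)ceil(bits / 64.0); /* next 64-bit multiple -> 1 */
        printf("%.2f bits -> %d x 64-bit word(s)\n", bits, words);
        return 0;
    }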


Several years ago, I did write that every programmer should attempt to write a browser: https://austinhenley.com/blog/morechallengingprojects.html

:)

