Hacker News | michalc's comments

> the rest will soon follow

If you’re looking for requests ;-), I would love an ECS (and specifically Fargate) emulator that actually ran Docker containers locally as though they were in ECS


That's a great idea I've had floating around in my head too!

Submitted to Unicode yesterday (so please be gentle!)

I think I can understand why this wasn’t addressed for so long: in the vast majority of cases if your db is exposed on a network level to untrusted sources, then you probably have far bigger problems?


It's also very tricky to do given the current architecture on the server side, where a single-threaded process handles the connection and uses (for all intents and purposes) synchronous IO.

In such a scenario, listening (and acting) on cancellation requests on the same connection becomes very hard, so fixing this goes way beyond "just".


So my definition of big data was data so big it cannot be processed on a single machine in a reasonable amount of time.

I guess they’re using a different definition?


I think it's partly tongue in cheek: when "big data" was overhyped, everyone claimed they were working with big data, or tried to sell expensive solutions for it, and some reasonable minds spoke up and pointed out that a standard laptop could process more "big data" than people thought.


> For our first experiment, we used ClickBench, an analytical database benchmark. ClickBench has 43 queries that focus on aggregation and filtering operations. The operations run on a single wide table with 100M rows, which uses about 14 GB when serialized to Parquet and 75 GB when stored in CSV format.

very much so…


In my former life as a soulless consultant, mid-level IT managers really liked to hear the three "V"s mentioned: Velocity, Volume, Variety


The V of Value is very important in some circles.


Computers got bigger and software got smarter.

You have phones that are faster than cloud VMs of the past. You can use bare-metal servers with up to 344 cores and 16 TB of RAM.

I used to share your definition too, but I now say that if it doesn’t open in Microsoft Excel, it’s big data.


Processing data that cannot be processed on a single machine is fundamentally a different problem than processing data that can be processed on a single machine. It's useful to have a term for that.

As you say, single machines can scale up incredibly far. That just means 16 TB datasets no longer demand big data solutions.


I get your point, but I don’t know if big data is the right term anymore.

Many people like to think they have big data, and you kinda have to agree with them if you want their money. At least in consulting.

Also you could go well beyond a 16TB dataset on a single machine. You assume that the whole uncompressed dataset has to fit in memory, but many workloads don’t need that.
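To illustrate the point about not needing the whole dataset in memory: many analytical workloads only need an aggregate, which can be folded one chunk at a time in constant memory. A minimal sketch (the chunk source and the choice of aggregate here are hypothetical, just to show the pattern):

```python
def streamed_mean(chunks):
    # Fold each chunk into running totals, so memory use is bounded by
    # the chunk size, not by the total dataset size.
    total = 0.0
    count = 0
    for chunk in chunks:
        total += sum(chunk)
        count += len(chunk)
    return total / count if count else 0.0
```

The same pattern covers sums, counts, and min/max; only aggregations that genuinely need the whole dataset at once (say, an exact median) force you to spill to disk or scale out.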

How many people in the world have such big datasets to analyse within reasonable time?

Some people say extreme data.


“Your data isn’t big” is a good working definition of big data.

Google has big data. You are not Google.


I think the definition of big is smaller than that. Mine was "too big to fit on a maxed-out laptop", effectively >8TB. Our photo collection is bigger than that, but it's not 'big data'.

Or one could define it as too big to fit on a single SSD/HDD, maybe >30TB. Still within the reach of a hobbyist, but too large to process in memory and needs special tools to work with. It doesn't have to be petabyte scale to need 'big data' tooling.


“Your data is not big” comes from this thread: https://news.ycombinator.com/item?id=7192839

8TB is a couple hundred hours of 4k RAW video assets.


This is true, but 8TB is big data if it's text.


I think they are simply referring to analytical workloads.


Hmmm... depends on the project / phase of the project?

I am particularly not a fan of doing unnecessary work/over engineering, e.g. see https://charemza.name/blog/posts/agile/over-engineering/not-..., but even I think that sometimes things _are_ worth it


Short answer is no, not as far as I am aware/can reason about it

In more detail: by my understanding there are two techniques for making zip bombs…

Firstly, nested ZIPs that leverage the fact that some unZIP programs recursively extract member files. stream-unzip doesn’t do this (although you could probably use stream-unzip as a component in a vulnerable recursive ZIP parser if you really wanted to… but that, I would argue, is not the responsibility of stream-unzip)

The second technique is overlapping member files, but this depends on them overlapping as defined by the central directory at the end of the ZIP, which stream-unzip does not use

But if you are accepting files from an untrusted source, then you should validate the size of the uncompressed data as you unZIP, which you can do along with validating any other properties of the data
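The key is to count the bytes you actually decompress, rather than trusting the (attacker-controlled) size fields in the ZIP headers. A sketch of the idea using the stdlib zipfile module as a stand-in (the cap value is a hypothetical limit; with stream-unzip you would count the uncompressed chunks it yields in the same way):

```python
import io
import zipfile

def safe_extract_sizes(zip_bytes, max_total=10 * 1024 * 1024):
    # Read each member's data in chunks, counting decompressed bytes
    # against a hard cap instead of trusting header metadata.
    total = 0
    sizes = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            with zf.open(info) as member:
                member_total = 0
                while chunk := member.read(64 * 1024):
                    member_total += len(chunk)
                    total += len(chunk)
                    if total > max_total:
                        raise ValueError("uncompressed data exceeds limit")
                sizes[info.filename] = member_total
    return sizes
```

A zip bomb lies in its headers but cannot lie about the bytes that actually come out of the decompressor, so the cap trips no matter what the metadata claims.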


> Beyond that, I've grown fond of 'sticking to the defaults' over the years.

This resonates with me! Both in terms of things I use and things I make - I want them to "just work"


> without regard for the maintenance burden 1, 2, 5, 10 years down the road.

To me software craftsmanship isn't just about the code, it's about engineering use of time.

In general you shouldn't knowingly make choices that will result in pain in the future, but if avoiding that pain increases the chance of the project not making it to the future, is that really the better option? Finding out enough information to make the judgement call between far-future pain and short-term benefits is all part of the craftsmanship.

> I don't blame agile. But I do kind of blame Agile™

(Loving the phrasing here! I think I'm right on board, especially if we're talking Scrum/Scum-ish)


> why not remind people of the purpose?

To answer this, I suspect that trying to change what certain words/phrases mean to people en masse is extremely difficult, to the point of impossibility in most cases. However, we each have the power to be clearer in the words we use so they are understood by the people we're communicating with.

> engineering quality matters

But also, this to me suggests that there is some sort of absolute definition of quality, when it's much more nuanced. Nothing is inherently "bad quality"; instead, choices have certain consequences, which may or may not happen, may or may not be acceptable in certain circumstances, and might not even be knowable until the future. This, I think, is the point I'm trying to make: there is no absolute definition of engineering quality, and I suspect the term "technical debt" all too often suggests there is.


Have to admit the lazy thing threw me, but I can see how the “doing less” I’m arguing for could be taken that way. The “less” is not about avoiding handling edge cases that are possible now, but about avoiding putting in layers of code to handle cases possible only in some future versions of the code (with some limited exceptions that I mention at the bottom of the post)

In fact, it’s crossing my mind that people might not want to be accused of being lazy, and that is a motivation to over-engineer solutions.

