This is a pretty banal comment at this point. "Open source" is the term used in the LLM community; it's common and understood. Nobody is going to release petabytes of copyrighted training data, so the distinction between open source and open weights is a rather pointless one.
"Open source" as a term has evolved due to its success. It wasn't some malicious attempt at redefining things from the technical elite. It was a natural shifting of language, as happens with all words, as it entered more common usage.
It's entirely reasonable that this colloquial understanding would be applied to new categories such as AI models. I'm sure it'll be applied to many other things that don't fit the OSD either. That's just language for you.
Yes, it's not bad, although it's not meant to be a chatbot. Post-training is limited, so it won't feel as smooth as top-of-the-line models, of course. The number of supported languages is mind-boggling.
The focus was on open data, language coverage, and auditability.
Their loss function is fancy; I'm not sure about its effects.
While I agree with the general sentiment that this requires monitoring and study, the abstract is _very_ tendentious: it presents multiple hypotheses as facts and doesn't provide any measurements or alternatives to the authors' preferred solution.
This isn't a scientific study; it's a militant manifesto.
Yeah, the whole "rationalist" movement is full of those lying fks who use a thin veneer of fallacious logic and self-aggrandising discourse to rationalise their hoarding of resources and bottomless greed. They're very well established in the Bay Area and the AI world.
The movement is consistently aligned with tech-bro interests. Its philosophical foundation is interesting, but the movement itself is quite problematic.
The poor marijuana spider tried really hard