Anthropic is the embodiment of bullshitting to me.
I read Cialdini many decades ago and I am bored by Anthropic.
OpenAI is very clever. With the advent of Claude, OpenAI disappeared from the headlines. Who or what was this Sam again that everyone was talking about a year ago?
OpenAI has a massive user advantage so that they can simply follow Anthropic’s release cycle to ridicule them.
I think it is really brutal for Anthropic how easily they are getting passed by OpenAI, and it is getting worse for Anthropic with every new GPT version.
Who's Sam again? Oh, that person whose house was molotov'd last week? Or the person who had an exposé written in The New Yorker calling him a sociopath? I forget.
AGI is not a fixed point but a hurdle to be cleared, a continuous spectrum.
We already have different GPT versions, i.e. tiers. The Gaussian spans whatever range you pick: GPT-4.5 up to now, or later.
Claude Sonnet and Opus, as well as maximum context window, are tiers, i.e. different levels of almost-AGI.
The main problem will be when AGI looks back on us, or meta-reflection hits societies. Woke fought IQ-based correlations in intellectual performance tasks. A fool with a tool is still a fool. Can you blame AGI for dumb mistakes? Not really.
Scapegoating an AGI is going to be brutal, because it laughs off these PsyOps and easily proves you wrong, like a body cam.
AGI is extreme leverage.
There is a reason why math categorically rules out certain IQ ranges the higher you go in complexity.
I trust Google more than any government with my data. One needs security to survive the other couldn’t care less.
Google selling data? So far no one has come to blackmail me over certain dispositions, while the other does as it pleases: IRS, foreign governments, social security, whatever.
Google can be sued while the other gives itself a pass.
Who is the baddie?
In Germany the administration put massive duties on IT providers and added punitive damages as a looming consequence.
Fast forward, and the government, with its "Ha, we are so digital!" and "Europe is better than the US in CS!", suddenly has to swallow some brutal medicine, I guess.
I stick to my guns: Silicon Valley and especially Google is art regarding code and CS evolution. Same for FAANG etc.
EU is hubris to say the least.
Every time someone says “Let’s build our own Google/Cloud/…” a penguin dies.
E-invoicing will be a brutal boomerang, XRechnung the greatest backdoor of all time.
I don't understand the downvotes. Literally every single German email provider took like 5 years to implement 2FA. Even now there are lots of security issues with many German providers that claim privacy. Even the so-called De-Mail was a sham. Still, somehow people assume FAANG is crap at data security. (No, I am not demanding privacy from ANY multinational company.)
I think the forensic work looks at the wrong aspects. If this guy was really this monster, created and protected by intelligence until he wasn't anymore, might these traces be intentional?
More compelling are possible aliases. That's why no dirt could be found so far, except for the obvious things.
Why are there only handshake contracts in the underworld? Try calling the police. That's why your life was always your final payment.
For some things to happen there needs to be only the spoken word.
Drug dealers, for example, have their lower-ranking foot soldiers do the actual talking over the phone if necessary, because of voice recognition. You hear, say, two people talking with many pauses in between. Classy.
Buying drugs? "I am just some dude who was told to wait here and lead you to this place." Same for the money. It is tough, but if one of these foot soldiers gets caught, they can be easily replaced and really don't know details. That's why the drug business became so violent: more people involved, more people to punish in the worst case, more people tempted.
Fascinating business, but only from the outside.
Same here. Either E wasn't the overlord he is imagined as, or he was. Covert operations are by their very nature covert, not overt.
Judging through a behavioral scientist's lens, this is pretty exciting.
Don't get me wrong, this is not applause; it is simply fascinating to look at through a factual lens.
And maybe there will be more questions than answers. I bet it is going to be funny when there is no clear picture in the end. What are high performers, low performers anyway? There are many pieces missing. I, for example, do a lot of visuals in a notebook with a pencil. To this day I find Miro etc. distracting, too distracting for my creativity. Handwriting is different from typing. I am way faster in the latter case and associate through everything until I lose track of the main thing. Not with notes on paper. I utilize this fact and don't pick one over the other, but bloat is the result of doing everything virtually.
So what would my keystrokes look like then? I don't know. Highly efficient, maybe, but lots and lots of gaps without hitting the keys.
Low performers? I was overseeing over 500 engineers, did over 400 interviews, built departments from the ground up, watched people do work, and helped them work better while having more fun via in-house developed tools.
So I often think in Gaussians, and low performers (a term I despise, but it is used for simplicity) aren't really doing nothing; in fact they work a lot, it is just the content or method that is so bad.
My best devs put a lot of consideration into architecture and communication, because I trained them. They made key decisions and helped teams get better.
The industrious low performers complained that they rarely seemed to be doing "the work" on their PCs. Well, well.
So, would I feel comfortable? No. And don't do unto others, as the saying goes.
But if there are no consequences, just data that cannot be tied to a worker, or, if it can, can only be used to benefit them, I would happily take part in such data gathering, because we all do personal optimization and I am curious what the data "says" versus subjective feeling.
On the other hand, tracking might be inevitable. Hear me out: if these people are working under NDAs etc., leakage is monitored anyway, make no mistake. So it sounds like closing a gap.
Tough, very tough.
I tend to say no one gets ousted in corporate companies for their mistakes, but by their foes. So data is one thing, the one stabbing you another.
Vercel did a great job with NextJS and supports quite a few OSS projects.
But even before AI they had some serious struggles according to long time users.
With the introduction of the deployment platform, NextJS appeared to have advantages when deployed there.
What I can say is that Next has some weird things going on under the hood, the kind most senior coders know as "it works, no one knows why, don't touch these 1,000 LoC here."
Build and runtime settings are a mess. Pre-building a Docker image on a local machine and deploying it on another turned out to be its Achilles' heel. Settings are not prioritized as documented, and different settings in one area lead to changes in default settings somewhere else. React server components played a role.
In other words: I sense that while it is incredibly useful, there might be more to come.
It ain't easy for them. V16 was a rewrite that was supposedly API-stable; I am not sure about that.
I see your point but reconsider: we will and need to see. Time will tell and this is simply economics: useful? Yes, no.
I became totally indifferent after examining my spending habits for unnecessary stuff, after watching world championships for niche sports. For some this is a calling, for others waste. It is a numbers game then.
We need to flip the script. AI is trying to do marketing: adding "illegal usage will lead to X" is a gateway to spark curiosity. There is this saying that censoring games for young adults ensures they will buy them like crazy, circumventing the restrictions, because danger is cool.
There is nothing that cannot do harm. Knives, cars, alcohol, drugs. A society needs to balance risks and benefits. Words can be used to do harm, email, anything; it depends on the intention and its type.
Processing power, not training. The larger the scene in 2D, the more you need to compute. The resolution itself is not flexible. Imagine painting a white canvas: it is still a pixel-per-pixel algorithm that costs GPU power in the model, while being the easiest thing to do without it.
You can create larger images by creating separate parts you recombine. But they may not perfectly match their borders.
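The recombination idea above is usually done with overlapping tiles that are cross-faded at the seam, so the border mismatch is smeared out instead of showing as a hard edge. A minimal numpy sketch (the name `blend_tiles` and the linear feathering are my own illustration, not any particular library's API):

```python
import numpy as np

def blend_tiles(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Stitch two image tiles horizontally, cross-fading the overlapping strip.

    left, right: (H, W, C) float arrays; the last `overlap` columns of `left`
    are assumed to depict the same region as the first `overlap` columns of
    `right` (e.g. two generations that share an overlapping crop).
    """
    # Linear feather: the left tile's weight goes 1 -> 0 across the seam.
    ramp = np.linspace(1.0, 0.0, overlap)[None, :, None]
    seam = left[:, -overlap:] * ramp + right[:, :overlap] * (1.0 - ramp)
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)

# Two 64x64 tiles with a 16px overlap stitch into one 64x112 image.
a = np.ones((64, 64, 3)) * 0.2
b = np.ones((64, 64, 3)) * 0.8
out = blend_tiles(a, b, overlap=16)
print(out.shape)  # (64, 112, 3)
```

This hides the seam in pixel space, but as noted, the tiles were still generated independently, so content can disagree across the border in ways no blend can fix.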
It is a Landau-notation (complexity) thing, not a training thing. The idea of these models is to work on the unknown.
It depends on the model. Diffusion models, which are among the more popular approaches, are typically trained at a specific image resolution.
For example, SDXL was trained on 1MP images, which is why if you try to generate images much larger than 1024×1024 without using techniques like high-res fixes or image-to-image on specific regions, you quickly end up with Cthulhu nightmare fuel.
Not the best argument.
Also there is nothing without dependencies. Loose coupling means coupling.