> It could be that OpenAI is subsidising their models by _fifty times_. Do you really think they are doing that?
Possibly. I don't know.
It might be infeasible to raise prices that much every time a new model was released.
Any assumption made here is based on vibes. I see no reason to drop my skepticism.
> Its easier to just admit that technological advances helped decrease the cost instead of coming up with more complicated reasons like VC funding, subsidies and so on.
They raised an absurd amount of cash, and still bleed money to an absurd degree.
VCs make money when they exit. OpenAI only needs to "make sense" until an IPO happens. Once private investors have their exit, the markets can be left to handle the resulting dumpster fire.
> For instance take Deepseek and other opensource models - even they have reduced their costs by a huge margin.
Chinese companies are very opaque. I don't pretend to have insight into it.
Is the company behind Deepseek profitable?
> What explanation is there for opensource models?
What do open-source models have to do with inference?
Your argument is that training is expensive but inference is cheap (something I see no evidence of). Why would a company give away the expensive part of the work?
And proprietary formats (Excel, PowerPoint, PDF) are not designed for autonomous modification.
The result: agents operate with partial visibility.
---
## The Idea
Selfware defines a minimal file protocol where:
* A file contains its own canonical data source
* Views are projections, not authorities
* Structured memory is first-class
* Execution logic can be embedded
* Collaboration is optional but native
* Everything runs locally by default
* Only `content/` is writable canonical data
* Views are disposable projections
* Memory is structured and inspectable
* No silent remote mutations
* Git-compatible history
* Optional: contains agent-executable skills or plugins
The goal is not to replace Markdown.
The goal is to make a file capable of carrying:
* Data
* State
* Context
* Collaboration metadata
* Execution scaffolding
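To make this concrete, here is one way such a file could be laid out on disk. This is a sketch under the stated invariants, not a spec: the `.sw` extension and every directory name other than `content/` are illustrative assumptions.

```
note.sw/
  content/          # the only writable canonical data
    body.md
  views/            # disposable projections; safe to delete and regenerate
    outline.html
  memory/           # structured, inspectable state
    state.json
  skills/           # optional agent-executable skills or plugins
    summarize.lua
```

Because everything is plain files in a directory, Git-compatible history and local-first operation fall out for free.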
React and RSC are not dope; they are a kludge, and the only reason you're blind to that fact is that you're React-brained and have no experience with modern alternatives that are actually good, like SvelteKit or SolidStart.
Even people recording short-form video are doing it. They're reading out their ChatGPT-psychosis-induced fever dreams using scripts written by ChatGPT.
The 7-part tweets that build to a slop crescendo are doing my head in.
The solution might be some combination of:
1. Leave the social media sites where the slop is irredeemable.
2. Unfollow everyone; reset your algorithm.
3. Be aggressive about who you add back in. Make sure they're humans having high-quality discussions.
4. Be aggressive about who you block. Lower the bar on blocking: one and done. No chances, no wait-and-see.
5. Move to smaller communities of real humans.
None of this has worked for me yet :joy: I'm still swimming in a vast sea of slopity slop slop. Dead internet theory appears to be playing out in front of us.
Great, that means it's working. I hope every single country in the world builds competent IT infrastructure. More competition will help us develop more and better technology, give us more alternatives, and overall increase the quality and resilience of technology globally. The current effective monopoly of US cloud providers has caused an unnecessary hard convergence that prevents innovation, is dangerous to privacy and security, and unnecessarily hinders national sovereignty.
I really enjoyed reading both posts. Thanks for sharing!
I, like many others, have written my own "claw" implementation, but it's stagnated a bit. I use it through Slack, but the idea of journaling with it is compelling. Especially when combined with the recent "two sentence" journaling article[1] that floated through HN not too long ago.
Posted elsewhere but will copy here. Been doing this for a while.
---
1. Get Tailscale (free) and join it on both devices.
2. Install tmux on the computer.
3. Get an iOS terminal (Echo / Termius).
4. Enable "Remote Login" if on a Mac (disable it on public Wi-Fi).
5. mosh/ssh into the computer.

Now you can run tmux, then claude / codex / whatever, on either device and reconnect freely via `tmux ls` and `tmux attach -t <id>`.
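Condensed into a shell transcript. The hostname and session name here are assumptions; substitute your own Tailscale machine name.

```shell
# From the phone, connect over the tailnet.
# mosh tolerates network switches and sleep better than plain ssh.
mosh user@my-mac.tailnet-name.ts.net

# On the computer: start a named session the first time...
tmux new -s work

# ...and from then on, from either device:
tmux ls              # list running sessions
tmux attach -t work  # reattach; your claude / codex run is still there
```

The key point is that the long-running agent lives inside tmux on the computer, so dropping the SSH/mosh connection never kills it.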
The employer usually does not know about your family structure
When I need to let people go from my company because I need to downsize for whatever reason, I need to choose those who would be least affected. That means I need to know who is single, who is married, and who has children. If I let go the one who is a parent instead of someone who is single, they might sue me, because it would cause them undue hardship if, say, finding a new job forced them to move, which would affect the other parent's job, the kids' school, and their whole social life.
Sometimes this can't be avoided. If all my employees have families and children, then I am stuck. But if there is a choice, then the choice must be the person who is more likely to recover, or who has fewer dependents. The needs of the many outweigh the needs of the few.
Long story short, I have to know the family structure to make that choice.
> With the amount of talent working on this problem, you would be unwise to bet against it being solved, for any reasonable definition of solved.
I'm honestly not sure how this issue could be solved. Fundamentally, LLMs are next-token (or n-tokens-forward) predictors. They have no way, in and of themselves, to ground their token generations, and given that token n depends on all of tokens 1…n−1, small discrepancies can easily spiral out of control.
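The spiraling effect is easy to see with a toy calculation: if each token independently has even a small chance ε of being wrong, the probability of a fully error-free n-token generation decays exponentially. (The independence assumption is a simplification of how autoregressive errors actually propagate, but the direction of the effect holds.)

```python
def p_error_free(epsilon: float, n: int) -> float:
    """Probability that all n tokens are generated without error,
    assuming each token independently errs with probability epsilon."""
    return (1 - epsilon) ** n

# A 1% per-token error rate leaves only ~37% of 100-token outputs
# error-free, and under 1% of 500-token outputs.
print(round(p_error_free(0.01, 100), 2))  # → 0.37
```

And because token n conditions on tokens 1…n−1, a single early error doesn't just count once: it perturbs the context for everything generated after it.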
In addition to what others have pointed out, many of these aren't actually missing from traditional dictionaries: they're just inflected differently. So your example lists phrases like "operating systems", "immune systems" and "solar systems" as missing from traditional dictionaries, but at least the online OED and M-W have "operating system", "immune system" and "solar system" in them. It's just that your script is apparently listing the plural as a separate phrase.
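A minimal way for such a script to catch this would be to normalize the head noun of each phrase before the dictionary lookup. The function below is my own naive sketch, not anything from the original script; it handles only plain "-s" and "-ies" plurals, and a real pipeline would use a proper lemmatizer from an NLP library.

```python
def singularize_phrase(phrase: str) -> str:
    """Naively singularize the last word of a (multi)word phrase."""
    words = phrase.split()
    head = words[-1]
    if head.endswith("ies"):
        head = head[:-3] + "y"          # "galaxies" -> "galaxy"
    elif head.endswith("s") and not head.endswith("ss"):
        head = head[:-1]                # "systems" -> "system"
    return " ".join(words[:-1] + [head])

print(singularize_phrase("operating systems"))  # → operating system
print(singularize_phrase("immune systems"))     # → immune system
```

Folding phrases through something like this before checking the OED or M-W would remove most of the false "missing" entries that are really just inflected forms.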
On languages other than English: in general, different languages do word division very differently. At least in German and Dutch, many of those phrasal verbs are separable, meaning that they are one word in the infinitive but are multiple words in the present tense. So for example, where in English you would say "I log in to the website", in Dutch it would be "Ik log in op de website". "Log in" is two words in both cases, but in Dutch it's the separated form of the single-word separable verb inloggen ("I must log in now" = "Ik moet nu inloggen"). The verb is indeed separable in that the two words often don't end up next to each other: "I log in quickly" = "Ik log snel in".
Dutch, like German, has lots of compounds. But there are also agglutinative languages, which have even more complex compound words, perhaps comprising a whole sentence in another language. E.g. (from Wikipedia) Turkish "evlerinizdenmiş" = "(he/she/it) was (apparently/said to be) from your houses", or Plains Cree "paehtāwāēwesew" = "he is heard by higher powers"; and these aren't corner cases, that's how the language works.
Hah! I just noticed something - in the video at the top of the page, the female technician assembling servers is wearing a pink smock with Chinese text on it, right above the ESD grounding lead. She features in a still photo down below, but they've digitally removed the Chinese. I think it says "富士康科技" for "Foxconn Technology." Funny that they would go out of their way to hide the depth of their partnership.
>Is a safe model one that refuses to produce code for a weapons system? Well.. does a PID controller count? I can use that to keep a gun pointed at a target or i can use that to prevent a baby rocker from falling over.
I've been using LLMs for some cyber-y tasks, and this is exactly how it ends up going. You can't ask "hack this IP" (for some models), but with more discrete tasks it has no such qualms.
This is just a sad comment. Please stick to the merits and substance instead of reaching for bizarre speculation about my motives. And no, I do not work for the Church, nor have I ever. I am an observer with above-average knowledge of what is occurring in the Church. The idea that I am necessarily less objective for that reason than an ignorant outsider is ridiculous and fallacious.
And for your information, my motive is correctness. I get annoyed by confidently expressed, ignorant claims posing as knowledge, especially when they are unfair to the accused party.
> So.. you basically agree, you just don't like the wording
No. I disagree with your reasoning, which I took the time to explain in detail and which you seem to have completely ignored.
Depends on jurisdiction. In the UK it's not an absolute defence, you still have to prove it's an opinion a "reasonable person" could come to based on facts.
> The concept itself doesn’t even make sense if you fully understand the intersectional scope of technology and society
Society's demands are the things that are unsafe, not the technologies themselves.
What has Tesla accomplished lately? I mean, within the last decade?
They certainly have accomplished amazing things. They had a lead that even five years ago was considered insurmountable. But they've made at best incremental progress, the kind made by mediocre engineers. The only novelty was the Cybertruck, which didn't live up to expectations and didn't open up any new domains.
SpaceX is still advancing, though even that is getting a bit of an asterisk if they can't get Starship to fulfill its promise.
There's a reason the acronym TACO exists: every time Trump goes after the really deep money, the backlash forces him to change his tune. If the tariffs had disproportionately affected only the rich, we would have been done with them within a week; instead, the most affected individuals and companies just got carve-outs.