I remember when the place started to go. It had been Mecca for all the components and switches and tools, and fun to visit. Then shelves were no longer full and as time went on, sported increasingly wide gaps. Toward the end, far more shelf than product. And the packages, as you said - there were always a few that were obviously previously opened, retaped sloppily, sometimes having a returned-item sticker. I don't recall if the returns were a lower price. It was depressing and I stopped going. I think I went to a closing sale but there was nothing I wanted.
I've been running something similar for a few months — a voice-first interface for Claude Code running on a local Flask server. Instead of texting from my phone, I just talk to it. It spawns agents in tmux sessions, manages context with handoff notes between sessions, and has a card display for visual output.
The remote control feature is cool but the real unlock for me was voice. Typing on a phone is a terrible interface for coding conversations. Speaking is surprisingly natural for things like "check the test output" or "what did that agent do while I was away."
The tmux crowd in this thread is right that SSH + tmux gets you 90% of the way there. But adding voice on top changes the interaction model — you stop treating it like a terminal and start treating it like a collaborator.
I think you have a misunderstanding of the term alignment. Really, you could replace "aligned" with "working" and "misaligned" with "broken".
A washing machine has one goal, to wash your clothes. A washing machine that does not wash your clothes is broken.
An AI system has some goal. A target acquisition AI system might be tasked with picking out enemies and friendlies from a camera feed. A system that does so reliably is working (aligned) a system that doesn't is broken (misaligned). There's no moral or philosophical angle necessary if your goal doesn't already include that. Aligned doesn't mean good and misaligned doesn't mean evil.
The problem comes when your goal includes moral, ethical and philosophical judgements.
False dichotomy. The market for BEVs isn't limited to BEV-only companies. Most considering one not made by Tesla are probably looking at an incumbent manufacturer instead.
Isn't there a max filepath length? Or does find not ever deal with that and just deal in terms of building its own stack of inodes or something like that?
This is something I vibecoded to learn my kid the clock. I think this is a very good use of ai coding, stuff that is for visualization and temporary learning.
This drives me nuts. Now that people have figured out the em dash, this is the number one way I spot AI text. Not even opposed to writing with AI, but sometimes it feels like they want us to spend more time reading it than they did writing it.
I understand that but there's a time and a place. Rust has nothing to do with this. 100% of the people on this site understand that this challenge can be done faster in C, or Rust, or whatever. This is a PHP challenge. Perhaps we could discuss the actual submission as opposed to immediately derailing it.
This. When I look at why my life sucks and is on hard difficulty mode, it's not because I use US tech instead of EU tech. Most people and companies have bigger economic challenges right now trying to keep the lights on, than data sovereignty and domestic alternatives. My company just had a 3rd round of layoffs and its wasn't due to lack of EU SW.
Google doesn't sell their list to you. They give it to you for free. Using their list costs them money. Pumping up numbers gains them nothing but the headache of PR issues when they get a false positive.
Spyware filters used to boast about how many domains they filter out because they wanted you to buy their filters instead of someone else's. By the time they hit a false positive, they've already sold a year's subscription to that customer.
My preferred version of "safe" is "in its actions considers and mostly upholds usually unstated constraints like 'don't kill unless necessary', 'keep Earth inhabitable', 'avoid toppling society unless really well justified for the greater good', etc. The kind of framing that was prevalent pre-ChatGPT. Not terribly relevant for a chat software, but increasingly important as chat models turn into agents.
Of course once you have that framing, adding additional goals like "don't give people psychosis", "don't give step-by-step instructions on making explosives, even if wikipedia already tells you how to do it" or "don't harm our company's reputation by being racist" are conceptually similar.
On the other hand "don't make weapon systems" or "never harm anyone" might not be viable goals. Not only because they are difficult to impossible to define, but also because there is huge financial and political pressure not to limit your AI in that way (see Anthropic)
Also an outsider, but my perspective is that "safety" has always been a nebulous term for a variety of concepts. No AI institution will ever give up on alignment because "the AI does what you want it to" is a pure functionality thing. On the other end of the scale there's a censorship aspect to it where models will refuse to provide wikipedia level information because it's "dangerous". The latter is very much subject to the whims of the labs, politicians, journalists, etc.
I don't think it's going to be anymore successful in the EU, honestly. The last couple of years have EU politicians throughly over their shit, and it's unlikely many concessions to US BigTech can be bought without serious reciprocity on the table (for example, a major expansion of US military aid to Ukraine)
Sure but if that’s the case there should be some tax on the mark to market difference. If not it’s just straight up tax fraud (which I suspect is often actually the case).
No, literally no one understands how to solve this. The only option that actually works is to isolate it to a degree that removes the "clawness" from it, and that's the opposite of what people are doing with these things.
Specifically, you cannot guard an LLM with another LLM.
The only thing I've seen with any realism to it is the variables, capabilities and taint tracking in CaMeL, but again that limits what the system can do and requires elaborate configuration. And you can't trust a tainted LLM to configure itself.
Race is a tricky topic in the US due to history. The term is an 18th century creation of German academia, but somehow got adopted in the late 19th c in the US, presumably because racial restrictions were written into law and so fancy terminology was adopted.