Pi – A minimal terminal coding harness (pi.dev)
375 points by kristianpaul 12 hours ago | hide | past | favorite | 167 comments



To me, the most interesting thing about Pi and the "claw" phenomenon is what it means for open source. It's becoming passé to ask for feature requests and even to submit PRs to open source repos. Instead of extensions you install, you download a skill file that tells a coding agent how to add a feature. The software stops being an artifact and starts being a living tool that isn't the same as anyone else's copy. I'm curious to see what tooling will emerge for collaborating with this new paradigm.

I see this happening, too.

We know that a lack of control over their environment makes animals, including humans, depressed.

The software we use has so much of this lack of control. It's their way, their branding, their ads, their app. You're the guest on your own device.

It's no wonder everyone hates technology.

A world with software that is malleable, personal, and cheap - this could do a lot of good. Real ownership.

The nerds could always make a home with their linux desktop. Now everyone can. It'll change the equation.

I'm quite optimistic for this future.


I'm presently in the process of building (read: directing claude/codex to build) my own AI agent from the ground up, and it's been an absolute blast.

Building it exactly to my design specs, giving it only the tool calls I need, owning all the data it stores about me for RAG, integrating it to the exact services/pipelines I care about... It's nothing short of invigorating to have this degree of control over something so powerful.

In a couple of days' work, I have a Discord bot that's about as useful as ChatGPT, using open models, running on a VPS I manage, for less than $20/mo (including inference). And I have full control over what capabilities I add to it in the future. Truly wild.


That's just because corporations got greedy and made their apps suck.

Strip away the ads and the data harvesting, add back the power features, and we'll be happy again. I'm more willing than ever to pay a one-time fee for good software. I've started donating to all the free apps I use on a regular basis.

I don't want to own my own slop. That doesn't help me. Use your AI tools to build out the software if you want, but make sure it does a good job. Don't make me fiddle with nondeterministic flavor-of-the-month AI agents.


> That's just because corporations got greedy and made their apps suck.

It is true for me with Linux. I code for a living and I can't change anything because I can't even build most software -- the usual configure/make/make install runs into tons of compiler errors most of the time.

Loss of control is an issue. I'm curious if AI tools will change that though.


I think there's room for both visions. Big Tech is generating more toxic sludge than ever, and yeah sure this is because they're greedy, but more precisely the root cause is how they lobbied Washington and our elected officials agreed to all kinds of pro-corporate, anti-human legislation. Like destroying our right to repair, like criminalizing "circumvention" measures in devices we own, like insane life-destroying penalties for copyright infringement, like looking the other way when Big Tech broke anti-trust laws, etc.

The Big Tech slop can only be fixed in one way, and actually it's really predictable and will work - we need to fix the laws so that they put the rights and flourishing of human beings first, not the rights and flourishing of Big Tech. We need to fix enforcement because there are so many times that these companies just break the law and they get convicted but they get off with a slap on the wrist. We need to legislate a dismantling of barriers to new entrants in the sectors they dominate. Competition for the consumer dollar is the only thing that can force them to be more honest. They need to see that their customers are leaving for something better, otherwise they'll never improve.

But our elected officials have crafted laws and an enforcement system which make 'something better' impossible (or at least highly uneconomical).

Parallel to this if open source projects can develop software which is easier for the user to change via a PR, they totally should. We can and should have the best of both worlds. We should have the big companies producing better "boxed" software. Plus we should have more flexibility to build, tweak and run whatever we want.


Very good points. I agree, and would add: interoperability is the key to bringing back competition and opening up the ecosystem again.

And then they will take away your right to boot whatever you want. For national security reasons and the children, of course.

What you're describing is the expected and correct outcome inside a profit-oriented, capitalist system. So the only way I see out of this situation would be changing policy to a more socialist one, which doesn't seem to be so popular among the tech elite, who often think they deserve their financial status because of the 'value' they provide, without specifying what that value is (or its second-order consequences). Whether that's abusing a monopolistic market position they lucked into, making apps as addictive as possible, or building drones that throw bombs on newborns in hospitals.

> a living tool that isn't the same as anyone else's copy

Yes, which is why this model of development is basically dead-in-the-water in terms of institutional adoption. No large firm or government is going to allow that.


Large institutions and governments had been pushing back against open source too until it became obviously inevitable.

It wasn't "inevitable", it took Red Hat and some other key players addressing the concerns the businesses and governments had, which took the better part of a decade. If LLMs as an ecosystem don't implode in the next year or so I imagine you'll start to see some big consultancies starting that same process for them.

And how great it will be to troubleshoot any issues because everyone is basically running a distinct piece of software

It's like the dude who monkey-patches his car and then goes to the dealer to complain that the suspension is stiff.

It's because you put 2-by-4s in place of the shocks, you absolute muppet. And then they either hand him a massive bill to fix it properly or politely show him out.

Same will happen in self-modifying software. Some people are self-aware enough to know that "I made this, it's my problem to fix", some will complain to the maker of the harness they used and will be summarily shown the door.


I don’t want to be the one who has to upgrade this software + vibe coded patches.

It’s very likely that once something is patched, it has to be considered diverged and very hard to upgrade.


... made minutes ago.

So everybody will be using (sometimes slightly, sometimes entirely) different software. Like mutations, these adapt to the specific problems in the situation they were prompted to be programmed for.

The skill-for-feature thing is just horrible; it's wasteful to everyone but the maintainer. It feels like a YOLO move people are getting away with because they drank some kool-aid.

Think of skills more like Excel macros (or any other software with robust macro support). It doesn't make sense for Microsoft to provide the specific workflow you need, but your own sheet needs it.

My current fave harness. I've been using it to great effect, since it is self-extensible, and added support for it to https://github.com/rcarmo/vibes because it is so much faster than ACP.

Can you shed some light on the speed difference of the direct integration vs. ACP?

I’m still looking for a generic agent interaction protocol (to make it worth building around) and thought ACP might be it. But (and this is from a cursory look) it seems that even OpenCode, which does support ACP, doesn’t use it for its own UI. So what’s wrong with it and are there better options to hopefully take its place?


Which ones have you compared it against?


wow, i love this! i was about to build this myself, but this looks like exactly what i want.

The better web UI is now part of https://github.com/rcarmo/piclaw (which is essentially the same, but with more polish and a claw-like memory system). So you can pick if you want TS or Python as the back-end :)

if i ever want a claw, i'd obv. go with this :)

The claw version’s web UI essentially has better thinking output, more visibility of tool calls, and slightly better SSE streaming. I’ve backported some of it to vibes, but if you want to borrow UI stuff, the better bits are in piclaw. I use both constantly on my phone/desktop.

Wdym harness? It's a coding agent

I think the thesis of Pi is that there isn't much special about agents.

Model + prompt + function calls.

There are many such wrappers, and they differ largely on UI deployment/integration. Harness feels like a decent term, though "coding harness" feels a bit vague.
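That framing ("model + prompt + function calls") can be sketched as a bare loop. Everything below is illustrative: the tool, the stubbed model, and the message shapes are hypothetical stand-ins for any chat-completions-style API, not pi's actual internals:

```python
# Minimal agent loop: model + prompt + function calls, nothing else.
import json

TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",  # toy stand-in tool
}

def fake_model(messages):
    # Stub model: request a tool call on the first turn, answer on the second.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "README.md"}}
    return {"answer": "done"}

def run_agent(prompt, model=fake_model):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = model(messages)
        if "answer" in reply:
            return reply["answer"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})

print(run_agent("summarize the readme"))  # done
```

Swap the stub for a real chat API and the `TOOLS` dict for real functions, and that's the whole shape of a harness; everything else is UI and integration.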


I discovered this via a YouTube video a few days ago.

I really like the customization aspect of it: you can build tools on the fly and even switch models mid-session.

There’s another project here called oh-my-pi. Has anyone here tried it?


I began with pi, and have been using oh-my-pi the last two weeks.

https://github.com/can1357/oh-my-pi

More of a batteries included version of pi.


How’s your experience so far with oh-my-pi?

Are you running it in some kind of sandbox? Does it have sandboxing features?

I haven’t met a single person who has tried pi for a few days and not made it their daily driver. Once you taste the freedom of being able to set up your tool exactly how you like, there’s really no going back.

and you can build cool stuff on top of it too!


What self-built capabilities do you like the most that claude code doesn't offer?

Not the person you replied to, but I'll stress the point that it is not just what you can add that Claude Code doesn't offer, but also what you don't need to add that Claude Code does offer that you don't want.

I dislike many things about Claude Code, but I'll pick subagents as one example. Don't want to use them? Tough luck. (AFAIK, it's been a while since I used CC, maybe it is configurable now or was always and I never discovered that.)

With Pi, I just didn't install an extension for that, which I suspect exists, but I have a choice of never finding out.


> I haven’t met a single person who has tried pi for a few days and not made it their daily driver.

Pleased to meet you!

For me, it just didn’t compare in quality with Claude CLI and OpenCode. It didn’t finish the job. Interesting for extending, certainly, but not where my productivity gains lie.


People seem to be really enjoying rolling everything themselves these days...

I've spent way too long working around the jank and extra features in Other People's Software.

Now I can just make my own that does exactly what I want and need, nothing more and nothing less. It's just for me; it's not a SaaS or a "start-up" I'm the CEO of.


Because it’s very easy to do nowadays. Why make compromises in your workflow anymore?

I've been using pi via the pi-coding-agent Emacs package, which uses its RPC mode to populate a pair of Markdown buffers (one for input, one for chat), which I find much nicer than the awful TUIs used by harnesses like gemini-cli (Emacs works perfectly well as a TUI too!).

The extensibility is really nice. It was easy to get it using my preferred issue tracker; and I've recently overridden the built-in `read` and `write` commands to use Emacs buffers instead. I'd like to override `edit` next, but haven't figured out an approach that would play to the strengths of LLMs (i.e. not matching exact text) and Emacs (maybe using tree-sitter queries for matches?). I also gave it a general-purpose `emacs_eval`, which it has used to browse documentation with EWW.
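One plausible way to route a harness's `read` through a running Emacs is to shell out to `emacsclient --eval`. This is a sketch under assumptions, not the commenter's actual extension: `emacs_read` and its `dry_run` flag are made up here for illustration (the elisp itself is standard):

```python
# Route a `read` tool through a running Emacs server via emacsclient.
# Assumes `emacs --daemon` (or server-start) is running when dry_run=False.
import shlex
import subprocess

def emacs_read(path, dry_run=False):
    # Visit the file in the running Emacs and return its buffer text,
    # so the agent sees exactly what the editor sees (unsaved edits included).
    elisp = f'(with-current-buffer (find-file-noselect "{path}") (buffer-string))'
    cmd = ["emacsclient", "--eval", elisp]
    if dry_run:
        return shlex.join(cmd)  # inspect the command without an Emacs server
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(emacs_read("/tmp/notes.md", dry_run=True))
```

A `write` override would be the mirror image: `emacsclient --eval` with `find-file-noselect`, `erase-buffer`, `insert`, and `save-buffer`.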


Nice! I'm curious to hear how you're mapping `read` and `write` to Emacs buffers. Does that mean those commands open those files in Emacs and read and write them there?

Let me also drop a link to the Pi Emacs mode here for anyone who wants to check it out: https://github.com/dnouri/pi-coding-agent -- or use: M-x package-install pi-coding-agent

We've been building some fun integrations in there like having RET on the output of `read`, `write`, `edit` tool calls open the corresponding file and location at point in an Emacs buffer. Parity with Pi's fantastic session and tree browsing is hopefully landing soon, too. Also: Magit :-)


Stop advertising pi, people. It _somehow_ continued to fly somewhat under the radar after that whole OpenClaw nonsense. Don’t make Anthropic sic their bloodhounds on them like they did on OpenCode.

Interestingly, since OpenClaw, there has been ~one post about Pi every week. But practically no one upvoted any of them except this one.

pi is an officially accepted harness of either Anthropic or OpenAI. I forgot which.


This looks great, but it feels really risky to add more and more tools to the harness from random repos. Nothing against this repo in particular, but I wish we had better security and isolation, so that I knew nothing could go wrong and I could just test a bunch of these every day, the same way I can install an app on my phone and feel confident it's not going to steal my data.

Big fan of this fork, been using it for everything for the last couple of weeks.

Went from codex/claude code -> opencode -> pi -> oh-my-pi


It is an awesome fork! Tried to contribute also, but the community seems quite close-knit.

I feel like this misses the point of pi somewhat. The allure of pi is that it allows you to start from scratch and make it entirely your own; that it’s lightweight and uses only what you need. I go through the list of features in this and I think, okay, cool, but why should I use this over OpenCode if I just want a feature-packed (and honestly -bloated) ready-made harness?

I'd quite like the web tools from oh-my-pi, but able to be extracted to a normal pi tool or plugin... Maybe I should look into that sometime...

Why not OpenCode?

Oh-my-bloat.

I am still an avid user of OpenCode (my own fork, with async tools etc.), but it is cumbersome and tries to do too many things.


very interesting, i tried it at the start but haven't come back. could you expand on what you mean?

I spent 3 months adopting the Codex and Claude Code SDKs only to realize they're just vendor lock-in and brittle. They're intended to be used as CLIs, so they're not programmable enough as a library. After digging into the OpenClaw codebase, I can safely say that most of its success comes from the underlying harness, the pi agent.

pi plugins support adding hooks at every stage, from tool calls to compaction, and let you customize the TUI as well. I use it for my multi-tenant OpenClaw alternative https://github.com/lobu-ai/lobu

If you're building an agent, please don't use proprietary SDKs from model providers. Just stick to ai-sdk or pi agent.


I left some notes about this. I agree with you directionally but practically/economically you want to let users leverage what they're already paying for.

https://yepanywhere.com/subscription-access-approaches/

Captures the ai-sdk and pi-mono.

In an ideal world we would have a pi-cli-mono or similar, like something that is not as powerful as pi but gives a least common denominator sort of interface to access at least claude/codex.

ACP is also something interesting in this space, though I don't honestly know how that fits into this story.


IIUC to reliably use 3P tools you need to use API billing, right? Based on my limited experimentation this is an order of magnitude more expensive than consumer subscriptions like Claude Pro, do I have that right?

("Limited experimentation" = a few months ago I threw $10 into the Anthropic console and did a bit of vibe coding and found my $10 disappeared within a couple of hours).

If so, that would support your concern, it does kinda sound like they're selling marginal Claude Code / Gemini CLI tokens at a loss. Which definitely smells like an aggressive lockin strategy.


Technically you're still using the Claude CLI with this pattern, so it's not a 3P app calling Anthropic APIs via your OAuth token. Even if you were to use the Claude Code SDK, your app is 3P, so it's in a gray area.

Anthropic's docs are intentionally unclear about how 3P tools are defined: is it calling the Claude app, or the Anthropic API with the OAuth tokens?


Unfortunately it's currently very utopian for (I would assume) most devs to use something like this when API cost is so prohibitively expensive compared to e.g. Claude Code. I would love to use a lighter and better harness, but I wouldn't love to quintuple my monthly costs. For now the pricing advantage is just too big for me compared to the inconvenience of using CC.

You technically still use CC; it's not via the SDK but via the CLI, programmatically triggered via pi.

Is this in line with Anthropic ToS? They cracked down hard on Clawdbot and the like from what I gathered. I guess if you are still invoking CC it might be fine, but isn't that gonna lead to weird behavior from basically doubling up on harnesses?

I also wondered for months why it felt so difficult to use the OpenAI and Anthropic SDKs until I came to a similar conclusion.

Pi has made all the right design choices. Shout out to Mario (and Armin the OG stan) — great taste shows itself.

I do not understand why in the age of ai coding we would implement this in javascript

It’s one of the most productive languages and ecosystems (IMO the top one overall).

It’s straightforward: JavaScript is a dynamic language, which allows code (for instance, code implementing an extension to the harness) to be executed and loaded while the harness is running.

This is quite nice. I do think there’s a version of pi’s design choices that could live in a static harness, but fully covering the same capabilities as pi without a dynamic language would be difficult. (You could imagine specifying a programmable UI and various other ways to extend the behavior of the system, but you’d likely end up with an interpreter in the harness.)

At least, you’d like to have a way to hot reload code (Elixir / Erlang could be interesting)

This is my intuition, at least.
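The dynamic-loading argument can be made concrete: in a dynamic language, an extension file can be loaded (and re-loaded) while the harness process runs, with no rebuild. A minimal Python sketch; the `load_extension` helper and file layout are made up for illustration:

```python
# Load and hot-swap an "extension" module at runtime, no restart needed.
import importlib.util
import pathlib
import tempfile

def load_extension(path):
    spec = importlib.util.spec_from_file_location("ext", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)  # runs the extension's top-level code
    return mod

ext_file = pathlib.Path(tempfile.mkdtemp()) / "greet.py"
ext_file.write_text("def tool(name): return f'hi {name}'\n")
ext = load_extension(ext_file)
print(ext.tool("pi"))            # hi pi

# Edit the extension on disk, reload, and the running process picks it up.
ext_file.write_text("def tool(name): return f'hello {name}'\n")
ext = load_extension(ext_file)
print(ext.tool("pi"))            # hello pi
```

In a static language you would need dlopen-style plugins or an embedded interpreter to get the same effect, which is the parent's point.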


Code hotloading isn't a particularly difficult feature to implement in any language.

Sure, but why spend effort implementing that feature yourself if your concern is building a harness, not reinventing hot reloading?

Rust can't even dynamically link!

I'm super on board the rust train right now & super loving it. But no, code hot loading is not common.

Most code in the world is dead code. Most languages are for dead code. It's sad. Stop writing dead code (2022) was nowhere near the first to call this out (it's decades and decades late), but it's still a good one. https://jackrusher.com/strange-loop-2022/


Incredible talk and I agree with all the things and I've worked on this problem a bunch.

But Rust can dynamically link with dylib but I believe it's still unstable.

It can also dynamically load with libloading.


I built my own harness on Elixir/Erlang[0]. It's very nice, but I see why TypeScript is a popular choice.

No serialization/JSON-RPC layer between a TS CLI and an Elixir server. TS TUI libraries are really nice (I rewrote the Elixir-based CLI prototype as it was slowing me down). Easy to extend with custom tools without having to write them in Elixir, which can be intimidating.

But you're right that Erlang's computing vision lends itself super well to this problem space.

[0]: https://github.com/matteing/opal


Thank god it's written in JavaScript. I might have skipped it if it were zig or something.

This confused me about openclaw for quite some time. The whole lobster/crustacean theme is just firmly associated with rust in my head. Guess it's just a claude/claw wordplay.


If you look at that code it’s possibly the worst rust code I’ve seen in my life. There are several files with 5000 to 10000 lines of code in a single file.

It looks 100% vibe coded by someone who’s a complete neophyte.


Fwiw @dicklesworthstone / jeff Emanuel is definitely my favorite dragon rider right now, doing the most with AI, to the most effect.

Their agent mail was great & very early in agent orchestration. Code agent search is amazing & will tell you what's happening in every harness. Their Franktui is a ridiculously good rust tui. They have project after project after project after project and they are all so good.

Didn't know they had a rust Pi. Nice.


You should look at the code in that project. It’s terrible, I mean, really, really terrible.

It’s clear it was 100% written by Claude using sub-agents which explains the many classes with 5000 lines of rust in a single file.

It’s a huge buggy mess which doesn’t run on my Mac.

If you’re a rust engineer and want a good laugh, go take a look at the agent.rs, auth.rs, or any of the core components.


I am building an entire GPT model framework from the ground up in Typescript + small amounts of c bindings for gpu stuff. https://github.com/thomasdavis/alpha2 (using claude)

Don't hate me, haha. And no, there is no reason other than that I can.


yes! I just don't understand that as well. Up until some time ago Claude Code's preferred install was an npm i, wasn't it? Serious answers please: why would anyone use a web language for a terminal app?

Because it's the preferred language of the person writing it.

So it can share code with the web app.

Because writing it in javascript is easier than writing it in raw brute forced assembly.


See also: pz: pi coding-agent in Zig

https://news.ycombinator.com/item?id=47120784


i wrote an agent in zig, it kinda sucks tho. the language is just words

Hugging Face now provides instructions for using local models in Pi:

https://x.com/victormustar/status/2026380984866710002


There is also pz, a drop-in replacement for pi rewritten in Zig: 1.7MB static binary, 3ms startup, 1.4MB RAM idle. Find more at:

https://github.com/elyase/awesome-personal-ai-assistants?tab...


Cool, thanks for this. What about the extensions though? For me the point about pi is minimal base plus configurable extensions you choose.

Direct link to pz for those on mobile: https://github.com/joelreymont/pz

Pi was probably the best ad for Claude Code I ever saw.

After my Max sub expired I decided to try Kimi on a more open harness, and it ended up being one of the worst (and most eye-opening) experiences I've had with the agentic world so far.

It was completely alienating and so much 'not for me', that afterwards I went back and immediately renewed my claude sub.

https://www.thevinter.com/blog/bad-vibes-from-pi


Technically you're not allowed to use Claude subscription account with Pi (according to Anthropic's policy). So yeah, Pi is the best anti-ad against Anthropic.

> I would say that the project actively expects you to be downloading them to fill any missing gaps you might have.

Where did you get this perspective from?

> I thought pi and its tools were supposed to be minimal and extensible. So why is a subagent extension bundling six agents I never asked for that I can’t disable or remove?

Why do you think a random subagents extension is under the same philosophy as pi?

Your blog post says little about pi proper, it's essentially concerned with issues you had with the ecosystem of extensions, often made by random people who either do or do not get the philosophy? Why would that be up to pi to enforce?


Sharing extensions is very much the philosophy. Using them however is less so.

Pi ships with docs that include extensions and the agent looks there for inspiration if you ask it to build a custom extension.

Looking at what others publish is useful!


> As it turns out, the opinions in question are that bash should be enabled by default with no restrictions, that the agent should have access to every file on your machine from the start, and that npm is the only package manager worth supporting.

Yep. This is why I've been going "Hell, no!" and will probably keep doing so.


> if I start the agent in ./folder then anything outside of ./folder should be off limits unless I explicitly allow it, and the same goes for bash where everything not on an allowlist should be blocked by default.

Here's the problem with Claude Code: it acts like it's got security, but it's the equivalent of a "do not walk on grass" sign. There's no technical restrictions at play, and the agent can (maliciously or accidentally) bypass the "restrictions".

That's why Pi doesn't have restrictions by default. The logic is: no matter what agent you are using, you should be using it in a real sandbox (container, VM, whatever).


But the agent has to interact with the world: fetch docs, push code, fetch comments, etc. You can't sandbox everything. So you push that configuration to your sandbox, which is a worse UX than the harness just asking you at the right time what you'd like to do.

I too would like to know what a good UX looks like here but I have doubts that the permission prompts of Claude are the way to go right now.

Within days people become used to just hitting accept and allowlisting pretty much everything. The agents write lengthy shell scripts or test runners that can themselves be destructive, but those get immediately allowlisted.


Well, you are imagining a worse UX, but it doesn't have to be. Pi doesn't include a sandboxing story at all (Claude provides an advisory but not mandatory one), but the sandbox doesn't have to be a simple static list of allowed domains/files. It's totally valid to make the "push code" tool in the sandbox send a trigger to code running outside of the sandbox, which then surfaces an interactive prompt to you as a user. That would give you the interactivity you want and be secure against accidentally or deliberately bypassing the sandbox.
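That broker idea can be sketched with in-process queues standing in for the real sandbox boundary (names like `sandboxed_push` and `broker` are hypothetical; real IPC would be a socket, pipe, or the container runtime):

```python
# Sandboxed code can't push directly; it files a request with a broker
# running outside the sandbox, which can surface an interactive prompt.
import queue
import threading

requests, decisions = queue.Queue(), queue.Queue()

def broker(approve):
    # Runs outside the sandbox; `approve` stands in for asking the user.
    req = requests.get()
    decisions.put("pushed " + req["branch"] if approve(req) else "denied")

def sandboxed_push(branch):
    # Inside the sandbox: the only capability is filing a request.
    requests.put({"action": "push", "branch": branch})
    return decisions.get()

# Policy: never auto-approve pushes to main.
t = threading.Thread(target=broker, args=(lambda req: req["branch"] != "main",))
t.start()
print(sandboxed_push("feature-x"))  # pushed feature-x
t.join()
```

The key property is that even a fully compromised agent inside the sandbox can only ask; it cannot bypass the decision point, which is what the advisory permission prompts can't guarantee.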

Paraphrasing The Dude, that’s like, just your opinion, man.

I had a very similar experience. I have different preferences, but ultimately, my takeaway was that if I want to follow my own version of their philosophy, I should just create my own thing.

In the meantime, the codex/cc defaults are better for me.


hypegrift

Is that an official term "coding harness"

Wondering: if you wanted a similar interface (though a GUI, not just a CLI) where it's not for coding, what would you call that?

Same idea cycle through models, ask question, drag-drop images, etc...


Yes. It seems to be the term that stands out the most, as terms like "AI coding assistant", "agentic coding framework", etc. are too vague to really differentiate these tools.

"harness" fits pretty nicely IMO. It can be used as a single word, and it's not too semantically overloaded to be useful in this context.


LLM harness has been in vogue for a year now…

A harness is a collection of stubs and drivers configured to assist with automation or testing. It's a standard term often used in QA, as they've been automating things for ages before GenAI came onto the scene.

Yes, it is also a device used to control the movement of work animals, which farmers have been using for ages before QA came onto the scene.

Has anyone used an open coding agent in headless mode? I have a system cobbled together with exceptions going to a centralized system where I can then have each one pulled out and `claude -p`'d but I'd rather just integrate an open coding agent into the loop because it's less janky and then I'll have it try to fix the problem and propose a PR for me to review. If anyone else has used pi.dev or opencode or aider in this mode (completely non-interactive until the PR) I'd be curious to hear.

EDIT: Thank you to both responders. I'll just try the two options out then.
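For what it's worth, the control flow described above ("exceptions in, PRs out") is roughly this; `run_agent` stands in for whatever one-shot invocation your harness supports (something `claude -p`-shaped, pi's RPC mode, etc.), and the function names here are made up:

```python
# Headless triage: for each deduped exception, run a one-shot agent pass
# whose job is to reproduce, fix, and open a PR for human review.
def triage_loop(exceptions, run_agent):
    prs = []
    for exc in exceptions:
        prompt = f"Reproduce and fix this error, then open a PR: {exc}"
        prs.append(run_agent(prompt))  # wraps e.g. `<agent-cli> -p <prompt>`
    return prs

# Stubbed runner so the sketch is runnable without any agent CLI installed.
print(triage_loop(["KeyError: 'user_id'"], lambda p: "PR#1"))
```

The only interactive step left is reviewing the PRs, which matches the "completely non-interactive until the PR" requirement.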


pi has an RPC mode which just sends/receives JSON lines over stdio (including progress updates, and "UI" things like asking for confirmation, if it's configured for that).

That's how the pi-coding-agent Emacs package interacts with pi; and it's how I write automated tests for my own pi extensions (along with a dummy LLM that emits canned responses).
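The JSON-lines-over-stdio shape looks roughly like this. The field names are illustrative, not pi's actual schema (check pi's RPC docs for that), and a stub child process stands in for pi so the plumbing is runnable:

```python
# One request/response round-trip over a child process's stdin/stdout,
# one JSON object per line.
import json
import subprocess
import sys

# Stand-in child that answers a single request; with pi you'd spawn its
# RPC mode here instead.
CHILD = (
    "import sys, json; req = json.loads(sys.stdin.readline()); "
    "print(json.dumps({'id': req['id'], 'result': 'ok'}))"
)

def rpc_call(request):
    proc = subprocess.Popen([sys.executable, "-c", CHILD],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            text=True)
    out, _ = proc.communicate(json.dumps(request) + "\n")
    return json.loads(out)

print(rpc_call({"id": 1, "method": "prompt", "params": {"text": "hi"}}))
```

In a real integration you would keep the process alive and read lines as they arrive (progress updates, confirmation requests) rather than using one `communicate()` per call.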


Been using pi exactly for this and it's working great!

You probably want to look into pi then - it's extremely extensible.

you can run https://block.github.io/goose/ in headless mode (I work on goose)

fast-agent lets you do this as well (and has a skill in its default skills repo to help with automation/running in container/hf job).

Note there is a fork oh-my-pi: https://github.com/can1357/oh-my-pi of https://blog.can.ac/2026/02/12/the-harness-problem/ fame. I use it as a daily driver but I also love pi.

I've been using Pi day to day recently for simple, smaller tasks. It's a great harness for use with smaller parameter size models given the system prompt is quite a bit shorter vs Claude or Codex (and it uses a nice small set of tools by default).

Which models do you use and what for? I'm looking for ideas to play with.

For local models I've been trying it with GLM-4.7-Flash and the new LFM2 24B model. I'm excited to try it with the new Qwen3.5 models that came out today as well.

Pi ships with powerful defaults but skips features like sub-agents and plan mode

Does anyone have an idea as to why this would be a feature? Don't you want to have a discussion with your agent to iron out the details before moving on to the implementation (build) phase?

In any case, looks cool :)

EDIT 1: Formatting EDIT 2: Thanks everyone for your input. I was not aware of the extensibility model that pi had in mind or that you can also iterate your plan on a PLAN.md file. Very interesting approach. I'll have a look and give it a go.


See my comment in the thread, but there is an intuitive extension architecture that makes integrating these types of things feel native.

https://github.com/badlogic/pi-mono/tree/main/packages/codin...


I plan all the time. I just tell Pi to create a Plan.md file, and we iterate on it until we are ready to implement.

Agreed. I rarely find the guardrails of plan to be necessary; I basically never use it on opencode. I have some custom commands I use to ask for plan making, discussion.

As for subagents, Pi has sessions. And it has a full session tree & forking. This is one of my favorite things, in all harnesses: build the thing with half the context, then keep using that as a checkpoint, doing new work, from that same branch point. It means still having a very usable lengthy context window but having good fundamental project knowledge loaded.


Check https://pi.dev/packages

There are already multiple implementations of everything.

With a powerful and extensible core, you don't need everything prepackaged.


I’m working with a friend to build a UI around Pi to make it more user friendly for people who prefer to work with a GUI (à la Conductor). You can check out the repo: https://github.com/philipp-spiess/modern

In the same spirit, I also ported a planning UI extension for Pi.

https://plannotator.ai/blog/plannotator-meets-pi/


What are people using to cost efficiently use this? I was using a Google Ultra sub which gave enough but that’s gone now.

ChatGPT $20/month is alright but I got locked out for a day after a couple hours. Considering the GitHub pro plus plan.


Run Qwen3-coder-next locally. That's what I'm doing (using LMstudio). It's actually a surprisingly capable model. I've had it working on some LLVM-IR manipulation and microcode generation for a kind of VLIW custom processor. I've been pleasantly surprised that it can handle this (LLVM is not easy) - there are also verilog code that define the processor's behavior that it reads to determine the microcode format and expected processor behavior. When I do hit something that it seems to struggle with I can go over to antigravity and get some free Gemini 3 flash usage.
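For anyone wiring this up: LM Studio exposes an OpenAI-compatible endpoint (by default on localhost port 1234), so any OpenAI-style client can point at it. A sketch with a made-up `ask_local` helper; the model name depends on what you have loaded, and `send=False` lets you inspect the request without a server running:

```python
# Talk to a local OpenAI-compatible server (LM Studio's default address).
import json
import urllib.request

BASE = "http://localhost:1234/v1/chat/completions"

def ask_local(model, text, send=False):
    payload = {"model": model,
               "messages": [{"role": "user", "content": text}]}
    if not send:
        return payload  # inspect the request shape without a server
    req = urllib.request.Request(BASE, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

print(ask_local("qwen3-coder-next", "Write hello world in C")["model"])
```

Harnesses that speak the OpenAI protocol (pi included, via its provider config) can usually be pointed at the same base URL.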

What kind of hardware do you run it on?

Same here

Qwen3 Coder Next in llama.cpp on my own machine. I'm an AI hater, but I need to experiment with it occasionally, and I'm not going to pay someone rent for something they trained on my own GitHub, Stack Overflow, and Reddit posts.

FWIW the lockout probably wasn't related... maybe the content you were working on or your context window management somehow triggered something?

You could try minimax 2.5 via openrouter.

MiniMax has an incredibly affordable coding plan for $10/month. It has a rolling five hour limit of 100 prompts. 100 prompts doesn't sound like much, but in typical AI company accounting fashion, 1 prompt is not really 1 prompt. I have yet to come even close to hitting the limit with heavy use.

Coming from OpenClaw, it's pretty amazing how fast pi is, particularly paired with Qwen3 that dropped today. It's a magical time.

What dropped today? Wasn't Qwen3 Coder Next released beginning of the month?

Qwen3.5 released a couple of days ago but I'm not that RAM rich


Alibaba released a whole set of new Qwen 3.5 models including a ~120B and a ~35B MoE.

Indeed, it seems to just work with a self-hosted Qwen3 Coder Next.

The way you’re able to extend the harness through extension/hook architecture is really cool.

E.g., some form of comprehensive planning/spec workflow is best modeled as an extension rather than built in natively. And the extension still ends up feeling “native” in use.


Anyone tried pi with 5.3-codex vs codex cli?

I run it almost exclusively with codex models. Zero issues.

Interesting approach to planning via extensions. I took a similar direction with enforcement: a governance loop that hooks into the agent's tool calls and blocks execution until protocol is followed. Every 10 actions (configurable), the agent re-centers. No permission popups, but the agent literally can't skip steps.

Open source: https://github.com/isagawa-co/isagawa-kernel
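For anyone curious how the "re-center every N actions" mechanic could work, here is a minimal sketch. This is not isagawa-kernel's actual code — every type and name below is invented for illustration; it only shows the counting/blocking idea described above.

```typescript
// Hypothetical sketch of a governance loop that blocks every Nth tool
// call, forcing the agent to re-read its protocol before continuing.
// None of these names come from isagawa-kernel.
type ToolCall = { tool: string; args: Record<string, unknown> };
type Verdict = { block: boolean; reason?: string };

class GovernanceLoop {
  private actions = 0;
  constructor(private readonly interval = 10) {}

  // Called before each tool call; returns block: true at each checkpoint.
  check(call: ToolCall): Verdict {
    this.actions++;
    if (this.actions % this.interval === 0) {
      return {
        block: true,
        reason: `Checkpoint after ${this.actions} actions: re-read protocol before calling ${call.tool}`,
      };
    }
    return { block: false };
  }
}
```

Wired into a `tool_call` hook, the agent sails through calls 1 through N−1 and hits a hard stop on call N, with the reason string fed back as the tool result.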


I've been using Pi for about a week as a daily driver, and so far I'm happy with it. I really like the modular concept, and also that it's rather minimal.

I’ve been testing it for a few days on a pretty much clean install (no customizations/extensions) and it’s OK. Not sure if I like it yet.


What's a coding harness? Claude Code is a "harness" and not a TUI?

The fact that it's a TUI isn't particularly relevant. It could be a GUI or CLI and provide very similar value.

Nearly all of its value is facilitating your interaction with the LLM, the tools it can use, and how it uses them.


If you run Claude Code with `-p --output-format json` it's no longer a TUI, but it's still a harness.

Anyone managed to run Pi in a completely sandboxed environment, where it can only access the cwd and subdirectories?


Yeah, I wrote a small Landlock wrapper using go-landlock to sandbox Pi, and it works well (it's not public; similar projects are landrun and nono).

Note that if you sandbox to literally just the working directory, Pi itself won't run, since pretty much every Linux application needs to be able to read from /usr and /etc.
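The wrapper mentioned above isn't public, but given the constraint just described (a writable cwd plus read-only /usr and /etc), a sandboxing extension might assemble a bwrap invocation roughly like this. The function name and exact flag set are illustrative, not taken from any real project — real setups usually need more binds (e.g. /proc, /dev, resolv.conf for networking).

```typescript
// Build a bwrap argv: the agent gets a writable cwd, but only read-only
// access to the system directories it needs in order to start at all.
function bwrapArgv(cwd: string, cmd: string[]): string[] {
  return [
    "bwrap",
    "--ro-bind", "/usr", "/usr",   // binaries and libraries
    "--ro-bind", "/etc", "/etc",   // ld.so config, certificates, etc.
    "--symlink", "usr/bin", "/bin",
    "--symlink", "usr/lib", "/lib",
    "--bind", cwd, cwd,            // the only writable path
    "--chdir", cwd,
    "--unshare-all", "--share-net",
    "--die-with-parent",
    ...cmd,                        // the wrapped command, e.g. a bash tool call
  ];
}
```

An extension could then run the result with something like `child_process.spawn(argv[0], argv.slice(1))` in place of the raw bash command.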


I’ve been tinkering with Gondolin, a micro-vm agent sandbox.

Here’s an example config: https://github.com/earendil-works/gondolin/blob/main/host/ex...


I do this with an extension. I run all bash tools with bwrap, plus ACLs for the write and edit tools. Serves my purposes. It opens up access to other required directories, at least for git and Rust.

I think I published it. Check the pi package page.


Excited to give this a try, looks really well done.

But I can't use my Codex plan with it, right? I have to use an API key?

You can use your Codex plan with it. OpenAI endorsed it several weeks ago, as far as I remember. That could change, but for now it seems safe.

You can use your Claude or Gemini plan with it too for now, though Anthropic and Google have made it clear this is a ToS violation.

Pi easily makes GPT-5.3-Codex perform about on par with Claude.

There's something in the default Codex harness that makes it fight with both arms behind its back, maybe the sandboxing is overly paranoid or something.

With Pi I can one-shot many features faster and more accurately than with Codex-cli.


Pi treats you like an adult and shows whatever the fuck LLM is doing rather than actively hiding shit from the user. And just for that, once you tasted the freedom and transparency, there’s no way to go back to CC.

After Claude Code 2.20.0, where they stopped showing by default which files are read and which searches are made... I fucking love how easy it was to ditch Claude Code for Pi.

I think OpenCode is the same.

They are all open source though, so you can just find out what's going on if you want, right?


Another batteries-included Pi setup. I built a lightweight mobile web UI to run it on Termux and code on my phone.

https://github.com/mikeyobrien/rho


Naming skills though...

I mean, using the captive agents is much cheaper than supplying your API key to a third-party agent.

Wtf is that example gif?

The prompt shown is

"Who's your daddy and what does he do?"

Is this a joke or tech? Is the author a dev or a clown?


No one cares about your opinions.

This coding agent certainly couldn't give a fuck.


It’s a quote from the movie Kindergarten Cop.

The backing to OpenClaw/MoltBot, or whatever they're calling themselves now. Why is it insecure? Well, Pi tells you: >No permission popups.

Anyway, even if you give your agent permission, there's no secure way to know whether what they're asking to do is what they'll actually do, etc.


> Why is it insecure, well, Pi tells you >No permission popups.

Pi supports permission popups, but doesn't use them by default. Their example extensions show how to do it (add an event listener for `tool_call` events; to block the call put `block: true` in its result).

> there's no secure way to know whether what they're asking to is what they'll actually do

What do you mean? `tool_call` event listeners are given the parameters of the tool call; so e.g. a call to the `bash` tool will show the exact command that will execute (unless we block it, of course).
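Based on that description — and I haven't verified Pi's exact extension API, so the event and result shapes below are approximations, not Pi's actual types — a listener that gates bash commands might look roughly like this:

```typescript
// Approximate sketch of a tool_call gate, following the mechanism
// described above: inspect the call's parameters and set block: true
// to veto it. Shapes and field names are guesses for illustration.
interface ToolCallEvent {
  toolName: string;
  input: Record<string, unknown>;
}
interface ToolCallResult {
  block?: boolean;
  reason?: string;
}

// Patterns this sketch refuses to let through without review.
const DANGEROUS = [/\brm\s+-rf\b/, /\bgit\s+push\s+--force\b/];

function onToolCall(event: ToolCallEvent): ToolCallResult {
  if (event.toolName === "bash") {
    const cmd = String(event.input["command"] ?? "");
    const hit = DANGEROUS.find((re) => re.test(cmd));
    if (hit) return { block: true, reason: `blocked by pattern ${hit}` };
  }
  return {}; // allow everything else through unchanged
}
```

Instead of blanket-blocking, the same hook could prompt the user and block only on denial — which is exactly the permission popup Pi leaves to extensions rather than forcing on everyone.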


You want to put agents in a sandbox such as bwrap anyway.

Just how expensive was that domain?

README on Github says “pi.dev domain graciously donated by exe.dev” (though that doesn’t say anything about the original price of course).

Oh, that's kind. I hope they keep the old domain up too, though: https://shittycodingagent.ai/


