Unclear why this is being downvoted. It makes sense.
If you connect to the database with a connector that only has read access, then the LLM cannot drop the database, period.
If that were bugged (e.g. if Postgres allowed writing to a DB that was configured readonly), then that problem is much bigger and has little to do with LLMs.
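As a sketch of what such a read-only connection looks like in Postgres (the role, database, and schema names here are placeholders, not from the thread):

```sql
-- A role the LLM connects as, limited to SELECT on existing and future tables.
CREATE ROLE llm_readonly LOGIN PASSWORD 'changeme';
GRANT CONNECT ON DATABASE mydb TO llm_readonly;
GRANT USAGE ON SCHEMA public TO llm_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO llm_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO llm_readonly;

-- Belt and braces: make every transaction on this role read-only by default.
ALTER ROLE llm_readonly SET default_transaction_read_only = on;
```

With this in place, a `DROP DATABASE` or `DELETE` issued over that connection fails with a permission error regardless of what the LLM generates.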
So that after the attackers exfiltrate your file to their Anthropic account, now the rest of the world also has access to that Anthropic account and thus your files? Nice plan.
> Just go back in time and get a snapshot of what the repo looked like 2 weeks ago. Ah. Except rebase.
This is false.
Any googling of "git undo rebase" will immediately point out that the git reflog stores all rebase history for convenient undoing.
Shockingly, git being a VCS has version control for the... versions of things you create in it, no matter if via merge or rebase or cherry-pick or whatever. You can of course undo all of that.
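For instance, a minimal sketch of the usual recovery (the branch name and reflog entries shown are made up):

```shell
# The reflog records every operation that moved the branch:
git reflog feature
#   3f2a1bc feature@{0}: rebase (finish): returning to refs/heads/feature
#   9d8e7f6 feature@{1}: commit: last commit before the rebase

# Undo the rebase by pointing the branch back at its pre-rebase tip:
git reset --hard 'feature@{1}'
```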
Up to a point - they are garbage collected, right?
And anyway, I don't want to dig this deep in git internals. I just want my true history.
Another way of looking at it is that given real history, you can always represent it more cleanly. But without it you can never really piece together what happened.
The reflog is not a git internal -- it is your local repository's "true history", including all operations that you ran.
The `git log` history that you push is just that curated specific view into what you did that you wish to share with others outside of your own local repository.
The reflog is to git what Ctrl+Z is to Microsoft Word. Saying you don't want to use the reflog to undo a rebase is a bit like saying you don't want to use Ctrl+Z to undo mistakes in Word.
(Of course the reflog is a bit more powerful of an undo tool than Ctrl+Z, as the reflog is append-only, so undoing something doesn't lose you the newer state, you can "undo the undo", while in Word, pressing Ctrl+Z and then typing something loses the tail of the history you undid.)
Indeed, like for Word, the undo history expires after a configurable time. The default is 90 days for reachable changes and 30 days for unreachable changes, which is usually enough to notice whether one messed up one's history and lost work. You can also set it to never expire.
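The expiry knobs mentioned above, as a sketch:

```shell
# Keep reflog entries forever in the current repository
# (defaults: gc.reflogExpire = 90 days, gc.reflogExpireUnreachable = 30 days):
git config gc.reflogExpire never
git config gc.reflogExpireUnreachable never
```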
It is fine for people to prefer merge over rebase histories to share the history of parallel work (if in turn they can live with the many drawbacks of not having linear history).
But it is misleading to suggest that rebase is more likely to lose work from interacting with it. Git is /designed/ to not lose any of your work on the history -- no matter the operation -- via the reflog.
But it's at best much harder to find stuff in the reflog than to simply use git's history browsing tools. "What's the state of my never-rebased branch at time X" is a trivial question to answer. Undoing a rebase, at best, involves some hard resets or juggling commit hashes.
None of it is impossible, but IMHO it's a lot of excitement of the wrong kind for essentially no reward.
It becomes a shell injection vulnerability as soon as somebody copies the same approach somewhere else, or an LLM gets trained on it.
Write correct code by default, always, otherwise it will end up somewhere you care about.
The best way to do that is to avoid shell, as a language that makes writing insecure code the most convenient.
(The original intent looks like it's making a desktop/launch icon, e.g. you might call it with "firefox" as an argument and it would put its logo into an application starter, provided a logo of the corresponding name is already in the place the script expects.)
This completely breaks the Linux experience for anybody living in a reasonably populous area. The issue has 3 upvotes.
I also put a $400 bounty on it, if anybody wants to give it a shot. (Given that AI was supposed to replace 90% of programmers last year, making the Wifi list stay visible should be easy, right?)
This worked fine 10 years ago.
Most of my gripes are around UI garbage behaviour like that. On one PC, a file manager (I think it's the Ubuntu one, where some "GUI in Snap" stuff breaks the GUI) breaks the file picker dialogue: when pasting a directory path in to navigate there, at the exact instant you press Enter, it autocompletes the first file so that that gets selected, leading you to upload a file you didn't want to upload.
That said, all of that feels like really high quality compared to when, once per year, I click the Wifi menu on some Windows machine and it takes 20 seconds to appear at all.
If you are sensitive to these issues, unfortunately you need to go with a mainstream linux distribution and use near-default settings.
It's great that you can customize everything and use your own window manager, compositor, etc ... but these issues are the price you pay. It is unfair to compare this to Windows, where you don't even have these customization options.
Specifically for the network manager applet, it is not fixed because it's not really used anymore. GNOME Shell has its own network selection menu that does not use the applet. It is the default on most systems, so users don't face this issue by default.
>With Linux, you just have to be prepared to hit a bug and find no help coming anytime.
I'd argue it's the opposite. Windows stuff randomly breaking on forced unattended updates is a common trope by now. If you try to search for solutions, you will find "Trusted Microsoft Computing Expert Gold Level Diamond Star" people on MS forums giving you advice like "reinstall drivers, uninstall drivers, update the BIOS, run a virus scan, and defrag your SSD".
If you search for problems on linux, you will get much higher quality answers.
> If you search for problems on linux, you will get much higher quality answers.
Not only that, but in the past I've cooked hacky bash scripts to work around issues while waiting for upstream fixes. I'd imagine that'd be harder with other OSs.
Also a long-time Linux user/administrator. Whenever I've tried searching for Windows answers to issues, I've been genuinely shocked by how low quality the answers are. I've got just a basic understanding of Windows, but it's obvious to me that over 99% of all Windows advice is from people who are just posting meaningless answers so that they can get points for answering or similar.
Well, see the sibling thread: looks like you just need to post your bounty on HN and somebody will do it within a few hours. Try that for Windows or macOS.
Sometimes I feel the bounty topic isn't well served yet. On the GNOME bug tracker it doesn't seem to be very discoverable. Are there current good platforms to advertise bounties where people actually look?
Are you proposing that the linux community offers worse support than any kind of software support that you pay for? I've found strangers on the internet to be worlds better than anything I've ever gotten from a vendor.
I had one client whose Explorer didn't load. We tried different file browsers; all that used Explorer as a backend failed to load. Only Double Commander (forgot the exact name, it's a dual-pane file browser like Midnight Commander) worked. And we couldn't find any solution online; in the end he was stuck with it for over a year, as reinstalling was not possible.
On Linux everything is mostly decoupled, so one thing not working is not going to break another, and I can replace it with something else.
People forget that, unlike on Windows, you're not working with a black box.
Most explorer issues are really file system issues. It's touchy. chkdsk in offline repair mode usually fixes it. For the rest, clear the thumbnail cache.
I did try those, but it did not solve the issue. But searching around I came to know this was a rare known Explorer bug, which was never resolved... so that's that.
The default remote desktop client on Windows 11 can have its picture freeze. Mouse and keyboard input still goes through though. (Which is especially dangerous because enraged users will smash their keyboard.) Years without a fix from Microsoft. Just a registry hack as a workaround.
I jumped on the Linux bandwagon with my main work laptop last week, when my perfectly fine (I thought) Windows 11 installation nuked itself without warning (possibly related to merely opening Teams).
I somewhat randomly chose Mint, and a few oddities aside; it’s been a pretty good experience.
It takes skill to make a GUI that integrates dynamic information into a good UX. For things like WiFi, I discovered that modifying config files is an infinitely better experience than any GUI on Linux.
Also, for some reason DEs sometimes fail to automatically connect to an AP when it's right there, and I have to click for them to do it. This issue literally never happened to me when just using wpa_supplicant; for years, whenever an AP was operational, so was the connection, without fail.
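For what it's worth, the file involved is tiny; a minimal wpa_supplicant.conf sketch (the SSID and passphrase are placeholders):

```
# /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=/run/wpa_supplicant
update_config=1

network={
    ssid="HomeAP"
    psk="correct horse battery staple"
}
```

Once wpa_supplicant runs against this, it reconnects to the AP whenever it is in range, with no GUI in the loop.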
Excellent, I will try it straight away. I'll pay out 75% if it works (as it fixes my immediate problem), and the remaining 25% if it gets merged. I'll email you after my test.
I think one possible complaint you might get in the review is that when refreshing is fully disabled in the menu, people won't see new networks come up (e.g. when they had just enabled Wifi, or unsuspended).
Maybe a good solution would be to have one unclickable menu entry pop up labelled e.g. "Networks changed, re-open this menu" to solve that. Probably in nm-applet's main context menu of which the list is a child, instead of in the list itself, so that its appearance doesn't move around the networks on which the user is currently intending to click.
It only stops refreshing if you are hovering over the actual SSID list items, which in my opinion is the cleanest way to do it; if you want new data, you can re-click/re-hover "available networks". The other option is putting the refresh on a global timer, but that would add magic which isn't clear to the user.
I agree that logic is sound, but it is also not discoverable to the user:
They might open the list (with the cursor resting on one of the items, or use the keyboard to navigate out of comfort or for accessibility reasons), then notice "oh wait, I haven't actually enabled my phone's Wifi hotspot yet", enable that, and wait forever for it to appear.
That's why I'm thinking something should visually (and non-visually) change so the user can notice.
Maybe even cleaner would be to add a tooltip to the currently-hovered entry? That might work for both mouse and non-mouse use cases, and might even work for screenreaders.
Yeah I think this is what OS X does (or used to do), you open the menu and it does its initial refresh, and only after quite some delay of the menu being open, it refreshes again. Easy enough to choose your network in that large amount of time. I may be missing some subtle details of it though, since I haven't used it in a while >_>
Last time I tried Linux I was so done with Windows I installed Arch. Couldn't connect to Wifi. I figured it was Arch, so I installed Ubuntu. Literally the same problem. So I got a new USB wifi adaptor that said it supported Linux...same problem. I gave up and have been using a MacBook ever since lol.
Re: "Or maybe the operating system should just work reliably for (at least) the basics?"
So, out of curiosity, if I tried installing MacOS on any of the 15+ computers I have at home, what are the likely chances that this "operating system should just work reliably for (at least) the basics?"
I can tell you that my success rate with Linux is 100%.
I’m not especially speaking for MacOS, but to your question, I suspect if you tried to install an appropriate version of MacOS on Mac hardware, you’d have very close to a 100% success rate. That’s certainly my past experience with Mac and, FWIW, Windows too.
Anyway, my point wasn’t that Linux should be perfect; but that if it can’t be, maybe give some help why, and more experienced users shouldn’t just jump to blaming the struggling newbie.
The key is this: if you want Linux to win with non-experts, it needs to target being a better experience for non-experts than the alternatives, to justify the effort of changing.
Re: "if you want Linux to win with non-experts, it needs to target being a better experience for non-experts than the alternatives"
I agree in broad terms, but let me recapitulate this. Which OS do you think would offer a better experience for non-experts when installing on bare metal? By my reckoning, Windows is a nightmare to install afresh on random hardware, and MacOS won't work on almost all random hardware. Users think that Windows is easier because they almost never have to install it from scratch.
Also, do you factor in the ever-increasing nuisances (AI, ads, spyware)[0][1][2][4] that Microsoft and Apple are injecting into their operating systems, and the move towards digital sovereignty which is accelerating in every nation outside of the US in any computation of what is a 'better experience'?
> I agree in broad terms, but let me recapitulate this. Which OS do you think would offer a better experience for non-experts when installing on bare metal? By my reckoning, Windows is a nightmare to install afresh on random hardware, and MacOS won't work on almost all random hardware. Users think that Windows is easier because they almost never have to install it from scratch.
I've done multiple installs of every Windows version (except 8) since the NT4 era, and multiple installs of OS X over the last decade. They have almost always been straightforward and successful, unless I've complicated things with weird partition/dual boot requirements. (OS X isn't really a fair comparison, as the target hardware is so hugely restricted.)
----
Aside from the initial installation 'just working' (which I accept might not be dramatically different with Linux these days, and indeed, I accept that Windows often needs additional drivers downloaded, depending on your system), there's another big factor to consider.
With Windows and OS X there's a long-established concept (at least, prior to the app store era) that if you want to install something, you download a file and run it. This applies whether it's drivers or software, and >95% of the time also provides a simple uninstall path. Even my elderly mother can grok this.
With Linux, this is my recent journey: Must I use APT or APT-GET? Flatpak? Snap? Or can I use the built-in Software Manager (FWIW, I really like the one in Mint, except when stuff isn't available on it.) Oh, so some software (Mullvad, Blender, etc.) I need to download manually? I've installed Mint; am I on a Debian system? Okay, I'll download the DEB, but then how to install that? (Oh, it failed - open-whispr). For other things, we must download an AppImage and make it executable - great, that works, but it doesn't have an install feature, so how to install it somewhere so that it's not forever sitting in Downloads? Huh, okay, I can figure that out, but it's a pain. Oh, wait, some of those self-contained files I've downloaded will run directly from the file manager, but for some reason fail silently via the start menu link I've just made. Okay, better trouble-shoot that tomorrow...
(For brevity, I've left out that at every stage, there were multiple web searches to find instructions for the correct approach, diving into all manner of forums, Stack Overflow posts, and Github repositories. And I've left out the more esoteric stuff, like slowing down touchpad scrolling via obscure command-line incantations.)
This is the reality of setting up a simple Linux system with (what is reputed to be) one of the most user-friendly distros there is.
And which is why, if the goal is Linux 'winning' on the desktop (beyond committed nerds) there's still quite some way to go on UX.
> Also, do you factor in the ever-increasing nuisances (AI, ads, spyware)[0][1][2][4] that Microsoft and Apple are injecting into their operating systems, and the move towards digital sovereignty which is accelerating in every nation outside of the US in any computation of what is a 'better experience'?
Totally with you, 100% - that's why I'm experimenting with a full shift to Linux myself. But this only applies to relative nerds. Many/most non-expert users don't know or care about such things.
Re: "Or can I use the built-in Software Manager (FWIW, I really like the one in Mint, except when stuff isn't available on it.)"
I think this (built-in Software Manager) is probably the right track for most normal users. Last time I checked, the Debian software repo had over 120,000 packages, so for most normal users, the bulk of what they need is likely there and thus likely easier to install than apps on MacOS or Windows. My usual track record for installing a new desktop for family members, including the top 100 apps they likely need, is under 30 minutes for Linux. The last time I tried this with Windows, it took days of effort and frustration and to some extent opened the computer up to security risks because of the multitude of binary sources I had to trust.
But yes, once you start needing specialist software, then your-mileage-may-vary. Having said that, apps like Blender are already in the Ubuntu repo, which should mean they are also in the Mint Software Manager, and thus a single-click away from installation.
In general, I would consider Linux to be the easiest platform to install software on for the most common 80% of the software that normal users need. It's certainly the easiest to maintain and update that commonly used software of any of the mainstream desktop OSes.
Again, I think a lot of the mismatch of norms & experiences comes down to what someone becomes accustomed to. If you're accustomed to downloading an installation binary (EXE/MSI) and double-clicking that to install on Windows, then you can become accustomed to downloading an installation binary (DEB/RPM) and double-clicking that to install on Linux (viz: https://www.youtube.com/watch?v=OOPQPrzmnw0).
Re: "I gave up and have been using a MacBook ever since lol."
I'm curious. What will you do when Apple too starts shoehorning AI into every part of MacOS and when Apple introduces increasingly unpalatable or government-mandated surveillance functionality like Microsoft is doing with Recall?
Asahi Linux, to not waste the hardware, and then slowly move away from Apple products. But in the meantime, their products are good and Unix-based, so they're not a pain for development.
Or, you could help accelerate the move away from proprietary platforms, even if there is a small hit to you personally. This is how we help save society, rather than having others do all the work, no?
In the end, it's in your best interests that Linux and open platforms improve in the direction you want them to, and the best way to achieve that is by joining the effort now.
Last summer Manjaro released its usual heavy update and suddenly Wifi on my old spare MBP was gone. Luckily, digging around I found that a firmware package was available in the AUR, so I just had to plug ethernet in, install the package, and reboot the system. But then another smaller update out of the blue made the system unbootable, so instead of doing "forensics" I went the easiest way and reinstalled the system; Wifi again worked out of the box.
This is still a problem. There are a lot of, eg, realtek chipsets that don't work well or simply don't work on Linux.
Another issue is they advertise "Linux support," which actually translates to: minimally working driver source available for very out-of-date kernel. Good luck if you want to rely on upstreamed drivers or even run a recent kernel.
Also the latest KDE UI that inserts a tiny password input box below the SSID when you click the SSID, and doesn't scroll it into view, so you're left wondering what's going on
As a maintainer, if you want to be able to tell real issues from non-issue discussions, you still have to read them (triage). That's what takes the time.
I don't see how transforming a discussion into an issue is less effort than the other way around. Both are a click.
Github's issues and discussions seem the same feature to me (almost identical UI with different naming).
The only potential benefit I can see is that discussions have a top-level upvote count.
> able to tell real issues from non-issue discussions
imo almost all issues are real, including "non-issue" - i think you mean non-bug - "discussions." for example it is meaningful that discussions show a potential documentation feature, and products like "a terminal" are complete when their features are authored and also fully documented or discoverable (so intuitive as to not require documentation).
99% of the audience of github projects are other developers, not non-programmer end users. it is almost always wrong to think of issues as not real, every open source maintainer who gets hung up on wanting a category of issues narrower than the ones needed to make their product succeed winds up delegating their product development to a team of professionals and loses control (for an example that I know well: ComfyUI).
If discussions had a more modern UI with threads or something then the difference might be real. But AFAICT it’s the same set of functionality, so it’s effectively equivalent to a tag.
They sorta do: each comment on a discussion starts a thread you can reply to, unlike on issues where you have to keep quoting each other to track a topic if there’s more than one. It still sucks, especially since long threads are collapsed and thus harder to ctrl-f or link a reply, but it’s something.
When you're shipping software, you have full control over LD_LIBRARY_PATH. Your entry point can be e.g. a shell script that sets it.
There is not so much difference between shipping a statically linked binary, and a dynamically linked binary that brings its own shared object files.
But if they are equivalent, static linking has the benefit of simplicity: Why create and ship N files that load each other in fancy ways, when you can do 1 that doesn't have this complexity?
That’s precisely my point. It’s insanely weird to need a shell script to set up the path for an executable binary that can’t do it for itself. I guess you could go the RPATH route, but boy, have I only experienced pain from that.
Sorry for the delay! It's fairly simple.
1. You have a column on the objects you want secured, typed as LTREE[].
2. You add a GIST index on that column.
The values should be the different hierarchy paths to access the object, starting with a "type", e.g. departments.root.deptA
When you run a query, depending on what access you want, you use a <@ query. E.g. I'm a user with root access to all depts: "col <@ 'departments.root'::ltree", or I'm a user in dept A: "col <@ 'departments.root.deptA'::ltree", etc.
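Put together, a sketch of the steps above (the table, column, and path names are examples, not from the original system):

```sql
CREATE EXTENSION IF NOT EXISTS ltree;

CREATE TABLE documents (
    id     bigserial PRIMARY KEY,
    title  text NOT NULL,
    access ltree[] NOT NULL          -- hierarchy paths that may see this row
);

-- GiST index so <@ queries on the array are fast
CREATE INDEX documents_access_idx ON documents USING GIST (access);

-- A row visible under deptA:
INSERT INTO documents (title, access)
VALUES ('Q3 report', ARRAY['departments.root.deptA'::ltree]);

-- A root user sees it (the array contains a descendant of departments.root):
SELECT * FROM documents WHERE access <@ 'departments.root'::ltree;

-- A deptA user sees it too:
SELECT * FROM documents WHERE access <@ 'departments.root.deptA'::ltree;
```

For arrays, `access <@ path` asks whether any path stored in the array is at or below the given path, which is what makes the "root sees everything, deptA sees only its subtree" behaviour fall out of a single operator.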