For those on zsh I have something similar [1-2]. It hooks to zshaddhistory and stores the command, running time, CWD, hostname and exit status in a sqlite database, and provides a simple query command.
With a git merge driver [3] the history database can be kept in source control and shared between hosts.
Queries look a bit like
$ histdb blah
time ses dir cmd
09/02 24 ~ ogr2ogr temp/blah.shp 'WFS:http://environment.data.gov.uk/ds/wfs?SERVICE=WFS&INTERFACE=ENVIRONMENTWFS&VERSION=1.0.0&LC=3000000000000000000000000000000'
09/02 24 ~/temp cd blah.shp
15/02 146 ~/.emacs.d git commit -am "blah"
22/02 175 ~ test="asdf/blah.shp"
09:48 743 ~/.histdb hist blah
The ses column here is a unique (per-host) session number, which means it can recreate any transcript with a suitable query; if I ran histdb -s 24 it would produce the whole session containing the top two results above, including directory history.
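For anyone curious what the recording side of such a hook can look like, here is a minimal sketch for a .zshrc. This is not the linked sqlite-history.zsh; the table layout and the HISTDB_SESSION variable are made up for illustration.

```shell
# Illustrative zsh snippet -- not the real sqlite-history.zsh. Assumes a
# `history` table with (session, dir, host, cmd, started) columns.
_histdb_record () {
    local cmd=${1%$'\n'}                  # zsh passes the line with a trailing newline
    sqlite3 "$HOME/.histdb/zsh-history.db" \
      "INSERT INTO history (session, dir, host, cmd, started)
       VALUES ($HISTDB_SESSION, '${PWD//\'/\'\'}', '$HOST',
               '${cmd//\'/\'\'}', strftime('%s','now'));"
    return 0                              # keep the entry in normal zsh history too
}
autoload -Uz add-zsh-hook
add-zsh-hook zshaddhistory _histdb_record
# The exit status can't be known here (zshaddhistory fires before the command
# runs); a real version records it afterwards from a precmd hook.
```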
Looks good. You should separate this into another repo with a README. I use zgen [1], a zsh package manager which lets you add libraries via GitHub repos. That way I can keep it up to date without manually checking in on the project.
I don't use a zsh package manager and don't really want to learn one, but if you can tell me the minimal glue I need to add to make these files into a package I will happily duplicate them into another repository and keep them up to date there. Ed.: https://github.com/larkery/zsh-histdb
I haven't done anything to import the history file because some of the information is missing so it would be a bit untidy. You could do something like
history 0 | while read -r num line; do
zshaddhistory "$line"
done
This will make bogus entries where the history includes newlines; you could probably use a loop on history number instead, or do something in the while loop to accumulate split lines by looking at "$num" and seeing if it goes up.
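A rough sketch of that accumulation idea for a .zshrc, treating any line that doesn't begin with the next event number as a continuation of the previous entry (zsh syntax; `zshaddhistory` here stands in for whatever recording function you use, and the `history 0` output format is assumed to be "  <num>  <cmd>"):

```shell
# Glue multi-line entries back together before recording them.
import_history () {
    local line buf='' last=0
    history 0 | while IFS= read -r line; do
        if [[ $line =~ '^ *([0-9]+) +(.*)$' ]] && (( match[1] > last )); then
            [[ -n $buf ]] && zshaddhistory "$buf"$'\n'   # flush previous entry
            last=$match[1]
            buf=$match[2]
        else
            buf+=$'\n'$line                              # continuation line
        fi
    done
    [[ -n $buf ]] && zshaddhistory "$buf"$'\n'
}
```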
> The last few versions of MacOS have introduced random fires which burn down your .bash_history, leaving this occasional sad, sad result:
$ wc -l .bash_history
0
I think there is some context missing here... Why is your .bash_history getting cropped? Was there a change in some default configuration of your Terminal app on update perhaps?
On a side note, I think fish-shell really got history recall and completion right, which is the one thing that keeps me using it even though I still script in bash.
I like your zsh history too. Will you post all of it here unedited?
Also, on pentests the ~/.bash_history file is literally the first thing we look at. It's lovely: in an instant, you know exactly which files on the system are important, why they're being used, and what the high-priority targets might be for pivoting to a different box. In some cases it cuts hours off the job. So to keep hackers happy, please don't delete your bash history on production servers.
It's just a delay. Are you really suggesting that making work harder on a daily basis is worth it to slow down by a few hours (not preventing!) lateral movement in an already exploited system? Because that's why users hate security restrictions.
Also, you shouldn't really be running sensitive commands on a production server ever. And if you are you really should clear the bash_history. Sensitive data leaks into that file and I don't see a clear argument that it really makes your life any faster. Use something like etckeeper and document the important stuff on an internal, authenticated, Wiki or some other more secure central note repository. Often you don't need the whole history for prod, but the few important commands.
In a perfect world most things are done via config management, leaving only debugging to be done on prod servers, which should be transient in nature.
In the real world, prod servers should still get treated as transiently as possible.
You shouldn't, but you will. Especially in small companies. The production with no access is a holy grail that is both a good idea and harder to achieve the bigger you get. Almost nobody starts a new service (unless you're already in a company where a framework for this exists) with "I'll use an immutable architecture with staging environment and simple traffic failover and database I can easily clone for debugging and...". They start with "I'll get a VM and put my app on it".
Authenticated wiki or documentation repo is fine. Automation for tasks, external log shipping, error collection, and a separate db interface are even better. A complete mirror of production in a separate, debuggable environment is the bee's knees. But they all take time to achieve. In the meantime, with one or two VMs running your business, I bet you're glad to have a command history, especially if something's down at 3am :-) It all depends on your size and system.
By all means, steam ahead. It's certainly your right as a developer not to be slowed down by pesky restrictions. The easiest marks were always the companies that had a culture of hating security restrictions.
Wait a minute. If you've owned my user, what's to stop you from setting the histfile even if I turned it off? Not to speak of how you can just listen to my pty.
I mean, it's really clever talking like a mafia enforcer making vague threats but everyone here is treating you respectfully and you're not responding in kind. It's not cool, man. If you don't feel like engaging, don't.
You have already pwned the machine, what disabling history is gonna do to save someone's ass after that?
The hassle of not having a history file every day is a much bigger cost than having it on an already pwned machine, isn't it?
Stop being an ass, that's why most people don't listen to security experts, instead of educating people in a kind way and making the world more secure most of you like to live on your high ivory towers with snark comments, passive-aggressiveness doesn't help anyone or any discourse.
I guess you've never typed a password into the command line by accident? (sudo apt-get install<forget to hit enter>hunter2)
Or passed in a password as an argument to any programs? (e.g. mysql -uusername -ppassword -hprod.db.com)
Plus, if the commands are important as they appear to be (with the need to be referenced days or weeks later), assuming they don't contain passwords, why not put them into a protected team wiki (or a private Github Gist)?
As an attacker, your goal is to get the most amount of privilege and credentials in a brief amount of time without being detected while it's happening. So you grab .bash_history, .bashrc, .bash_profile, .aws/credentials, .ssh/config, .ssh/id_*, .ssh/known_hosts, .pgpass, .psql_history, .mysql_history, .gitconfig, and then get out before a second has even elapsed.
As a target, your goal is to limit the blast radius as much as possible, and by following certain practices, if/when you are compromised you can accurately state what the attackers could or could not have had access to.
This isn't about being 100% hackproof, it's about limiting the damage when you are hacked.
As a team member with promises to keep, my only goals are getting my job done and meeting my KPIs. I cannot afford to not have bash history or any other history. And I certainly don't have time to do the securistatsi's job for them.
If a server is 'compromised' I delete it from the stack, spin up another and provision it, all within 60s. The goal is not preventing being hacked, it's being in a position where it does not matter.
Meeting your KPIs has to happen within the InfoSec framework of your company. The "securistatsi" exist for a reason - not to slow you down, but to protect the company from you. If your only goal is hitting your target then _someone_ has to make sure you're not doing it the wrong way, as you clearly don't care - full speed ahead, security be damned.
If your server is compromised, you shouldn't be back in business in 60s. You need to rotate EVERY secret on that instance and everything it connected to. Your main database that coordinates 50 servers? One of the 50 gets compromised, you need to change the pw on all 50. That should include 3rd party API keys and other fun stuff too.
And PS all of that needs to happen _after_ you find out how they got in.
> I guess you've never typed a password into the command line by accident? (sudo apt-get install<forget to hit enter>hunter2)
> Or passed in a password as an argument to any programs? (e.g. mysql -uusername -ppassword -hprod.db.com)
I'm legitimately curious about their question so I'll phrase it differently: have you ever found something in the shell history that you wouldn't have discovered on your own? Is it more of a delay tactic or real deterrent?
I have done my share of internals. Some of the time you find valuable clues in administrative audit logs and bash_history, clues that let you make a good and impactful finding within your timebox. The reality is that an attacker with this level of access is going to have more time than I will, but it still means a measurable impact on security, because it decreased the time it took to find something. So in that sense the OP is correct: it comes down to a risk/reward trade-off, and whether it's worth clearing the history to potentially buy yourself more time in the event you are owned. It isn't as black and white as "well, the server is owned, of course you have my bash_history, that is the least of my worries now". It is, but it could also be the one thing that gives you 3 extra hours to detect and shut down an attack.
I'm not a developer. I'm actually an ex security engineer, but thanks.
Restrictions on what people want to do are always a bad idea. Removing history where people actually use it means that you'll get shadow IT. That means snippets of commands in home directories, "manual" history equivalents, and expect scripts with passwords embedded shared between teams.
Instead of trying to restrict what people want, you can make it easier - spend time automating tasks, providing simple interface to need information, and make sure you can take hosts out of live system for debugging (and then rebuild instead of returning). If there's important stuff in the history file, that means there's a need for history access. Working on solving that will give you more benefits than killing history files.
This sort of thinking is why ssh known_hosts is hashed by default, breaking tab complete for ssh.
I really wish it weren't, and a decent penetration tester will know what to do with the known_hosts file on a machine with IP 192.168.0.83 anyway (which is the common case for me), or can just feed it to one of those botnets that ssh to every public IPv4 address on earth every 60 seconds or so.
But, like password restrictions, the important thing is it inconveniences everyone, so it looks like the security team is doing something.
As a blue team guy, either I'd catch you before you got that far or I wouldn't catch you even after pivoting. Either way, it's not worth slowing down business for that chance.
Sure, I might fail a pentest, but what I'm concerned with is protecting against actual real-world attacks without disrupting business operations. Protecting against a pentest just looks good on the pentester's checklist, what really matters is "did we get hacked?".
I am biased here, but that isn't really fair if the person performing the penetration test is any good. Speaking strictly of external/internal penetration tests here, an average penetration tester, the kind you are referencing, will smash around for a bit, find one path to domain admin / service accounts on your Unix-likes and call it a day. Write a report and say, "yeah, you got pwned".
A good penetration tester will look around a bit with their elevated privileges and maybe find some other things and write those up too.
A great penetration tester will own everything, document how they did it initially, use their elevated access to gain an advanced understanding of your environment, document this understanding, and find as many nooks and crannies as they can in their assessment time frame. They will document these issues realistically and label things like auditing and clearing bash_history as defense in depth.
But the parent also has a point. These little things are accelerators on a pen test. No they aren't going to be the thing that gets you owned.
To be fair I have seen plenty of pen testers that wield findings like this as a hammer making it seem very important while missing the forest (for the tree in front of them) so to speak. Also, the best question probably isn't "did we get hacked?" but "how badly are we hacked right now?", because you are certainly "hacked", just maybe not at a nuclear melt down level.
Yeah, you stated my intent better than I did. Most pentesters I've worked with would show a bash history and say "look, you failed, fix these" and my client would get mad at me for not securing this even though this isn't what would get you hacked. It's like a home security company recommending you keep your cell phone in a locked safe at night instead of on your nightstand. Sure it's safer, but it's really inconvenient and a good home security system would notice the robber kick down your door before your phone even gets stolen.
The idea of any security system should be to complement normal business operations, not to hinder them.
I'm a programmer and an historian. So the title of this entry gave me a wrong first impression: there are more people like me! I'm not alone! Imagine the disappointment... ;)
I am one of the dozens. If only I was able to locate a history-related technology job. It seems like you need to be in Washington DC to find anything like that.
In the Netherlands, where I live, there is the International Institute of Social History, which does a lot of data processing and has programmers as employees. As a student I screwed up an interview very badly by not showing up. I still regret it after 20 years. How stupid can a youngster be. On the other hand, I don't have any complaints about the work, pay and benefits at the jobs I have had since then, which probably would have been very different had I gone the history-programmer route.
But I quit the academic path for economic reasons, i.e. bad pay and needing to apply for grants. I think unless you're explicitly acknowledged as an engineer/programmer it's hard to get anything remotely competitive.
I do the same thing, and have a couple of commands set up to run grep over the last day, week, month, or ever in my history. I don't use my full history every day, but it's something worth keeping around.
What an interesting contrast in attitudes toward privacy:
He makes an extraordinary effort to maintain his bash history whereas I jump through hoops to delete my history (on logout using a .bash_logout file).
I also set my browser to maintain no history, disable saving of chats in everything, delete caches wherever possible, and generally wish that apps and programs had an option to save zero information on exit!
If I want "history" beyond a session, I'll explicitly make a shell alias for a command that I want to remember, or make bookmark for a site I want to revisit, etc.
Your bash history is stored on your computer, so there isn't really a difference regarding privacy unless you let someone else sit at your computer.
Furthermore, for many people, using the command line is part of their daily job. Not keeping a history and/or log is akin to compiling your source code, keeping the binary and deleting the source.
If there are certain things you don't want in your bash history, use the ignorespace capability and explicitly do not store them.
You can do HISTFILESIZE=0. That way the .bash_history file is never written to, but you can still get the history that's in memory.
If you write it to disk, it could be available on the disk via an undelete tool or via a backup - which is a concern if you ever accidentally paste a password onto the command line.
Yea, I don't know why you would do that, why would you cripple your own tools like that and pay a heavy price each day in productivity and convenience to protect against the really insignificant risk of losing... what exactly?
Now, paranoia is a scale, and we are, I'm sure, Gaussian-distributed on it. I approve of this, as it provides a nice safeguard against extinction events for the hump in the middle. But man, I'm glad AF I'm not on your side of the curve. That would suck.
in my .bashrc for 20 years. I just don't see the appeal of persistent history. And I've seen too many intrusion pastes where passwords and databases are pulled out of the bash history file.
In a work environment with many complex shell commands to perform daily tasks shell history is the most efficient way to repeat tasks with complex command lines performed weeks/months ago. Documentation, wikis etc are always slightly lacking/stale and are less effective than simply seeing how you did it last.
I see the appeal of this approach. Browser histories are nice in theory because you could search for what you want to revisit, but it never works very well. It'd be better to learn to develop a trigger happy bookmark finger.
I still try to keep my site history available, but it's very awkward with the amount of non-family content that I visit. Oh well.
It searches tags and urls too. I'd like a full body-text search for bookmarked pages or something similar; I'm a hoarder (I actually have a script to archive all my bookmarked pages, via wget, so I can grep them).
My history file is a symlink into a directory synced to google drive. That's easy and appropriate security. You can turn on 2FA. People who think that's inappropriate security are forgetting all the items (like convenience) on the two sides of the cost-benefit analysis.
Written in go, uses sqlite and can work locally or remotely. In the latter case you can send history from many accounts and search globally. Transport encryption is based on NaCl. One binary serves all roles.
Doesn't store other information than command line and time but this is what I need most of time.
The main pain point behind its creation was searching in history. There is a global flag (search in all accounts, -g), the wild card is SQL's wild card (%). Less frequently used but very helpful, content search (-A, -B, -C, like grep). Also regexp is supported though very rarely is needed.
tbqh, I would like to see regexp be default in more situations. It's handy and important to learn for personal growth and setting it as a default would passive-aggressively try to encourage more novices to learn it.
Regexp search is slower. SQLite's support for regexp comes from an external library (sqlite3-pcre) that most of the time isn't installed on the host system. Even when it is, it doesn't always work.
My solution around this is, when regexp is enabled, to query for all the history and then have Go perform the regexp search itself; normally SQLite performs the search.
Depending on your CPU and history size, you may need a couple of seconds for the result. For example, when the server process runs on an Atom processor and the history is about 150,000 lines long, a regexp search takes about 7 seconds. A normal search with a wildcard on the same machine and history database needs only 1 second.
If it's slower, then use a simpler query or come up with a way to improve the back-end search speed (which you did, it looks like).
My point is that novice developers don't usually work with that size of data. And if they are working with that size of data, then both regex and performance are good things for them to learn :)
Holy guacamole. You madman, you put your whole bash_history on a public github?
If someone put a gun to my head and asked me to give up my bash history, I'd probably do it, but not before thinking twice.
I've also kept my bash history (well, more like my zsh history actually) for a few years and across a few re-installs now. I'm sure there's a few cleartext passwords in there (for nothing too important), a couple of dubious wget commands, all the videos I've ever watched (I use a CLI video player) and who knows what else.
Oh and a bunch of confidential work stuff, I'm sure. Now that I think about it, maybe I should delete it...
For me, the *_history is a deeply personal thing, as it shows how I work and how my way of working has changed over the years.
I think you are very brave to make this document available to the world - it is not something I would like to do.
EDIT: I think of it like the browser history for your daily work.
EDIT 2: It seems quite a lot of people think it is a good idea to store their bash_history on github, who knew!
google dork: "inurl:github.com intitle:bash_history"
When I was first programming and into UNIX it was the norm amongst those I was learning from that you'd symlink your history file to /dev/null
This had a few advantages which might not be immediately apparent. The most obvious is that you remove the risk of leaking private or sensitive data.
The second is that searching your history is an anti-pattern: if there is a command you can't remember, or a group of commands you need to accomplish a task, you should be writing shell scripts.
My ~/bin folder has 104 files in it[0] and is in git, is available wherever I log in, whichever tty I have open and has a much better taxonomy than 'search for part of command' based on doing tasks. The same can't be said for history files.
[0] at its peak when I was a BSD and RHEL user it would have been more files, but I've recently switched to Alpine.
The problem with history is that it records all your activity and is a big snitch. On the other hand, you normally don't want to save repeated or trivial commands like "ls ." in your database. You can have the best of both worlds if you configure its behaviour in bash.
With HISTCONTROL set to include "ignorespace", any command that starts with a space still runs but is discarded from bash history. Since "the internet is for porn", this can be used to drop any video-viewer entry automatically (add a line like "alias vlc=' vlc'" to your .bashrc). History can also be configured to save only one copy of duplicated commands.
If you have a sensitive or difficult-to-memorize command, the best place for it is a script, not a db.
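Concretely, the bash knobs being described might look like this in a ~/.bashrc (the HISTIGNORE patterns are just examples):

```shell
# Drop space-prefixed commands and repeated duplicates from history.
HISTCONTROL=ignoreboth:erasedups    # ignoreboth = ignorespace + ignoredups
# Skip trivial commands entirely:
HISTIGNORE='ls:ls *:cd:cd *:clear'
# Hide a program by making its alias start with a space:
alias vlc=' vlc'
```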
I like that as a concept, except it doesn't even go into local history (up arrow), which makes the feature so frustrating for me that I ended up turning it off. Too many times I copy and paste something and end up with a little extra whitespace at the beginning. I guess the number of times I've done it on purpose was dwarfed by the number of times I did it on accident, and that killed it for me.
To a point you have a point, but there is a point at which your ~/bin becomes unwieldy. I have 386 commands in mine, and for commands I don't use very often, I tend to forget their names, or that they even exist. So searching my fish history with Percol can be much faster than trying to remember the specific, hyphenated name I gave to the script I wrote a year ago.
I also have a `~/.savedcommands` file that I save one-liners to with a comment at the end, and `C-s` at the prompt loads that file into Percol, so I can search these more easily than searching the entire shell history. `savelastcommand` saves the last command to this file, or `savecommand COMMAND` does the same thing.
Odd that your history is being cleared out, it's never happened to me.
I always keep my own history of every command I ever run. I've moved this from computer to computer. So I can look back years in the past and see what I've done.
case "$TERM" in
xterm*|rxvt*)
    # Capture the exit status first; otherwise the title-setting echo clobbers $?.
    PROMPT_COMMAND='RET=$?; echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD/$HOME/~}\007"; echo `date +"%b %e %Y %H:%M:%S"` $RET \"`history 1|cut -c7-`\" in `pwd` >> ~/.audit'
    ;;
*)
    ;;
esac
It seems like Bash history was made for a time that is long gone, and has not been fixed to work in a modern environment. I have tons of shells open, and I cannot promise to close them carefully and sequentially.
You can append to .bash_history immediately, and thus make the items available in any other open shells immediately as well. This is supported natively. Search for `shopt -s histappend`.
I want up arrow to give the last command from that shell though. I'd like Ctrl+R to search all the full history, but I don't think bash can do that combination.
Also I just don't trust bash to actually keep my history, no matter what I do with HISTSIZE and HISTFILESIZE.
`histappend` probably works the way you want it to.
I have about a half dozen terminals open. I want to have the current shell's history seem consistent within any shell, but I want ^R in any NEW terminal to include all the commands entered in any terminal.
shopt -s histappend
HISTCONTROL=ignoredups:ignorespace
# append to bash history Right Now
export PROMPT_COMMAND="history -a"
I share your distrust, though -- I've frequently lost my bash history for $REASONS (which another comment here seems to have answered :)) -- but the convenience of not having my shell history clobbered by which-shell-I-close-first is _amazing_.
'histappend' just makes the shell append its history list (from current session) to the history file, instead of overwriting it. This still occurs at exit time, not immediately.
Besides, since you load the HISTFILE at startup, you'll append your whole history (file+session) to the HISTFILE each time you exit a shell, so either it will grow very large quickly, or you set HISTFILESIZE to limit it but as a consequence you'll lose entries (and unfortunately not the oldest).
The option "only append the session entries typed in this session", while still using the whole history (file+session) for searching, is missing, as far as I know.
Fish shell has that problem fixed. History is always saved correctly and if you want access to the history of your other open shells, you can run history --merge
Yes, I should try fish or zsh. I'm just afraid I will spend a lot of time debugging incompatibilities. At least my current toolset doesn't involve source:ing stuff, so it should be ok.
Last time my .zsh_history was randomly emptied on me I was so so stranded - I felt helpless.
I've now doubled-down on it and actually store lots of todo notes or receipt numbers or random snippets like that in my shell history as comments. It's just always there :-)
And fortunately btrfs snapshots (and one time, a backup from a laptop migration...) have saved me from losing it for a long time now :-)
Shell history is awesome. I'm always really intrigued when I see someone like a sysadmin who doesn't use shell history very well (read: at all...)
Alternatively, you can use zsh and not lose history; even better, have history work simultaneously across multiple terminal windows, without closing a window to save/read the history of other windows. http://ohmyz.sh/
# share history among terminals
setopt share_history
# append to the history instead of overwriting
setopt append_history
# append to history incrementally instead of when the shell exits
setopt inc_append_history
I've gradually come up with a way too elaborate system for maintaining and sharing bash histories locally. I run tmux with several long-running bash prompts and not only I want to append each shell's history into a log incrementally but also to share history between the running shells.
I have a function as my prompt command that does:
- dump the current in-memory history into a separate logfile every few minutes timestamped (using HISTTIMEFORMAT)
- clear the local in-memory history and reload in-memory history from unique history lines from all stored log files, using the most recent timestamp on each
Once there are more than N log files, one of the shells does a similar uniquifying compaction step for N log files and writes the output into one new log file. It uses 'mv' to do the compaction atomically, by moving the affected log files away into a temporary directory for compaction.
If other shells were to update their logs during the compaction, those additions would just be saved as normal and compacted in the next cycle. Duplicate history lines are tolerated in the logs to keep things fast, so there's no chance of a race condition: no command line in the history can be accidentally deleted, and any duplicates are simply removed during reading.
As a final step I atomically copy the latest compacted history into .bash_history as a backup. The real data lives in my logs anyway but a single .bash_history is easier to backup: it might be a few lines short but it's always mostly up-to-date.
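I can't speak for the exact log format, but the uniquifying merge step can be sketched like this, assuming each log line is "<epoch>;<command>" (my guess at a format; real multi-line commands, or commands containing ';', would need a sturdier delimiter):

```shell
# Merge several history logs, keeping one copy of each command with its
# most recent timestamp, and emit the result in time order.
merge_histlogs () {
    cat "$@" |
        sort -t';' -k1,1n |                 # oldest first, so...
        awk -F';' '{ seen[$2] = $0 }        # ...later timestamps overwrite earlier
             END  { for (k in seen) print seen[k] }' |
        sort -t';' -k1,1n                   # awk emits in arbitrary order; re-sort
}
```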
Looking back at this, I reckon the current Bash history implementation surely can't be ideal.
Wow, cool! How does this handle the case of multiple shells writing to the history at the same time? I have yet to find a safe way to preserve all that.
Looking at the script, it doesn't write any history itself, it just loads whatever is in the existing .bash_history into its db when `hist import` is invoked (when a new shell is started if you add it to your .profile like the readme recommends.)
So this does not do anything to manage different sessions clobbering each others' histories. The solution posted by zootboy will work together with this script however.
FWIW, my understanding of sqlite is that it allows atomic operations against the same DB, even if there are different clients... so I would guess at worst the clobbering is minimal. I could be wrong, though.
I really like how the search feature shows the frequency of each entry. I've thought before it would be cool to see the most used commands of some of the more experienced folks I work with, though ideally it would be slightly more generic than exact commands - i.e. Different arguments would count for the same command.
Backups on a different drive. Having an SQLite database in your home directory isn't really going to help you there (I mean: if only parts of the filesystem are corrupt and you have duplicate entries, your chances of finding a working one are better - but it's hard to call that a solution).
I like this. It looks like a much better fleshed out version of a one-off python script I use to search bash history - it even coincidentally has the same name so it'll fit right into the muscle memory :D
I love this idea. It's not important for production / env that are managed through tooling, but the "randomish" nature of losing history on dev/stage is one of those, admittedly minor, annoyances that this would solve.
Some sites require the periodic deletion of .bash_history and .history files as a security policy, because they're quite likely to contain passwords and other secrets in them.
Sounds like something that should be optional in the shell itself: scrub commands before writing them to history.
Naive non-programmer's version: use a hash table to censor all substrings of a given length range that match against it ... sounds like it might be expensive (in compute time), and you also have to keep a list of those hashes around.
Bet it's been wontfix-ed in at least one major shell?
See the HISTCONTROL environment variable in bash (and maybe something similar in other shells):
HISTCONTROL
A colon-separated list of values controlling how commands are
saved on the history list. If the list of values includes
"ignorespace", lines which begin with a space character are not
saved in the history list.
[...]
This is convenient because any time you don't want a command saved in history, just start it with a leading space.
Also look at HISTIGNORE for finer grained control.
I wish there were a tool to share .bash_history across an organization. This would make onboarding so much easier. It would also help to disseminate shell tricks.
I think a lot of the problems linked to bash history could be solved elegantly by the use of a daemon to exchange information between concurrently opened shells.
On a side note, how do I tell bash to commit to history immediately? I often close iTerm tabs with Cmd-W, which seems to make bash forget to write history.
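For that side note: the usual answer is to append on every prompt rather than at shell exit, so a closed tab loses at most the command still being typed. A minimal ~/.bashrc sketch:

```shell
shopt -s histappend                 # append to $HISTFILE instead of overwriting
# Write each command out as soon as the next prompt is drawn:
PROMPT_COMMAND="history -a${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
```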
How can we store history of many users [with screen sessions] on another server - as a simple history keeping and monitoring thing? I looked into auditd but did not like the logging format (maybe there is just a nice web front-end missing that interprets the details) - a simple history would be enough.
Next todo was to look if we could send the history to some syslog server - do you have any experiences with a similar solution?
I use my shell history to pickup past command invocations, but at some point, you have to clean things up and create a script. The script could be for repetitive tasks or help tame complicated command line syntax. The nmap, curl and groff utilities are a few of my favorite examples of commands that have too many options.
My bash history gets deleted whenever I inadvertently type my password and hit enter at the wrong prompt. Type sudo 30 times a day, it's gonna happen. Yes, I know my history has permissions protection, but still, it is plain text.
I don't want to have to look for all the places it gets copied to with these solutions.
As a pen tester who frequently finds passwords and other sensitive data in shell history files[1], I fully support this effort to make my job even easier :).
[1] Either inline in mysql-type commands, or when users accidentally paste/type their password in when the system isn't expecting it.
I would understand something like "fish is not for me" (since fish is not a POSIX shell), but zsh is pretty similar to bash at stock, and can be configured to be massively better.
I actually keep a bash history per directory. It really helps when I reopen a project folder after a few weeks or months; it lets me know generally what I was doing.
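One way to sketch that (all names here are mine, not the parent's actual setup): keep one history file per directory under a dedicated root, and swap HISTFILE inside a cd wrapper.

```shell
# Per-directory bash history: one file per directory under $dirhist_root.
dirhist_root=${DIRHIST_ROOT:-$HOME/.dirhist}

cd () {
    builtin cd "$@" || return
    history -a 2>/dev/null                 # flush the previous directory's history
    local slug=${PWD//\//%}                # encode the path as a flat filename
    mkdir -p "$dirhist_root"
    HISTFILE=$dirhist_root/$slug
    history -c 2>/dev/null                 # drop the old in-memory list...
    [ -f "$HISTFILE" ] && history -r "$HISTFILE"   # ...and load this directory's
    return 0
}
```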
[1] https://github.com/larkery/zsh/blob/master/sqlite-history.zs... [2] https://github.com/larkery/zsh/blob/master/self-insert-overr... [3] https://github.com/larkery/zsh/blob/master/histdb-merge.zsh