The point is that this isn't particularly useful except for seeing a raw diff of the files. The example shows some interesting data points about fires changing periodically, but there's basically nothing you can do with that information unless you put it in a real database.
Sure - that's what I did with my PG&E outages project (https://simonwillison.net/2019/Oct/10/pge-outages/). I wrote a Python script that iterated through the git commits and used them to create a SQLite database so I could run queries.
Essentially I was using the commit log as the point of truth for the data, and building a database as an ephemeral asset derived from that data.
What's different here is what you treat as the point of truth.
If the point of truth is the git repository and its history, then the SQLite database that you build from it is essentially a fancy caching layer - much as you might populate a memcached or Redis instance or build an Elasticsearch index.
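A minimal sketch of that commit-replay approach. This is not the actual PG&E script - the file name, the JSON record shape, and the table schema here are all invented for illustration; it just shows the general pattern of walking the commit log and loading each revision of a tracked file into SQLite:

```python
import json
import sqlite3
import subprocess

def build_db_from_history(repo_dir, tracked_file, db_path):
    """Replay every commit that touched tracked_file into a SQLite table.

    Assumes (hypothetically) that each revision of tracked_file is a JSON
    list of records with "id" and "status" keys.
    """
    # Oldest-first list of commits that touched the file, with commit dates
    log = subprocess.run(
        ["git", "log", "--reverse", "--format=%H %cI", "--", tracked_file],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS snapshots "
        "(commit_hash TEXT, committed_at TEXT, record_id TEXT, status TEXT)"
    )
    for line in log:
        sha, committed_at = line.split(" ", 1)
        # Read the file's contents as of this commit
        blob = subprocess.run(
            ["git", "show", f"{sha}:{tracked_file}"],
            cwd=repo_dir, capture_output=True, text=True, check=True,
        ).stdout
        for record in json.loads(blob):
            conn.execute(
                "INSERT INTO snapshots VALUES (?, ?, ?, ?)",
                (sha, committed_at, record["id"], record["status"]),
            )
    conn.commit()
    return conn
```

Since the database is derived entirely from the git history, you can throw it away and rebuild it at any time - which is what makes it a cache rather than a point of truth.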
Not sure if databases really meet #3.
It may be hard for a database to beat the simplicity of Git when all you want is to add file revisions and look at diffs.
But hey, if there's a different kind of database that you think would be better here, I'd be fascinated to see it. Just make sure that it can:
1. handle major structural changes to the files being tracked (e.g. JSON schema revisions)
2. store and retrieve these changes as efficiently as git
3. do the above with as little work required from the user as Git demands
Good luck!