I have a graph from "Expert Political Judgment" that I've kept on a cork board for over a decade. It's from page 55 in my edition. It charts "Objective Frequency" vs. "Subjective Probability" and has three curves: Experts (people in government, paid to make political assessments), Dilettantes (people who are well read, following the NYT, WSJ, and the like), and College Undergrads. The Expert and Dilettante lines are more or less on top of each other. The undergrads are observably worse, much farther from the "Perfect Calibration" line, the 45-degree line where subjective probability equals objective frequency. So it's not that there's no difference in people's ability to predict political events; it's that so-called "experts" are no better than people who simply follow current events closely. That was my main takeaway from the book: nobody can predict political events very well, but some groups are measurably worse than others.

Tetlock also has a brief section on page 186, "Misunderstanding what game is being played," that somewhat mirrors your argument: one expert tells him that making predictions is all about getting your sound bite out, not about being correct. In that game, stronger, incorrect predictions might be advantageous because they can change the narrative.
Right, and then he followed this up with "Superforecasting," which is all about the people who are on that 45-degree line. They exist! They just aren't popular.