Good summary on the “dark side” of Event Sourcing, but I have my two cents to add.
As a person who once heard a choked “oops” after a wrong
UPDATE ran in production, followed by hours of restoring the latest backup and rolling forward the transaction logs while the whole system was out of operation, I am very sceptical about “Usually there is a support team in charge of this”. Shifting responsibility to some mythical support people who perform open-heart surgery on production databases every single day and never make a mistake doesn’t bring me any comfort. I’d rather have a full history of changes with the ability to roll back than that. We stopped complaining about the irreversibility of bookkeeping transactions long ago, and I see no reason why we shouldn’t accept the same for mission-critical software.
Another thing that strikes me is the question “How did we come to take ACID for granted?”. Well, we didn’t. I worked with Lotus Notes from 1996 onwards, and it was a document-based, fully distributed, occasionally replicated, eventually consistent system. We learned how to deal with that, and it worked very well. Of course, the tech of those days didn’t give us the ability to deliver a quality UI, and IBM didn’t spend enough on improving that part, which is why the technology is considered “dated” now, but that is not my point. ACID was not always a given, and RDBMSes were not dominating the world just two or three decades ago. The fact that we _forgot_ how to deal with eventual consistency doesn’t change that.
Concerning making decisions based on stale data: the idea of CQRS was, and is, that all decisions are made and all invariants are checked on the write side, which is guaranteed to be fully consistent. Yes, there is a chance that the _user_ sees some stale data, but it is up to _us_ as developers to understand the business needs and make it work properly. It is perfectly possible to show fully consistent data on some screens by using synchronous projections and in-memory read models.
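To illustrate the point, here is a minimal sketch (all names hypothetical, not from any particular framework): the aggregate rebuilds its state from the full event history and enforces the invariant on the write side, while an in-memory projection is updated synchronously in the same command handler, so a screen reading from it never sees stale data.

```python
# Hypothetical minimal event-sourced account: write-side invariant checks
# plus a synchronous in-memory read model, as described above.

class InsufficientFunds(Exception):
    pass

class AccountAggregate:
    """Write side: rebuilds current state from the event history."""
    def __init__(self, events):
        self.balance = 0
        for event in events:
            self._apply(event)

    def _apply(self, event):
        kind, amount = event
        self.balance += amount if kind == "deposited" else -amount

    def withdraw(self, amount):
        # The invariant is checked here, on the fully consistent write side.
        if amount > self.balance:
            raise InsufficientFunds(f"balance is only {self.balance}")
        return ("withdrew", amount)

class BalanceProjection:
    """Read side: an in-memory read model, updated synchronously."""
    def __init__(self):
        self.balances = {}

    def apply(self, account_id, event):
        kind, amount = event
        delta = amount if kind == "deposited" else -amount
        self.balances[account_id] = self.balances.get(account_id, 0) + delta

class CommandHandler:
    def __init__(self):
        self.store = {}  # account_id -> append-only list of events
        self.projection = BalanceProjection()

    def handle(self, account_id, command, amount):
        events = self.store.setdefault(account_id, [])
        aggregate = AccountAggregate(events)
        if command == "deposit":
            new_event = ("deposited", amount)
        else:
            new_event = aggregate.withdraw(amount)
        events.append(new_event)                      # full history kept
        self.projection.apply(account_id, new_event)  # synchronous update
```

Because the projection is fed in the same call that appends the event, a query against `projection.balances` right after `handle` returns is already consistent; an asynchronous projection would instead accept a window of staleness in exchange for write throughput.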