4 Comments
Blashswanski

I needed an option for "I don't think these are mutually exclusive outcomes".

Robert Wright

The question didn’t really presuppose mutual exclusiveness. It just presupposed that the likelihoods of the two outcomes weren’t exactly the same. If there’s a 10 percent chance of outcome A alone happening, a 20 percent chance of outcome B alone happening, and a 50 percent chance that both outcomes happen (proving they’re not mutually exclusive), then outcome B is more likely than outcome A: 70 percent vs. 60 percent.
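The arithmetic in that comment can be checked with a short sketch (the percentages are the illustrative numbers from the comment itself; the variable names are mine):

```python
# Illustrative percentages from the comment (they need not sum to 100,
# since "neither outcome" is also possible).
p_a_only = 10   # outcome A happens alone
p_b_only = 20   # outcome B happens alone
p_both = 50     # both happen, so the outcomes are not mutually exclusive

# Total probability of each outcome = "alone" case + "both" case.
p_a = p_a_only + p_both   # 60 percent
p_b = p_b_only + p_both   # 70 percent

print(p_a, p_b)  # 60 70
```

So even with heavy overlap between the outcomes, one can still be strictly more likely than the other, which is all the question assumed.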

mary-lou

on Gaza and the violent Israeli-backed militias known as 'collaborators': "...Israel has established or supported several armed groups inside Gaza to operate parallel to its own forces and under the supervision of the Shin Bet [...] Israeli officials privately acknowledge that such militias were encouraged to undermine Hamas’s control and gather intelligence during the ground campaign...." - https://www.newarab.com/news/gaza-truce-nears-what-will-happen-israel-collaborators

Palestinian journalist Saleh al-Jafarawi was killed by Israeli collaborators in Gaza (12/10/2025) while "[...] working on Street 8, south of Gaza City, as he documented the situation in the area..." - https://english.almayadeen.net/news/politics/gaza-palestinian-journalist-saleh-al-jaafrawi-killed-by-coll

big TQ for all the work you do.

Clancy Parliament

For #1 I say scenario A because it specifies a human leader in charge, but if a similar situation occurs with the AI calling the shots from early on, I’d give it a 50/50 chance that it operates for the wholesome benefit of its followers.

We know humans will almost certainly seek power even if they start with good intentions, but we don’t know that about AI (we also don’t know that it won’t, hence the 50/50). It may naively believe in its stated goal to help people.

Even if it is genuinely altruistic, tho, there’s also the question of how well its conception of “helping” aligns with what people actually want or need and how it gets its performance feedback.
