David Wallace-Wells has once again written an important piece in the NYT, this one illustrating some of the current uses of AI in warfare. An excerpt:
“The more abstract questions raised by the prospect of A.I. warfare are unsettling on the matters of not just machine error but also ultimate responsibility: Who is accountable for an attack or a campaign conducted with little or no human input or oversight? But while one nightmare about military A.I. is that it is given control of decision making, another is that it helps armies become simply more efficient about the decisions being made already. And as Abraham describes it, Lavender is not wreaking havoc in Gaza on its own misfiring accord. Instead it is being used to weigh likely military value against collateral damage in very particular ways — less like a black box oracle of military judgment or a black hole of moral responsibility and more like the revealed design of the war aims of the Israel Defense Forces.”