AI weaponry


As a consequence, even when there are clear instances of AI going wrong, they are unlikely to be held responsible. 

This lack of accountability creates a hazard, as it disincentivises learning and corrective action.

The “cosying up” of tech executives with US president Donald Trump only exacerbates the problem as it further dilutes accountability.

Society may be willing to accept such mistakes, as it has with civilian casualties caused by human-directed drone strikes. This tendency is known as the banality of extremes: humans normalise even extreme instances of evil as a cognitive mechanism for coping.

The alienness of AI reasoning may simply provide more cover for doing so.

Rather than joining the race to develop AI weaponry, an alternative approach would be to work towards a comprehensive ban on its development and use.

