MIT makes breakthrough in morality-proofing artificial intelligence

If the Halloween month has you feeling a bit puzzled and uncertain, maybe it’s the unsettling proposition that life-and-death decisions are increasingly being placed in the hands of artificial intelligence. No, this isn’t a reference to doomsday military drones developed in top-secret government labs, but to the far more pedestrian prospect of self-driving cars and robotic surgeons. Amid the uproar about potential job losses from such automation, it’s sometimes forgotten that these artificial agents will be deciding not merely who receives a paycheck, but who lives and who dies.

Fortunately for us, these thorny ethical questions have not been lost on the engineers at Ford, Tesla, and Mercedes, who increasingly wrestle with ethics as much as with efficiency and speed. For instance, should a self-driving car swerve wildly to avoid two toddlers chasing a ball into an intersection, thereby endangering the driver and passengers, or continue on a collision course with the children? Questions like these are not easy even for humans, and the difficulty is compounded when they involve artificial neural networks.


Towards this end, researchers at MIT are investigating ways of making artificial neural networks more transparent in their decision-making. As they stand now, artificial neural networks are a wonderful tool for discerning patterns and making predictions: their beauty lies in the ability to sift through heaps of data and find structure within the noise. The drawback is that they are not terribly transparent about how they arrive at any given decision.
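The article doesn’t spell out how the MIT team exposes a network’s reasoning, so the sketch below illustrates the transparency problem with a generic technique rather than their method: nudge each input feature of a toy network and measure how much the output shifts, a crude form of saliency analysis. The architecture, weights, and input are invented purely for illustration.

```python
# A minimal sketch (not MIT's method): probing which input features a toy
# neural network's "decision" depends on, by perturbing each feature.
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network with fixed, made-up weights:
# 4 input features -> 8 hidden units (tanh) -> 1 output (sigmoid).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def predict(x):
    h = np.tanh(x @ W1)              # hidden activations
    z = (h @ W2)[0]                  # scalar pre-activation
    return 1.0 / (1.0 + np.exp(-z))  # "decision" score in (0, 1)

def saliency(x, eps=1e-4):
    """Nudge each input feature and measure how much the output moves."""
    base = predict(x)
    grads = np.zeros_like(x)
    for i in range(x.size):
        bumped = x.copy()
        bumped[i] += eps
        grads[i] = (predict(bumped) - base) / eps
    return grads

x = rng.normal(size=4)               # one hypothetical input (e.g. sensor readings)
print("prediction:", predict(x))
print("per-feature influence:", saliency(x))
```

Features whose perturbation moves the output the most are the ones the network is leaning on; producing that kind of explanation automatically and faithfully, for networks vastly larger than this toy, is the sort of problem transparency research is trying to solve.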

