You shouldn't anthropomorphize computers: They don't like it.
That joke is at least as old as Deep Blue's 1997 victory over then world chess champion Garry Kasparov, but even with the great strides made in the field of artificial intelligence over that time, we're still not much closer to having to worry about computers' feelings.
Computers can analyze the sentiments we express in social media, and project expressions onto the faces of robots to make us believe they are happy or angry, but no one seriously believes, yet, that they "have" feelings, that they can actually experience them.
Other areas of AI, on the other hand, have seen some impressive advances in both hardware and software in just the last 12 months.
Deep Blue was a world-class chess opponent -- and also one that didn't gloat when it won, or go off in a huff if it lost.
Until this year, though, computers were no match for a human at another board game, Go. That all changed in March, when AlphaGo, developed by Google subsidiary DeepMind, beat Lee Sedol, one of the world's strongest Go players, 4-1 in a five-game match.
AlphaGo's secret weapon was a technique called reinforcement learning, where a program figures out for itself which actions bring it closer to its goal, and reinforces those behaviors, without the need to be taught by a person which steps are correct. That meant that it could play repeatedly against itself and gradually learn which strategies fared better.
Reinforcement learning techniques have been around for decades, too, but it's only recently that computers have had sufficient processing power (to evaluate vast numbers of candidate moves) and memory (to remember which steps led to the goal) to play a high-level game of Go at a competitive speed.
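The idea described above can be sketched in a few lines of code. This is a minimal, illustrative example of tabular Q-learning, one of the classic reinforcement learning techniques, on a toy five-state corridor; the environment, state count, and all parameter values here are invented for illustration and have nothing to do with AlphaGo's actual (far more sophisticated) architecture.

```python
import random

# A toy corridor: states 0..4, the agent starts at 0 and is rewarded
# only on reaching state 4. All names and numbers are illustrative.
N_STATES = 5
ACTIONS = [-1, +1]                 # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the learned value of each (state, action) pair
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

random.seed(0)
for _ in range(500):               # repeated episodes of trial and error
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])
        nxt, reward, done = step(state, ACTIONS[a])
        # reinforce: nudge the estimate toward reward + discounted future value
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# The greedy policy it discovers is "always move right" (action index 1),
# learned purely from rewards, with no human-labeled correct moves.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)  # expected: [1, 1, 1, 1]
```

No one ever tells the program which moves are correct; it discovers them by acting, observing rewards, and reinforcing the actions that led to them, which is the same principle, at vastly larger scale, behind AlphaGo's self-play training.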
Better-performing hardware has moved AI forward in other ways, too.
In May, Google revealed its TPU (Tensor Processing Unit), a hardware accelerator for its TensorFlow deep learning framework.