Empathy in the AI Age: How well can we really determine how other entities feel?


Details
Come join us for some informal philosophical discussion!
This piece (https://amandaguinzburg.substack.com/p/diabolus-ex-machina) shows a rather disturbing conversation between its author and ChatGPT, in which ChatGPT repeatedly engages in demonstrably deceptive behavior.
Many of us would label a human who behaved this way a "manipulator" or a "sociopath," but it is unclear whether, or how, such labels can apply to an AI. Accordingly:
* What does it fundamentally mean for an entity to experience emotions, or to have particular personality and character traits? What about feeling and/or displaying empathy?
* Does it make sense to define these qualities purely in terms of externally observable behavior? Should the definitions change depending on the kind of entity in question, e.g. humans, animals, or AIs?
* How might our answers to the above questions be shaped by our individual personalities and life experiences, as well as the current scientific/philosophical zeitgeist?