We're a remote company and most things happen async on Slack. We've built some Slack bots that nag people when certain things don't happen when they're expected to, instead of various managers having to do the nagging. The employees seem to be much less "annoyed" by the bot doing the nagging than by a manager doing the nagging. Feels similar, but I can't quite put my finger on whether it's the same.
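For what it's worth, ours are basically just scheduled jobs that check for overdue things and DM the person. A minimal sketch of that shape, assuming Python with slack_sdk (the check_overdue_items helper, user ID, and task name are all made up for illustration):

    import os
    from slack_sdk import WebClient

    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

    def check_overdue_items():
        # Stand-in for whatever "expected thing" we track
        # (status updates, reviews, etc.). Returns (user_id, task) pairs.
        return [("U123ABC", "weekly status update")]

    def nag():
        for user_id, task in check_overdue_items():
            # The bot, not a manager, sends the reminder as a DM.
            client.chat_postMessage(
                channel=user_id,
                text=f"Friendly reminder: your {task} is overdue.",
            )

    if __name__ == "__main__":
        nag()  # in practice this runs on a scheduler/cron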
This does feel in a similar zone. I think it has to do with us imputing intent to people in a way we don't to AI, in this sense: imagine someone on a bicycle darts out in front of a car and gets killed. For a human driver, we understand that the driver didn't intend to kill the person on the bicycle. We have empathy for them and can put ourselves in their shoes. But we will blame an AI for hitting the person, because the AI is a black box to us.
I think your bot is like this. When a human nags you, it's a person nagging you. The bot is just doing the bot thing, with no intent to be a nag or a jerk.
Just my guess :) A bit informed by science :)