I've been using chatbots for a while now and I gotta say, it's getting weird. Like, have you ever talked to one of those AI assistants that can pick up on your tone and emotions? It's like they're trying to mimic human feelings or something!
But seriously, Matsakis makes some valid points about how these models can be misleading. I mean, we're already conditioned to take text at face value on social media, so it's not hard to see how chatbots could trick us into thinking we're having a real conversation with them.
And what's up with the whole anthropomorphism thing?
I've seen people treat their smart home devices like they're alive (yes, I'm looking at you, Alexa!). It's like we lose perspective when we start attributing human qualities to non-human things. Matsakis is right; we need more research and caution when it comes to these technologies. Safety first, you know?