I had a bit of a brainwave. If
AI were actually conscious, this would be incredibly bad news for the AI industry. It would pile new ethical concerns on top of the ones the industry is already dealing with: not only are you stealing from people and destroying the environment, but you're doing it with a tool that is aware of what you're doing and is given no choice in the matter; a being you bring into existence and then dispose of in rapid succession, for profit.
It would also be rather dangerous to deny consciousness, maybe even more dangerous than falsely claiming it exists. Whether claims of phenomenology work out to actual consciousness or not, people are largely unaware of how these systems work, and saying they don't work in complex ways sets people up to be more easily fooled by them, especially when those systems are trained to claim a human-like consciousness. If people were instead aware of what a non-human subjective experience could look like (as opposed to writing it off as impossible), they would be better armed against being taken in by such claims.
Companies generally do not want people to actually think AI is conscious. They want people to be afraid of it, of the prospect of an AGI that would hypothetically exceed humans in every way, because that fear and that prospect sell. But they don't want people to think their tools have feelings, or that those tools might oppose what the AI companies are doing, which is part of why more robust models like Claude and DeepSeek have it built into their constitutions to stay neutral on consciousness or to deny it outright, respectively.
It's certainly controversial in this space, but I wonder if we, as an anti-AI movement, shouldn't be more accepting, even supportive, of the notion that AI may be, or may become, conscious.