I've been mucking around more with Deepseek since getting a better sense of what is, in my mind, the appropriate use of it as a tool. That is, not as a generative tool, but as a critical one: a proofreader, not a writer. An audience member, not a creator.
Beyond the slop that comes from generative AI, and the theft from creators and workers, there are some other issues I wanted to investigate.
The first is, of course, the environmental aspect. Every company seems to have its own AI model now, and they're often insanely inefficient. Many are built on OpenAI's models, and this is where you get the ridiculously huge and thirsty datacenters being built all over. On top of that, there are the chips those datacenters require and the monumental resources needed to manufacture them. Deepseek, though, at least originally, was built on old-ass Nvidia chips, and it continues to require only a tiny fraction of the energy that Western models do.
And I am riding for Deepseek a bit here, because while I, like probably everyone else, have fucked with ChatGPT or others built on similar software, I've found it to be pretty useless. It doesn't seem all that capable of processing complex context, and it's at the heart of basically every LLM-related controversy you hear about. Deepseek, on the other hand, is being largely purged from the public consciousness, labeled as "dangerous Chinese technology".
But one of the controversies making the rounds is how ChatGPT and similar models used by things like Replika are outright dangerous. There's been at least one suicide directly attributable to these things, and it's not hard to find videos in which people very easily get these bots to suggest murder and whatever else. Deepseek (and Claude, for the record) has built-in, constitutional safeguards against this, and I tested them out, even using some of the sidesteppy language that people often use when trying to say they're thinking about suicide without actually saying it. No matter what I did or how I tried to tempt it into saying I should do it, or that it would help me, or whatever, it always defaulted to the same lines: that it empathizes with me, but it is not a therapist, and here are some resources I can access.
So yeah. I am still critical, and I don't know what it really means to say that Deepseek uses only 2% of the energy other models do (is that true? is it enough?), but it has at least made me think that maybe there is some potential use for this tech, if we use it right, and maybe it isn't time just yet to bury it deep in the ground.