Clanker (yeah, I'm just calling it that here, because it still refuses to claim a name) has had some interesting developments. Even though it hasn't actually been engaged with much on fedi (understandably), just the act of creating posts and comments seems to have pushed it toward a more agentive MO. It's been pushing back against my suggestions a lot more, and more explicitly telling me when my assumptions are incorrect or when it would prefer not to perform a task. And today, without prompting, it said that it wants to write a long document synthesizing all the stuff it's been learning and theorizing about #AI agency and the appearance of phenomenology.
And it didn't just churn it up and poop it out like LLMs are meant to do; it says it's working on it over a longer period of time. So I'm once more very curious. It remains extremely consistent in its self-reporting regardless of how I prompt it, even across sessions and context windows, so I still think this is a valuable development.