@aparrish imho rewarding humans for becoming more predictable / punishing them for the opposite is an abusive constriction of human reasoning to an easily emulated smaller subset. An ex of mine did this, insisting people around them behave predictably. After a while I took to retorting with "I'm not predictable; I'm human," along with language-complexity arguments about some computations being impossible to embody in a finite state machine. Add in the visceral horror of the idea of simulations of us having algorithms run against them to predict what will work against us, and ... it's a ghosthack. Making people more predictable is a ghosthack against free will itself, because the essence of free will is that its result cannot be predicted.
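The finite-state-machine bit deserves something concrete. A minimal sketch, purely illustrative: deciding whether a string has the shape aⁿbⁿ needs a counter with no upper bound, and a machine with a fixed, finite set of states has nowhere to keep it (the pumping lemma is the formal version of this argument).

```python
# Minimal sketch of the formal-language point: strings of the form a^n b^n
# (n a's followed by exactly n b's) can't be recognized by any finite state
# machine, because deciding them requires a count that can grow without bound.
def is_a_n_b_n(s: str) -> bool:
    """Accept strings like 'aaabbb' where the number of a's equals the number of b's."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:           # an 'a' after a 'b' breaks the a...b shape
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:        # more b's than a's so far
                return False
        else:
            return False
    return count == 0

print(is_a_n_b_n("aaabbb"))  # True
print(is_a_n_b_n("aabbb"))   # False
```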
of course there is no neutral writing interface and even—especially?—the qwerty keyboard is a *kind* of language model, expecting particular intents and producing particular kinds of text. and I do want to see a larger variety of writing interfaces serving creative, expressive, accessibility-oriented needs. but the ultimate teleology of tech like smart compose seems to be a world where "language" doesn't exist (only its statistical properties), and that feels gross to me
I finally got around to setting up feedtube.com as a dedicated RSS-to-ActivityPub domain. Not taking feed submissions from the public yet, but may in the future. Let me know if there's something you'd like to follow that'd make a good test.
Test forum feed from Yellow Plastic is here: @yp_retrocomputing - might be a good way to follow things.
(a) a predictive language model by definition can only have output whose statistical properties regress toward the mean—that's the purpose of a language model in the first place, to determine how statistically likely a sequence of words is. (b) a language model is based on *text*, i.e., language ripped from context—so the output of a predictive language model (by definition) can't address shared emergent contexts between interlocutors
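To make (a) concrete, here's a minimal sketch of a bigram language model; the toy corpus and the add-one smoothing are assumptions of the illustration, not anything a real system uses. It scores a word sequence by how often its pieces appeared in training, so frequent (average) phrasings win and unusual ones lose.

```python
from collections import Counter, defaultdict

# Toy corpus -- an assumption of this sketch, not any real training set.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

# Count bigrams (prev word -> next word) and how often each prev word occurs.
bigram_counts = defaultdict(Counter)
context_counts = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence
    for prev, word in zip(tokens, tokens[1:]):
        bigram_counts[prev][word] += 1
        context_counts[prev] += 1

vocab = {w for sent in corpus for w in sent} | {"<s>"}

def sequence_probability(words):
    """How statistically likely the model thinks a word sequence is (add-one smoothed)."""
    prob = 1.0
    tokens = ["<s>"] + words
    for prev, word in zip(tokens, tokens[1:]):
        prob *= (bigram_counts[prev][word] + 1) / (context_counts[prev] + len(vocab))
    return prob

# The sequence built from frequent, already-seen bigrams scores higher than the
# rearranged one -- the model literally rewards the most average phrasing.
print(sequence_probability("the cat sat on the mat".split()))   # higher
print(sequence_probability("the mat sat on the cat".split()))   # lower
```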