Not all AI-like software is evil garbage, not all software that learns about user behaviour is evil garbage, not all recommendation and behavioural monitoring software and services are evil garbage.
But all capitalist-subsidised, for-profit AI-like recommendation software is always evil garbage without exception, 100% of the time.
If you're going to claim the opposite, you will just have to show one single example of successful capitalist software or a service of this kind that isn't.
But you can't, and you won't.
@h To paraphrase Melvin Kranzberg: technology is neither good nor bad; nor is it neutral.
Key criteria: who owns and controls it?
It’s “smart” tech, so who’s getting smarter about whom? If it’s decentralised – the AI, data gathering, etc., happening on our devices and accessible only to us – then we (individuals) are getting smarter about ourselves. That’s great. Compatible with democracy.
In surveillance capitalism, corporations get smarter about us. That’s feudal.
@aral Without getting too political, and tooting something that fits in a toot, I agree for the most part.
We still have huge challenges ahead, because granular, decentralised power also means our systems need to get smarter about how we help people take back control of their own data. That implies at least some new level of competency will be needed, along with new kinds of systems that help the user always remain in charge, delegating only as much responsibility as she wants.
@aral Firefox as packaged by Mozilla has some flaws, as we've been discussing earlier, but one good thing it has is the new drop-down panel that allows you to see all the permissions you have given to any given website.
It's a very coarse set of permissions and it will look primitive from the perspective of an engineer circa 2025. Most people still don't even understand the current primitive state of these technologies very well.
@aral Where we're going, with more decentralisation and the atomisation of more individual responsibility, things are much more complex.
Hundreds of new permissions and levels of access may be required. Where competency does not exist, and is unlikely to emerge, ethical orgs will have to deliver new solutions that help people deal with the increased complexity inherent in more freedom.
People will either need to grow up as users, or we'll have to give them training wheels.
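To make the idea of granular, user-controlled permissions concrete, here is a minimal sketch of what a per-site grant table with explicit delegation levels might look like. Everything here is hypothetical — the type names, the delegation levels, and the `grantsFor` helper are illustrative assumptions, not any browser's actual API.

```typescript
// Hypothetical sketch: a per-site permission grant that records an explicit
// delegation level, so the user decides how much responsibility to hand over.
type DelegationLevel = "none" | "ask-each-time" | "session" | "standing";

interface PermissionGrant {
  origin: string;         // the site the grant applies to
  capability: string;     // e.g. "camera", "location", "contacts.read"
  level: DelegationLevel; // how much control the user delegates
}

// Look up every grant the user has made to a given origin — the kind of
// per-site view a permissions panel could present.
function grantsFor(origin: string, grants: PermissionGrant[]): PermissionGrant[] {
  return grants.filter(g => g.origin === origin);
}

const grants: PermissionGrant[] = [
  { origin: "https://example.org", capability: "camera", level: "ask-each-time" },
  { origin: "https://example.org", capability: "location", level: "session" },
  { origin: "https://other.example", capability: "camera", level: "standing" },
];

console.log(grantsFor("https://example.org", grants).length); // 2
```

The point of the `level` field is that delegation is a spectrum, not a toggle: a user could allow a capability once, for a session, or as a standing grant, and a panel built on such a table would let her see and revise those choices per site.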
@aral Since the former is not happening, we need to find better ways of working on the latter, in democratic ways that involve the user from the ground up. But there's no escaping responsibility going forward; the choices are clear: ideally, understand as much as you can (no one will be able to understand everything, not even programmers, after AI takes over) and delegate as little as you can while still retaining control. Or be lazy and be toast.