@katco Yeah, it is. I think it’s a combination of the protocol and the multi-tenant aspect that creates the bottleneck but I do see your point. For the single-tenant Small Web stuff I’m working on, I was running experiments with WebSockets – which are so lightweight – and I could run hundreds of thousands of them on a tiny instance (think: if we had constant connections between instances of one). Something like that would likely provide a very different experience, even with ActivityPub.
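(To give a feel for why persistent WebSocket connections are so cheap, here is a minimal sketch of an echo server in Go using the gorilla/websocket package – a hypothetical stand-in, not the actual Small Web code. Each connection costs roughly one goroutine plus a socket, which is why counts in the hundreds of thousands on a small instance are plausible.)

```go
// Minimal WebSocket echo server: one goroutine per connection, so the
// per-connection cost is roughly a lightweight goroutine stack plus a
// socket. Names and port are illustrative only.
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	// Accept any origin in this sketch; a real server would check it.
	CheckOrigin: func(r *http.Request) bool { return true },
}

func handle(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()

	// Echo loop: read a message and write it straight back.
	for {
		msgType, msg, err := conn.ReadMessage()
		if err != nil {
			return // client went away
		}
		if err := conn.WriteMessage(msgType, msg); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/ws", handle)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```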
It's a plain truism in performance engineering that if you free up one bottleneck, say the overhead on the queue itself, then the bottleneck will move, and it may well mean that waiting on read or write locks turns into mutual deadlock.
Or, more prosaically, running out of connections or handles to a dependent service. All of those are resolvable with effort.
But it could be considerable effort.
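(As an illustration of the bottleneck moving to a limited resource, here is a rough Go sketch – all names, limits and timings are hypothetical. The queue itself is now cheap, but every worker needs a handle to a dependent service, and there are only a few of those, so workers end up blocking on the handle pool instead of the queue.)

```go
// Sketch of the "bottleneck moves" point: the queue is no longer the
// constraint, but throughput is capped by a small pool of handles to a
// dependent service, so workers queue up on the semaphore instead.
package main

import (
	"fmt"
	"sync"
	"time"
)

const maxHandles = 4 // hypothetical cap on connections to the dependent service

func main() {
	jobs := make(chan int, 1024)                // the now-cheap queue
	handles := make(chan struct{}, maxHandles)  // semaphore guarding service handles

	var wg sync.WaitGroup
	for w := 0; w < 32; w++ { // plenty of workers; they contend on handles, not the queue
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				handles <- struct{}{}             // acquire a handle (blocks when exhausted)
				time.Sleep(10 * time.Millisecond) // stand-in for the dependent-service call
				<-handles                         // release the handle
			}
		}()
	}

	start := time.Now()
	for i := 0; i < 400; i++ {
		jobs <- i // producing into the queue is no longer the slow part
	}
	close(jobs)
	wg.Wait()
	fmt.Printf("drained 400 jobs in %v (throughput capped by %d handles)\n",
		time.Since(start), maxHandles)
}
```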
@simon_lucy @aral @katco
Do you mean because of Amdahl's law or that the act of freeing up the bottleneck will introduce slowdowns in other areas?
Oh, both. Increases in performance are limited by bandwidth, but also by the availability of the rest of the system. If the overhead of the queue becomes lighter, it improves the production of events; if the consumption tries to match that, then the capacity to sink those events has to increase, and there are bandwidth limits on that. It has to be balanced.
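(A small sketch of that balance, again in hypothetical Go: a producer feeding a bounded queue can only go as fast as the sink drains it. Once the buffer fills, sends block, so the effective end-to-end rate is the sink's rate, no matter how cheap producing events has become.)

```go
// Sketch of production/consumption balance: a fast producer, a bounded
// queue, and a slower sink. Once the buffer fills, the producer's sends
// block, so throughput is set by the sink, not by production speed.
package main

import (
	"fmt"
	"time"
)

func main() {
	events := make(chan int, 100) // bounded queue between producer and sink
	done := make(chan struct{})

	// Sink: can only absorb ~200 events/second (5ms each, hypothetical).
	go func() {
		for range events {
			time.Sleep(5 * time.Millisecond)
		}
		close(done)
	}()

	// Producer: could emit far faster, but blocks once the buffer is full.
	start := time.Now()
	for i := 0; i < 1000; i++ {
		events <- i
	}
	close(events)
	<-done

	rate := float64(1000) / time.Since(start).Seconds()
	fmt.Printf("effective throughput ~ %.0f events/sec, i.e. the sink's rate\n", rate)
}
```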