I disagree that the Web is a good tool for that. It has its place, but we really need to push for libre native apps following open standards.
And browsers should assist with that by pointing people towards the apps they need to use those standards securely. I've implemented that for Odysseus.
@alcinnz @strypey Agree. But we need untrusted relay nodes to guarantee findability and availability that match the expectations set by centralised systems, if we want to build a bridge from here to there. We can shed those training wheels once there are enough nodes. But without them we won't get adoption.
@aral @strypey These untrusted relay nodes (which, yes, we do need) seem beside the point; I don't see how they relate to the argument we are having. Why can't we distribute native apps for each platform?
> We can just turn to P2P networking and destroy the idea of "servers"
I like this idea politically. But after 20 years of waiting, I've yet to see a single piece of network software that can work entirely #P2P, without supernodes of any kind for bootstrapping, relay, ID mapping etc. Can you think of one? #BitTorrent needs both trackers and search sites to be useful. Even #BlockChains need "miners", which are supernodes, not pure P2P.
@buoyantair @aral @alcinnz
Having said that, I agree that supernodes don't have to be as centralized as servers. The #fediverse takes one step in this direction, by connecting standard servers. The next step might be to disaggregate server functions, with some specializing in authentication, or media storage, or search, and so on. But we can't get rid of the concept of servers entirely without giving every device a persistent IP address (#IPv6), and creating a decentralized replacement for #DNS.
@VeintePesos @buoyantair @aral @alcinnz
@buoyantair I don't know enough about the network topology of #Scuttlebutt to be sure, but from what I've learnt so far, it seems unlikely to be able to work on a large scale without a much greater use of "pubs" and other such supernodes. I don't see that as a problem (see the second post). To me, it just means that server/client and P2P are two ends of a spectrum, rather than a hard dichotomy, and that successful future network tech will involve hybrids of the two.
@alcinnz @aral @VeintePesos
Though to be clear, Aral's logic for developing web apps is sound.
I come to this from feeling some responsibility to understand the code, WebKit, which forms a major part of my native app, the Odysseus web browser. And because of that I've got a strong understanding of just how bloated the "web platform" has become.
A website that is acting as a Matrix client? That's acting as an *app*, and I'm more inclined to enable JS. Of course it can run code, just as I would allow a native app to do.
(My *ideal* would be sandboxed native apps shipped with a standard package manager.)
It's not the same as an application.
When you download an application, you can compare its cryptographic hash with the one your friends got. Usually you don't need to authenticate to download, so the chances that you get a malicious version specifically crafted for you are lower.
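A minimal sketch of that comparison, assuming Python is available; the file name is a placeholder, not a real artifact:

```python
# Compute a downloaded file's SHA-256 so friends can compare digests
# out-of-band. "some-app.AppImage" is a hypothetical file name.
import hashlib

def file_sha256(path: str, chunk_size: int = 65536) -> str:
    """Return the hex SHA-256 digest of the file at `path`."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

print(file_sha256("some-app.AppImage"))
# If your digest differs from your friends', you may have been served
# a different (possibly targeted) build.
```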
And sandboxing is not enough.
2. Check its hash how, exactly? The developer doesn’t even know it.
With a client-side-only JS app:
1. Include a single script file using subresource integrity.
2. Publish the hash of the HTML file for independent verification.
3. Use a browser extension to verify the hash of the HTML.
You’ve verified the app.
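A rough sketch of what steps 1 and 2 could look like, assuming Python and placeholder file names; the integrity value follows the standard SRI format (algorithm name, a dash, then the base64-encoded digest):

```python
# Step 1: compute the subresource-integrity value for the single script.
# Step 2: compute a publishable hash of the HTML for step 3 to check.
# "app.js" and "index.html" are illustrative names only.
import base64
import hashlib

def sri_sha384(path: str) -> str:
    """Return an SRI value like 'sha384-...' for a script file."""
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# Paste into: <script src="app.js" integrity="..." crossorigin="anonymous">
print(sri_sha384("app.js"))

# Publish this digest so anyone can verify the HTML they received:
with open("index.html", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())
```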
Not yet enough to be safe, @aral, but a good starting point: https://bugzilla.mozilla.org/show_bug.cgi?id=1487081#c6
If the application is completely client-side (never makes any network connection), it might be enough.
Otherwise, further restrictions should be in place, and those require changes to the browsers: https://bugzilla.mozilla.org/show_bug.cgi?id=1487081#c12
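To give a sense of what such restrictions could look like with today's tools, here is a hedged sketch using a Content-Security-Policy header that forbids page scripts from making any network connection. It only approximates what the linked comment asks for (CSP can't block exfiltration via navigation, for instance), which is why browser changes are being requested. Host, port, and file names are placeholders:

```python
# Serve a client-side-only app with a CSP that denies all
# script-initiated network access. Everything here is hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

CSP = (
    "default-src 'none'; "  # deny everything not explicitly allowed
    "script-src 'self'; "   # only the app's own script files
    "style-src 'self'; "
    "connect-src 'none'"    # no fetch/XHR/WebSocket at all
)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real server would dispatch on self.path; this sketch just
        # shows the header being attached to every response.
        self.send_response(200)
        self.send_header("Content-Security-Policy", CSP)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<!doctype html><script src='/app.js'></script>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```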
As for the App Store, I'm pretty sure the installer already checks the hash transparently for the user.
But I was thinking about applications that users install manually after downloading them.
The fact that the server that provides the file cannot identify the user (hopefully, if they don't use #GAFAM #cloud services or #Cloudflare's #MitM) is a protection for the downloader: an attacker cannot easily target a specific person or a minority.
Sandboxing, in the way I'm thinking of, can go beyond what browsers normally do. There's no reason you'd have to allow sideloading of random scripts and resources.
@varx I sometimes contact the webmasters / editors of activist and independent media websites to ask if they know about all the scripts from third-party domains that #NoScript has to block for me when I visit their sites. Often they have no idea. I suspect that scripts are often being called by off-the-shelf JS modules built into #WordPress themes and the like.
@Shamar @alcinnz @aral
@alcinnz @aral @strypey The distribution method is something completely unrelated! The Open Web, the Peer Web, or whatever else will all suffer from the exact same thing: people putting malicious logic in content. The only way out is to have logic-less content. For web apps: users need to *install* them (read: authorize them) prior to execution. The same thing happened with macros, and here we go again! Don't conflate "apps" and "pages".