mastodon.ar.al is one of the many independent Mastodon servers you can use to participate in the fediverse.
This is my personal fediverse server.

Administered by: Aral Balkan

A web server has zero (say it with me, “zero”) right to compel any client to reveal information about itself.

Unless we inscribe this as a fundamental and immutable tenet of the web, the web as we know it will cease to exist once some version of Google’s Web Environment Integrity proposal makes it past the corporate lackeys at the W3C.

github.com/RupertBenWiser/Web-

GitHub: RupertBenWiser/Web-Environment-Integrity

(Not to mention that this principle has already been violated via the application of DRM in browsers for streaming media.)

@aral I read someone saying they've re-invented Flash, and I'm laughing because it's true

@aral

and neonazis everywhere will thank you

if you rent property, the property owner has more rights

@jgmac1106 Wait, are you equating protecting the right of privacy of the people who access web sites with enabling Nazis?

And defending the right of trillion-dollar corporations to mine people to their hearts’ delight?

@jgmac1106 @aral The neonazis in your metaphor can lie.

The thing about this is that, on top of making the web worse... it's going to be circumvented.

@aral What if the client just, kinda like, lies? Not to sound like I'm smart but, the stated goal of this sounds impossible with a Turing-complete client. I guess cryptography is involved, but you just never know what's looking over the shoulder of the machine on the far end.
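The lie is trivially cheap today. A minimal sketch (hypothetical request; the URL and header values are made up for the example, and nothing is actually fetched): any HTTP client can declare whatever identity it likes, and the server sees only the declared string.

```python
import urllib.request

# Hypothetical request object: the User-Agent claims Chrome on Windows,
# whatever the real client is. Nothing in HTTP verifies this claim today.
req = urllib.request.Request(
    "https://example.com/",  # placeholder URL, never actually fetched here
    headers={
        "User-Agent": (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/115.0.0.0 Safari/537.36"
        )
    },
)

# The server only ever sees the declared string, not the actual client.
print(req.get_header("User-agent"))
```

WEI's stated goal is to make such claims cryptographically attestable, which is exactly why it needs a trusted party looking over the client's shoulder.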

@aral

Unfortunately, given how the #W3C works, something like this is likely to become a Web standard.

For those unaware: being part of a W3C working group is so expensive (in direct fees, and indirectly in the time you have to dedicate to it) that it is only possible for big companies. That is why the Web keeps being shaped to their interests.

@j @aral this is also why there's a myopia about what those standards should be. The focus is on the tech solution favoured by the organisation that pushes for it.

Which is what @dentangle found when he looked at the working group for multicast in the browser.

@j @aral This matches my experience of interacting with TC39 as well. It's basically just a way for Google to screen for features that are easy to implement in Chrome and then only support the easiest implementations of those.

They then spend a lot of effort making it look like "It's the best for the platform" even when it's massively obvious that it isn't.

@j @aral
I wouldn't bet on it. That might blow up the W3C.

@aral I am not terribly worried about this proposal as currently represented. There are significant challenges in implementation that, at least with the available materials, don’t seem to be under consideration. I’m unaware of any effort successfully using COSE at any sort of scale. Since the web server to attester relationship is many-to-few, there will need to be some public key infrastructure in place. COSE and X.509 aren’t a great fit together, though they could be shoehorned in.

@aral Also, since every new requester to a web server will require attestation, it adds an entire ecosystem of fragility to client server interactions. An outage of an attestation service could render larger portions of the internet unavailable, or at least highly latent with retries to fallback attesters. Fallback mechanisms, being generally untested until there’s a failure, are famous for causing further problems.
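That fragility can be sketched as a toy fallback loop (entirely hypothetical; the WEI explainer defines no concrete attester API, so the function names here are invented): every page load gains a dependency on some attester being reachable.

```python
# Toy model of per-request attestation with fallback (hypothetical API).
def attest(attesters, payload):
    """Return a token from the first reachable attester, in order."""
    last_error = None
    for attester in attesters:
        try:
            return attester(payload)
        except TimeoutError as exc:
            last_error = exc  # outage: fall through to the next attester
    # All attesters down: the page load fails outright.
    raise RuntimeError("all attesters unavailable") from last_error

def broken(_payload):
    raise TimeoutError("attester outage")

def working(payload):
    return {"token": f"signed({payload})"}

# A primary-attester outage adds a retry round trip to every request;
# a total outage makes the site unreachable.
print(attest([broken, working], "client-environment"))
```

The fallback path is exactly the "generally untested until there's a failure" code the post warns about.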

@aral The web server configuration and management component of this is likely to be so burdensome as to completely nuke all hope of adoption.

The cost/benefit of this proposal does not scale.

Source: worked on proposals for internet-scale attestation (in a different space) and considered COSE at the time.

@aral This is sort of the inside out version of Dr. Malcom’s admonition in Jurassic Park: Your developers were so preoccupied with whether they should, they didn’t stop to think whether they could.

@aral Wow! The nerve to try to constrain the web to their liking. It still feels very far-fetched; the details they leave "pending" seem like impossibilities that can't be addressed without revealing the need for a centralised "Google web".

@aral "Users often depend on websites trusting the client environment they run in."

Uhhhh... No, they don't? It has always been a fundamental aspect of web development to never trust client side code... Do they really want to open this can of worms?

@smaisidoro @aral I mean, "users" here most likely means "advertisers" (ie. Google's customers)

@smaisidoro @aral

Indeed. This ends badly. Lazy web devs will use this as a shortcut and "implicitly trust input."

@aral The hell?? The first paragraph of Google’s introduction is already complete bullshit.

> Users often depend on websites trusting the client environment they run in.

No they don’t!! The whole point of authentication and having logic on the server is that you Never Trust The Client! 🤪
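The never-trust-the-client rule in miniature (a hypothetical handler; every field name here is invented for the example): the server re-validates everything itself, so a client's claim about its own integrity adds nothing to security.

```python
# Hypothetical server-side handler. The point: validation happens on the
# server regardless of any "client_attested" flag the client sends along.
def handle_transfer(request: dict) -> dict:
    amount = request.get("amount")
    # Re-check type and range on the server; never trust the client's form.
    if not isinstance(amount, int) or not (0 < amount <= 10_000):
        return {"status": 400, "error": "invalid amount"}
    # Re-check authorisation on the server, too.
    if request.get("user") != request.get("account_owner"):
        return {"status": 403, "error": "not your account"}
    return {"status": 200}

# An "attested" client sending bad data is rejected all the same:
print(handle_transfer(
    {"amount": -5, "user": "a", "account_owner": "a", "client_attested": True}
))
```

If these checks are done properly, attestation is redundant; if they aren't, attestation won't save you from a legitimate client driven by a hostile user.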

@aral@mastodon.ar.al I appreciate that at least the file is named correctly: spec.bs (emphasis mine). 🤮

@aral They have now blocked all comments on this "proposal's" github issues - unless you've been contributing, i.e. are in favor of it.

@katzenberger @aral time to move to a new repo and reference those issues; they should be linked by GitHub automatically. Or via a pull request, or is that blocked as well?

@jnv @katzenberger @aral ah learned something new, good to know these tools exist 👍

@katzenberger

It's as if these people deliberately avoid reality.

Also, cowards.

@aral

@katzenberger @aral then start emailing bewise [at] chromium [.] org ?

@aral My prediction:

- Within 3 days at most, there'll be a statement expressing their surprise at how this proposal was received

- They will regret that they were not able to "express" their "intentions" more "clearly", to avoid "misunderstandings"

- That specific repo will be locked. Comments on that statement won't be permitted.

- Within 3-6 months, this attempt will resurface, probably under a different name, and certainly on a different platform.

I'd love to be proven wrong on this.

@aral
Atm, #Google's #YoavWeiss is keeping the issues & comments locked, while closing multiple issues and typing replies to others. Must be fun to reply while those with different views can't.

@aral As predicted:
github.com/RupertBenWiser/Web-

✔️ Within 3 days at most, there'll be a statement expressing their surprise at how this proposal was received

✔️ They will regret that they were not able to "express" their "intentions" more "clearly", to avoid "misunderstandings"

✔️ That specific repo will be locked. Comments on that statement won't be permitted

ToDo: Within 3-6 months, this attempt will resurface, probably under a different name, and certainly on a different platform

GitHub: "Don't." · Issue #28 · RupertBenWiser/Web-Environment-Integrity, by klrtk

@katzenberger
@aral
Plus: as far as possible, this will get buried. To the extent that they can influence it, it will not show up in search results. The repository or account might change names, get archived or deleted, or be made private.

@katzenberger @aral

Goodness, that "explainer" is one of the murkiest pieces of technical prose I've ever read. To the extent that I understand it (which may be nearly not at all), it appears to be describing something very much like Steam's in-game purchase system. That uses a third-party server (typically provided by the game developer) to host the DLC, with Steam merely validating the transaction. Allows Steam to deny knowledge of the C in the DLC.

Spooky, for sure.

@aral I like the energy but FWIW that's too strong a constraint.

Cloudflare provides an extremely useful service separating likely malicious traffic from likely legit traffic, and this is necessary because some users are hostile. You have to be able to protect your server to provide a service.

... but the amount of data collected should be scoped to what is needed, no more, and servers should always be required to operate in an ecosystem where clients can be assumed to be lying (and still function as a server).

ETA: most importantly, the standard should be that servers must protect themselves from the maximum possible damage a client can do, not that servers can assume a trustworthy client. Because there's no WEI solution that will guard against a legit client being piloted by a bad user to do bad things; if it is possible to break the server with a client, someone's gonna do it.

@aral I have no idea what any of this means - can someone ELI5 please? Is this likely to affect someone using Firefox or TOR Browser?

@SkellySoft yes, because if the website you are connecting to doesn't consider your Firefox or Tor Browser as "approved", you will not be able to access that content. In a case like that, you would instead have to use a browser like Chrome... And potentially on an OS like Windows 11...

@jasonnab That... I mean, I literally can't upgrade to 11, my laptop doesn't meet the specs. Would it be a good idea to get everything off of googles services asap then, if this thing ends up happening?

@SkellySoft just to give a quick reply: in my personal opinion, yes, move off Google as far as is feasible for you within your budget, time, and hardware/software constraints.

In my professional opinion, move off Google if it's not going to impact your livelihood, or access to a lifeline, etc. And I would of course recommend a paid provider such as Protonmail, Tutanota, Runbox.com, Posteo.de, or Mailbox.org

And switch to Firefox now if you've been using Chrome(ium)/related Chromium browsers #wbi

@jasonnab Unfortunately, I have to use free services - so cloud storage and email providers would have to be free ones. I don't have any significant source of regular income. I don't really know if I can "de-google" my phone without causing a major shift in how I use technology, but I can likely alter how I interface with the web through my laptop...

@SkellySoft@mastodon.gamedev.place @aral@mastodon.ar.al When you open a web page, your browser sends some information about itself to the server. This includes which browser you are using (e.g. Firefox) and which operating system you are using (e.g. Windows).
All of this information is volunteered. The reason is that browsers do not always process websites in exactly the same way, so servers adapt to these differences and serve slightly different code to avoid errors and ugliness.
Unfortunately, this information can also be used for "browser fingerprinting". Each individual piece of information is mostly harmless on its own (no harm done letting them know you use Firefox, right?), but the combination can be quite unique (how many people use the same version of Firefox, with the same version of the same operating system as you, in the same time zone as you, using the same language as you, with exactly the same privacy settings as you, ...?).
To prevent this (and also as a development/testing feature), some browsers/plugins offer the option to send specific fake data or randomised data (afaik the Tor Browser does this by default).
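A sketch of why those individually harmless values combine into a near-unique identifier (the header values below are illustrative, not from any real browser session): hash the concatenation, and nearly every user ends up with a distinct digest.

```python
import hashlib

# Illustrative header values, not taken from any real session.
headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0",
    "Accept-Language": "en-GB,en;q=0.5",
    "Accept-Encoding": "gzip, deflate, br",
    "DNT": "1",
}
timezone = "Europe/Istanbul"  # observable via JavaScript, part of the mix too

# Each value alone is common; hashed together they approach a unique ID.
fingerprint = hashlib.sha256(
    "|".join(list(headers.values()) + [timezone]).encode()
).hexdigest()
print(fingerprint)

# Change any single value and the digest changes completely, which is why
# Tor Browser ships uniform, non-identifying values for everyone instead.
```

Validated (attested) headers would make the randomising countermeasure impossible, which is the privacy objection in a nutshell.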

Now Google complains that it can't trust the data it receives and is proposing a method of validating it. This would of course undermine any attempt at privacy on the user's side.
It would also allow Google to make its website plugins work only with "approved" browsers (which would probably be Chrome only). I have seen this done in the past with Gmail, which (at least for a lengthy period of time) blocked my perfectly fine e-mail client from accessing my Gmail account. I had to check a threatening-sounding checkbox ("allow unsafe third-party clients" or something like that) to make my client work. The checkbox would be automatically unchecked after some time "for security reasons".
Something similar is done on Android if you want to install apps that don't come from Google Play.
For me this was, and is, mostly a reason to avoid Google's services. But for many people it creates the impression that Google is the only source of secure software, since everything else constantly triggers security warnings.
So yes, this would likely affect everyone who is interested in privacy, and everyone who is not using Chrome.

@till @aral Jeez, I sideload a lot of .apks that google wouldn't approve. I really hope this doesn't get passed, things are scary enough rn without google getting even worse

@aral@mastodon.ar.al the only real solution is to bombard their issue tracker with nonsense to waste their time.
get your boys goin.

@aral Talk about insensitive: how is a low-end server supposed to handle this kind of bloat? Not to mention the power use involved, and the possible network requirements. Real anti-net activity.

@aral Web servers do have the right to refuse service to users that won't release certain data, but I agree that it should be unacceptable for web servers to seize data not explicitly consented to

@aral We desperately need a successor to the web. Something so simple it cannot be controlled by corporations. Something that can be implemented by a single person in a day so that we will never have single products with double digit market shares.

@aral Perhaps the web should be more like text editors or terminals. Both have many different implementations and nobody codes for one in particular.
A terminal approach might also make web-applications a lot easier. It wouldn't need to be text-only as you could just have a common abstracted GUI toolkit.

@aral I'm old enough to remember websites telling me they were optimised for IE4 and wouldn't work on my old PC with Win 95 and Netscape. I'm going to have the same problem again, aren't I, as I now run FF on desktop Linux and Android.

@aral I think it also misses one important aspect: How does the browser know if the server hasn't been compromised?

That's why I suggest a mandatory API endpoint on web services where the user can send code to the server. The server then has to execute that code with high privileges (kernel, supervisor, or management mode). That way the user can check the server's hardware and software to establish some trust.

This is much better than trusting some random code certificate.

@aral I think they should get themselves some integrity first.