You are probably aware that I have been blessed with a grant by the NLnet foundation, from the European Commission's NGI0 programme, to work on OPAQUE and its integration into various tools. If not, now you have an explanation for all these posts lately. ;)
In my previous post I mentioned that OPAQUE would make a lot of sense for other protocols like XMPP, IMAP, POP3, etc. Each of these supports SASL as a generic authentication abstraction, so implementing an OPAQUE mechanism for SASL would immediately allow all of them to benefit from the security guarantees provided by OPAQUE.
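Just to illustrate why this is such an effective lever, here is a toy sketch in Python (emphatically not a real SASL API, just the shape of the abstraction): the application only ferries opaque byte strings between the client and the mechanism until the mechanism reports success, and never looks inside them. XMPP, IMAP, POP3 and friends all drive the same kind of loop, which is why one OPAQUE mechanism would cover them all.

class EchoMechanism:
    # stand-in toy mechanism: one round trip, the client must echo the challenge;
    # a real mechanism (PLAIN, SCRAM, OPAQUE, ...) would slot in here
    def __init__(self):
        self.challenge = b"nonce"
    def server_step(self, c2s):
        # returns (message_for_client, finished, success)
        if c2s is None:
            return self.challenge, False, False
        return b"", True, c2s == self.challenge

def run_sasl(mech, client_step):
    # the only thing the application protocol has to do: shuttle blobs around
    s2c, finished, ok = mech.server_step(None)
    while not finished:
        s2c, finished, ok = mech.server_step(client_step(s2c))
    return ok

print(run_sasl(EchoMechanism(), lambda s2c: s2c))  # True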
What's more, the SL in SASL stands for "security layer", basically an encrypted channel. So with SASL, connections could choose to use an OPAQUE-negotiated key for secure channel construction instead of TLS. This is brilliant: two of OPAQUE's uses (authentication and secure channels) could be deployed in a lot of places with just one mech (and the SL) being implemented.
My last milestone to complete was HTTP server and client support for OPAQUE: apache, nginx, firefox and chrome. In my previous post I also mentioned that HTTP is not quite so cool with OPAQUE. I tried to weasel out of this by negotiating with my NLnet contacts; instead they pointed me at another NLnet project which implements SASL for HTTP authentication (and even specifies an RFC draft), with a module for apache and a java-based and a c++/qt-based variant for firefox.
Implementing a SASL mechanism achieves the widest possible coverage for OPAQUE, and thanks to the ARPA2 project it also provides cheap solutions for half of the HTTP milestone. Only nginx and chrome support for SASL would need to be implemented. So there it was, an OPAQUE SASL mechanism that worked even between firefox and apache2.
The SASL OPAQUE mechanism does not implement any security layer yet though; that is work for the future. It probably means finding or cooking up a libsodium-based protocol to handle a secure channel. I looked at the existing SL protocols supported by SASL and I have to say I was underwhelmed; it felt like the 90s called and wanted their DES back. But then I guess most protocols nowadays just use TLS underneath. Which is fine, but it defeats one benefit of using OPAQUE.
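To make that a bit more concrete, here is a rough sketch of what such a libsodium-based security layer could look like, assuming pysodium's secretstream bindings: derive a channel key from the OPAQUE shared secret and push all application data through crypto_secretstream. The key-derivation label and the framing are my own assumptions for this sketch, not a specified SL.

import hashlib
import pysodium

def channel_key(shared_secret):
    # derive a dedicated channel key from the OPAQUE shared secret
    # (assumption: a keyed BLAKE2b with a fixed label is fine for a sketch)
    return hashlib.blake2b(b"sasl-sl-key", key=shared_secret, digest_size=32).digest()

def sl_send(key, messages):
    # sender side: the header goes over the wire first, then one ciphertext per message
    state, header = pysodium.crypto_secretstream_xchacha20poly1305_init_push(key)
    yield header
    for m in messages:
        yield pysodium.crypto_secretstream_xchacha20poly1305_push(
            state, m, None, pysodium.crypto_secretstream_xchacha20poly1305_TAG_MESSAGE)

def sl_recv(key, header, ciphertexts):
    # receiver side: decrypts and authenticates each frame
    state = pysodium.crypto_secretstream_xchacha20poly1305_init_pull(header, key)
    for c in ciphertexts:
        msg, tag = pysodium.crypto_secretstream_xchacha20poly1305_pull(state, c, None)
        yield msg

Both peers would run this over the key negotiated by a successful OPAQUE run; rekeying, direction separation and message framing are deliberately glossed over here.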
The next step was to also support nginx. Module development for nginx is not very well documented, and I had to do a lot of trial and error to figure out how to attach variables to a request so that they survive an internal redirect. Variables are just what the name suggests: in the nginx config you can use them to add headers or to pass values to fastcgi or uwsgi backends. The problem was how to keep these variable values across an internal redirect, which happens for example when someone requests "http://example.com/": the request first gets authenticated with SASL, and then nginx internally redirects it to "http://example.com/index.html". In the end it all worked out.
Another thing I figured out (and this also applies to the apache module) is that, since SASL is stateful, this whole authentication process only works if all requests are handled by the same worker, the one in which the SASL state is stored. Using shared memory does not seem like a good idea, since the struct that would need to be shared contains a bunch of function pointers. If your webserver is set up with separate forked processes, then these cannot share the SASL state and things will go wrong. It seems that most browsers do use keep-alive sessions, so the TCP connection is reused and the same worker gets all the requests. However, if your setup involves a load-balancer with separate nginx servers as backends, you will have trouble doing SASL auth, unless you terminate the authentication already at the load-balancer or use only stateless SASL mechs, which are much less secure than OPAQUE.
Another issue with OPAQUE authentication in HTTP is that if every request needs to be authenticated, then either the user types their username and password repeatedly (for most average webpages this means dozens of times), or we need to cache the username/password somewhere, which sounds like a horrible opportunity to leak these credentials. It also means that every request for an authenticated resource costs the 2 or 3 HTTP round-trips of a full OPAQUE execution. To solve this there are at least two options: either implement channel binding in HTTP (which seems to exist for Microsoft IIS and is supported by all browsers), or use the shared secret calculated by the OPAQUE execution in an ephemeral HOTP-style protocol:
Authorization: SASL mech="HOTP",c2s="base64(username || hmac(kdf(shared_secret,dst), counter++))"
which would enable 0-round-trip re-authentication. This could be another SASL mechanism, which could also be useful for other multi-round-trip SASL mechanisms. Specification and implementation of this is also considered future work.
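Purely to illustrate the shape of this, here is how a client could compute that c2s value with nothing but the Python standard library. The KDF label, the 8-byte big-endian counter encoding and the exact concatenation are my own assumptions for this sketch; none of this is specified anywhere yet.

import base64, hashlib, hmac, struct

def reauth_token(username, shared_secret, counter, dst=b"http-sasl-reauth"):
    k = hashlib.blake2b(dst, key=shared_secret, digest_size=32).digest()    # kdf(shared_secret, dst)
    mac = hmac.new(k, struct.pack(">Q", counter), hashlib.sha256).digest()  # hmac(..., counter)
    return base64.b64encode(username + mac).decode()

print('Authorization: SASL mech="HOTP",c2s="%s"' % reauth_token(b"alice", b"\x00" * 32, 1))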
So far this is all quite cool: I can now do OPAQUE authentication in a bunch of protocols, even in HTTP with apache, nginx and firefox. But I had one last big puzzle piece missing: chrome and its derivatives. I also started to have this nagging thought that I should support CLI tools like wget and curl as well; they were out of scope for the NGI0 grant, but still, it would be nice.
For the last part, implementing a SASL auth browser extension for chrome, I was quite optimistic^Wfoolish. I thought it would be easy to port the firefox add-on to chrome, since firefox adopted google's web-extension framework to make add-ons easy to port to all browsers. I hoped I would just have to make a few small changes and everything would be dandy. I was so wrong!
Google is pushing for something they call "Manifest V3" (MV3), which restricts the capabilities of web-extensions:
Manifest V3 is part of a shift in the philosophy behind how we approach end-user security and privacy.
So google is changing their approach to end-user privacy? I'm surprised; that would mean a change in their business model.
Most importantly, the blocking webRequest API will be restricted in MV3, for privacy reasons, surely no other reason:
Privacy: This requires excessive access to user data, because extensions need to read each network request made for the user.
Translated: google still gets access to every request made, but their supposed competition of other privacy-invading add-ons (is that even a thing?) will not. It's a very unfortunate side-effect that the add-ons that actually protect users' privacy, and cut into google's profits, also suffer. If this were about protecting users' privacy from misbehaving add-ons, there would be another solution: google already controls who is allowed into their app store, so they could simply remove all the extensions that violate users' privacy, while the extensions that protect it by intercepting all requests could still be supported. This is purely a monopolistic, profit-oriented move, nothing else.
But it goes on:
The blocking version of the webRequest API is restricted to force-installed extensions in Manifest V3.
So there is a loophole: force-installed extensions, what are those?
As enterprises adopt Chrome Browser and Chrome OS, they often require added controls and configurations to meet their productivity and security needs. This can be achieved through the management of Chrome Enterprise policies. Chrome Enterprise policies give IT admins the power to configure Chrome for their organization or business. You can manage browsers on-premise for Windows or Mac/Linux, or manage browsers for all desktop platforms using Chrome Browser Cloud Management.
These policies are strictly intended to be used to configure instances of Google Chrome internal to your organization. Use of these policies outside of your organization (for example, in a publicly distributed program) is considered malware and will likely be labeled as malware by Google and anti-virus vendors. If Chrome detects that devices are configured with enterprise policies, it will show a message informing end users that their device or browser is being managed by the organization.
On Microsoft® Windows® instances, apps and extensions from outside the Chrome Web Store can only be forced installed if the instance is joined to a Microsoft® Active Directory® domain, running on Windows 10 Pro, or enrolled in Chrome Browser Cloud Management.
On macOS instances, apps and extensions from outside the Chrome Web Store can only be force installed if the instance is managed via MDM, or joined to a domain via MCX.
Employees have no privacy: companies want to inspect and control all communications, and for that they still need blocking webrequests, and google is actively supporting this.
Anyway, Google being Google/Evil (or as I've recently taken to calling it, Google plus Evil), it was still an interesting loophole to explore. On linux an enterprise managed policy is under
/etc/{chrome|chromium}/policies/managed/managed_policies.json
and it must contain a stanza for a force-installed add-on and an optional appstore update.xml url. Luckily urls can also have a file: scheme. So this is a valid managed_policies.json:
{
  "ExtensionSettings": {
    "nolflnhkeekhijfpdmhkehplikdkpjmh": {
      "installation_mode": "force_installed",
      "update_url": "file:///tmp/updates.xml"
    }
  }
}
The above example tries to automatically install the add-on with the id "nolflnhkeekhijfpdmhkehplikdkpjmh" by reading updates.xml, which could look like this:
<?xml version='1.0' encoding='UTF-8'?>
<gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
  <app appid='nolflnhkeekhijfpdmhkehplikdkpjmh'>
    <updatecheck codebase='file:///tmp/http-sasl-plugin-chrome.crx' version='0.1' />
  </app>
</gupdate>
Look ma', only file: scheme urls, no webserver needed. This is actually nifty: it allows packaging webextensions in Linux distros so that they auto-install and are able to use the webRequest API that is otherwise blocked in consumer-grade chrome browsers.
Of course this would only benefit Linux users; the cyber-proletariat of Microsoft or Apple users would have to clear more barriers, running their own AD or MDM/MCX, whatever the latter two are. So theoretically this is interesting, but in practice useless. Also, I want to rise with my digital proletariat comrades, not from them.
I also learned from Henri, the developer of the original firefox HTTP SASL extension, that:
The main reason only Firefox is supported is because Firefox is the only browser allowing for the callback function parameter to return a Promise resolving to a BlockingResponse...
Since it seems that getting SASL HTTP auth into chrome is a very expensive project, I had to look for an alternative solution. And I still had this nagging thought that I should also support at least curl and wget. I was talking to my good friend asciimoo, who over the last few years has suggested a couple of times to implement a privacy/security proxy for browsers. Such a proxy would depend neither on the whims of monopolistic krakens nor on their has-been monopoly-buster accomplices waging a war on their users. I like the idea of a privacy/security proxy, and I'm very much looking forward to deploying his proxy whenever he gets around to implementing one. (btw, there are such things already: privoxy and the newer privaxy)
And thus I decided: (deep voice) this is the way. It is quite easy to write simple plugins for mitmproxy. In a few hours I had a working mitmproxy add-on which enables any HTTP client to authenticate using SASL and OPAQUE: not just curl and wget, but also chrome and its derivatives, w3m, and all the other strange web client beasts out there.
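To give an idea of the shape of such an add-on, here is a minimal skeleton; it is not the actual add-on, the SASL/OPAQUE handling is stubbed out (a real implementation would run the OPAQUE exchange, e.g. via libopaque, and prompt for username and password), and the header syntax follows the sketch above rather than the RFC draft.

from mitmproxy import http

def sasl_step(challenge):
    # stub: a real add-on would feed the server's challenge into the SASL
    # OPAQUE mechanism and return the next client message
    return "dummy-client-token"

class HTTPSASL:
    def __init__(self):
        self.pending = {}  # host -> next client message to attach

    def request(self, flow: http.HTTPFlow):
        tok = self.pending.pop(flow.request.host, None)
        if tok:
            # attach the next SASL message when the client retries the request
            flow.request.headers["Authorization"] = 'SASL c2s="%s"' % tok

    def response(self, flow: http.HTTPFlow):
        challenge = flow.response.headers.get("WWW-Authenticate", "")
        if flow.response.status_code == 401 and challenge.startswith("SASL"):
            # remember our answer to the challenge for the retry
            self.pending[flow.request.host] = sasl_step(challenge)

addons = [HTTPSASL()]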
The nice thing about mitmproxy is that it quite elegantly solves the matter of installing a MitM TLS certificate authority cert in your browser, so that HTTPS connections can also be authenticated with SASL. I don't believe this is a big deal, since the proxy has to run on the same host as your browser anyway: when authenticating, it throws up a dialog asking for a user name and password. So the TLS connection is still terminated on the same host, just not in the browser anymore.
Another cool thing about this HTTP SASL OPAQUE authenticating proxy is that it essentially eliminates phishing attacks, as long as the password entry dialog can be clearly distinguished from browser content.
There is one drawback though: when browsers throw up an HTTP authentication window, they tend to switch to the tab which requested it. With a proxy you have no clue which program or tab initiated the connection that wants you to authenticate.
I guess users deploying personal privacy MitM proxies to reclaim their power over blocking javascript and the advertising mafia, installing their own MitM CA cert in chrome along the way, was not what Google anticipated when they decided to lock down the webRequest API and instead provide this declarativeNetRequest travesty, under the sarcastic excuse of changing their attitude towards their users' privacy and security.
Anyway, although there are a few loose ends, like specifying and implementing the security layer and the ephemeral HOTP SASL mechanism, this more or less concludes my work on libopaque and its support for various languages and protocols. There's gonna be some polish added here or there, and I'm gonna track any changes in the RFC draft, but in general libopaque will switch to maintenance mode.
Shouts, greets and kudos to Henri, Rick, asciimoo and dnet for their input for this last milestone.
This project was funded through the NGI0 PET Fund, a fund established by NLnet with financial support from the European Commission's Next Generation Internet programme, under the aegis of DG Communications Networks, Content and Technology under grant agreement No 825310.