Why do we run untrusted code, and how do we stop?

Written by cnorthwood | Published 2018/01/06

This started out as a Twitter thread which has been robustly challenged by my followers, but it probably deserves a more detailed exploration.

In this blog post I outline an approach that adapts code signing for use on the web. It aims to address issues with the current web security model, which relies predominantly on sandboxing. As the web develops into a richer platform, that model struggles to strike a balance between providing the powerful APIs necessary to build rich applications and coping with the potentially hostile environment of the Internet. There are issues with the proposed approach which will require further refinement before a workable solution is identified.

In the earliest days of computing, security was enforced physically. You had to be at an actual machine, and the entirety of it was dedicated to your use. Dedicating a whole machine to one person quickly became wasteful, and multi-user mainframes enforced security by limiting what one program could do to another, and what one user could do to another's data. Moving away from this, the first personal computers were multitasking but single-user, and trust was once again the order of the day, delivering adequate performance by doing away with security checks. Eventually the two worlds merged, but as security has never been perfect and computers are complicated, privilege escalation and other sandbox-escape attacks are an ever-present part of the modern world of computing. As a result, we have developed defence-in-depth approaches to minimise the inevitable effects of security-impacting bugs, and for many environments, bringing trust back in is yet another (imperfect) security layer: when sandbox-breaking bugs do occur, it minimises the surface area open to attackers.

Sandboxing is definitely a good thing, but it can't protect against every issue. Sometimes an app might want to do something that's technically allowed but behaves unethically towards the user who ran it, something sandboxes aren't in a position to judge. Take WannaCry, for example. A program running on your machine is technically allowed to manipulate your files, and although WannaCry relied on a security flaw to install itself, it could alternatively have socially engineered its way in and then run like any other program. If a user is tricked into installing WannaCry, sandboxing does no good. We also need trust.

On desktop machines, trust has been left to the user, often with some sort of anti-virus software looking over their shoulder, hopefully blocking any untrustworthy software that sneaks through. On mobile devices, trust is usually delegated completely to a single install source, and the maintainers of that source are expected to ensure that the applications they make available to you are safe (this is often true of Linux distributions too). Desktop machines are adopting this strategy as well, requiring that any code that runs be signed by a trusted developer, with a developer who breaks that trust having their certificate revoked.

On the web, trust works similarly to those mobile devices. We trust that the domain we're accessing will behave ethically, and some people enforce that with ad-blocking software, which increasingly takes on a mixture of ad-blocking and anti-malware roles, almost becoming a mini anti-virus itself. The key difference is that the web involves trusting a lot more people. When you install an app from the App Store, you're trusting that Apple trusts the people who submit to the App Store (Apple of course verify that as part of their review process), but on the web you have to trust not only the owner of every domain you visit, but also that they have properly vetted every other web service they use to make their website (and if that sub-service is made up of other services, then it's trust all the way down…). That's a much taller order, and when that trust breaks down (which it often does, mostly around ads, behavioural tracking, privacy, and even Bitcoin mining), the user often has no way of knowing, raising a creeping set of ethical questions about the behaviour of web services towards users.

So what if we had some way of signing JavaScript, like we sign native apps? And what if we had some way of trusting some signatures, but not others? A naive approach might be a popup for every signed bit of code that wants to execute on a website.

[Image: A sample popup on bbc.co.uk]
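To make the signing half of this concrete, here is a minimal sketch of detached code signing for a script, written against Node's built-in crypto module. Everything in it is a hypothetical illustration: the throwaway RSA key pair, the inline script bytes, and the base64 signature encoding are choices made so the example is self-contained, not part of any proposed standard. In reality the private key would stay with the developer, and the public key would ship in a certificate chained to some trusted issuer.

```typescript
// Hypothetical sketch: how a detached signature over a script's exact
// bytes could be produced (developer side) and checked (browser side).
import { createSign, createVerify, generateKeyPairSync } from "crypto";

// Throwaway key pair so the sketch runs standalone.
const { privateKey, publicKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048,
});

// The exact bytes of the JavaScript that will be served.
const script = Buffer.from('document.title = "signed";');

// Developer side: sign the script bytes with the private key.
const signature = createSign("sha256")
  .update(script)
  .sign(privateKey, "base64");

// Browser side (hypothetically): verify before granting trusted APIs.
const valid = createVerify("sha256")
  .update(script)
  .verify(publicKey, signature, "base64");

console.log(valid ? "signature valid: eligible for trusted APIs"
                  : "signature invalid: run with restricted APIs only");
```

The useful property here is tamper-evidence: if a CDN, a proxy, or an injected ad changes so much as one byte of the script, verification fails, and that failure is something a browser could surface in a popup like the one above.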

But expecting all JavaScript to be signed is going to be a non-starter: why would anyone ever let an ad run, and having to click 'OK' to make simple 'DHTML'-esque effects work would completely break the user experience of the web. Unsigned code could therefore get access to a limited set of JavaScript functionality (perhaps just DOM access, with no access to any APIs that expose sensor data or cause network activity). Perhaps, by default, code signed by the same certificate as the domain you've accessed is allowed, but any third-party code has to be authorised; a sketch of that decision logic follows. Of course, third-party code could be bundled into a single package, or signed by the originating domain, bypassing the control a user has over accepting it and also making it harder for ad blockers to work, but ideally developers would give third-party code more scrutiny before agreeing to sign it with their own signature.
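Written down, the per-script decision the browser would make is short, even if implementing it is not. The tier names, the ScriptInfo shape, and the rules below are all hypothetical, a sketch of the proposal's logic rather than anything any browser implements.

```typescript
// Hypothetical privilege tiers for scripts under this proposal.
type Tier = "full" | "dom-only";

interface ScriptInfo {
  signedBy?: string;      // certificate identity, if the script is signed
  pageOrigin: string;     // the domain the user actually navigated to
  userApproved: string[]; // third-party signers the user has accepted
}

function privilegeTier(s: ScriptInfo): Tier {
  // Unsigned code: DOM access only, no sensors or network APIs.
  if (!s.signedBy) return "dom-only";
  // First-party code, signed by the domain's own certificate: allowed.
  if (s.signedBy === s.pageOrigin) return "full";
  // Third-party code: full access only with explicit user approval.
  return s.userApproved.includes(s.signedBy) ? "full" : "dom-only";
}

// An ad script signed by a third party the user has not approved
// would run in the restricted tier:
console.log(privilegeTier({
  signedBy: "ads.example.com",
  pageOrigin: "bbc.co.uk",
  userApproved: [],
})); // -> "dom-only"
```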

What are the problems?

  • Does it break the open web? Someone has to issue the certificates so that you can trust the identity a signature claims (and we've seen issues with EV certificates before).
  • Is it going to be massively disruptive to implement? Yes, there'll have to be significant engineering work to make this a reality.
  • Don’t users just click yes to everything, so they’ll let all the bad code run? Users have been known to do that with certificate warnings, yes. There are some serious UX challenges to address to make this idea workable.
  • Is this idea dead before it gets off the ground? I don’t know. I think we can iterate round the problems, explore the problem space better and build a better web. But this idea definitely needs improving.

It’s non-trivial to make this work, but if it does work, it helps us build a better web. I fully believe in the power of the web as a platform. Please constructively criticise this idea to see if we can make it workable.

