Secure Design: A Draft of 7 Principles

Wrote up my blog post here:

Would love feedback from folks. Please comment below!


It’s great :rocket:

The explanation of the problem is a bit short, perhaps by design? Depending on the audience, it may be helpful to expand on why current practices are insufficient.


Agreed! Love it, and per Brooke’s feedback, giving some examples might be helpful. I read it and take away a certain set of conclusions, but others might interpret it very differently.

One thing that I’d love to see incorporated is that the system shouldn’t blame users for compromising security. A lot of security systems take an adversarial approach to their design (for good reason), and that propagates through to the end-user messaging (for no good reason).

Even things like “Forgot password?” are subtly passive-aggressive, and a lot of people read them as “you’re incompetent.” One of my best collaborators at Condé was a copywriter who really embraced empathetic communication with our users. We changed “Forgot password” to “Need help?” to reflect the fact that (1) no one should be expected to remember their password for bonappetit.com and (2) if they couldn’t sign in, there was often something else going wrong (their paid account wasn’t linked properly, for example).*

I’m not sure how to phrase that, and it’s definitely implicit in several of the principles, but given the broken culture of security that you’re trying to change (where design takes a back seat to the machismo and uncompromising culture of “security experts”), it could be worth calling out explicitly? A way this is done in other fields is with the concept of “harm reduction”: the goal isn’t perfection, because all the social factors make that impossible, but doing better by meeting people where they are.

* (Well, we actually got rid of passwords altogether. The vast majority of sign-ins went through the reset-password flow anyway, so bumping up the cookie duration (a reasonable trade-off given the low-risk threat model: only Condé stood to lose if someone’s account got compromised, not the user) and always doing a magic-link sign-in reduced customer support requests for accounts by something like 60% overnight.)
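
For the curious, here’s a minimal sketch of roughly what that kind of magic-link flow can look like. To be clear, this is my own illustration, not Condé’s actual implementation: `sendEmail`, the in-memory `Map`, the URL, and the TTL values are all hypothetical stand-ins.

```ts
import { randomBytes, createHash } from "node:crypto";

// In-memory token store for illustration only; a real service would persist
// hashed tokens in a database.
const pendingTokens = new Map<string, { email: string; expiresAt: number }>();

const TOKEN_TTL_MS = 15 * 60 * 1000;             // link is valid for 15 minutes
const SESSION_TTL_MS = 90 * 24 * 60 * 60 * 1000; // long-lived cookie ("bumped up" duration)

// Hash tokens at rest so a leaked store can't be replayed.
const hashToken = (t: string) => createHash("sha256").update(t).digest("hex");

// Step 1: user submits their email; mint a single-use token and mail a link.
function requestSignIn(email: string): void {
  const token = randomBytes(32).toString("base64url");
  pendingTokens.set(hashToken(token), { email, expiresAt: Date.now() + TOKEN_TTL_MS });
  // sendEmail is a hypothetical mailer stub, not a real API.
  sendEmail(email, `Sign in: https://example.com/auth?token=${token}`);
}

// Step 2: user clicks the link; exchange the token for a long-lived session.
function completeSignIn(token: string): { sessionId: string; maxAgeMs: number } | null {
  const key = hashToken(token);
  const record = pendingTokens.get(key);
  pendingTokens.delete(key); // single use, valid or not
  if (!record || record.expiresAt < Date.now()) return null; // unknown or expired
  return { sessionId: randomBytes(32).toString("base64url"), maxAgeMs: SESSION_TTL_MS };
}

function sendEmail(to: string, body: string): void {
  console.log(`[mail to ${to}] ${body}`); // placeholder for a real mailer
}
```

The notable bit is that the long-lived session does the heavy lifting; the email link only has to be easy to use, not memorable.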


These points are well taken. I was thinking that, to start, I’m mostly preaching to the converted, and hoping to bring those varied interpretations to bear in the working group.

Finding it hard to single out the ideal one or two examples that land the plea without overly constraining the imagination. So I took the coward’s path of leaving them out altogether :slight_smile:

I think this notion of moving beyond victim blaming when things go wrong is the right initial tactic, though. Gonna meditate on it!

Thanks for the feedback!


@expede @blaine What do you think of these two examples:

Does this claim hold true today? In the worst case, it isn’t even possible for me to make the right decisions with my data (e.g., moving the personal relationships I’ve built on Facebook to another platform). In the best case, it may be possible but is far from obvious to most people how they would do it (e.g., moving my email history from Gmail to a new client).

Those are great; they feel a little security-adjacent and more in the data-ownership world, but they draw out an interesting parallel to the victim-blaming side of things. A lot of systems take a very parochial and patronizing view of security (“don’t worry, if you just work within our system, everything will be secure and you won’t have to think about it”) OR the victim-blaming one (“if you do everything correctly all of the time, you’ll be fine”).

Putting it that way, there’s a triangle over here that says “usable”, “secure”, “user-controlled/open”: pick two. Is it fair to say that with Secure Design, you’re calling bullshit, and putting the onus on technologists to figure out how to have all three?

Correct. And I think that’s the subtle part of the word “secure,” which isn’t necessarily just about “security” as we currently consider it.

Technologists should be working together to ensure that “usable” and “user-controlled/open” are things that better support “secure” outcomes, rather than things conceived as being in opposition to them.

Like, when I tell someone that I, as a person, feel “secure” or “insecure,” it has little to do with whether I’m worried about the lock on the front door of my abode being pickable.

And I think that’s the sense of “secure”-ness we want to be leaning more strongly into, which oftentimes includes but isn’t limited to those hard security concerns.

E.g., if you give me a front-door lock that nobody can break into, yet that I find stressful to operate, then I do not feel secure in my home.


Not sure if my comments will be helpful, but thought I’d post my half-thoughts anyway. I’m looking at this from a design perspective, and also as a user of the internet straddling the line between tech and non-tech.

I must be empowered to make my own decisions with my data

My question is: why? For regular people just using the internet to post about their lunch, buy stuff, and look at cats, why should they care about making decisions about their data? Consider a plain-language definition of why this matters.

Also a definition of data (is it personal banking info, passwords, photos and blog posts, all of the above?).

The user experience of data must place every person, regardless of ability, in a position to be secure with their data.

I’ve been researching DEI principles, so I’m curious about ability here (and these are rhetorical questions to consider, not actual questions): what happens when “ability” requires physical or technical assistance to use the internet? What happens with collaborative or communal data? Are there ways to decolonize data while maintaining security? (Decolonize, i.e.: can we choose what data is collected? Can data be used by the community it serves? Can data be social? Etc.)

I think that’s a great point. The pithy answer to both the “what data?” and “why decisions?” questions you have is: everything on a computer is data, and you should be able to define where it goes and who gets to do what with it.

What might motivate you to spend effort varies with who you are and what the use case is. The key should be: the system must take as much of the tedium out of that as it can, and leave you with appropriate options.

The rhetorical questions you raise are great ones. And this is where I think we need to expand how we interpret both “ability” and “secure” as words.

Specifically on the notion of decolonizing data, an ideal outcome from these Secure Design Principles would be to establish patterns wherein data isn’t colonized to begin with, in a way that both simplifies implementation for product teams and improves the user experience.

And you hit one of the nails square on the head: the locus of agency is more likely to be in the communities I inhabit than with my lonesome self.