Getting over fear of complexity

February 26, 2012

A couple of weeks back I posted about how we might design for systems whose inner workings we can't understand.

I ended up chatting about this with a neighbour over fondue at LIFT last week, and in the course of the conversation it occurred to me that this isn't just a design problem; it's also a societal one.

The analogy I came up with (which my neighbour felt was a poor one, it has to be said) was the car: if you'd shown a car to someone in medieval times, they might have found the absence of a horse pulling it to be deeply disturbing. In much the same way, we expect to be able to understand what happens inside the software which we spend ever-larger chunks of our lives conversing with.

So if we fear technologies we don't or can't understand, we need to find ways to move beyond this fear.

If you're interested in this, Ben Bashford did a nice talk which is worth checking out.

Tight loops and feedback

February 19, 2012

One of the interesting, and frustrating, things about the course I'm doing in Adaptive Systems is its generality: being quite cybernetic, it can be applied or observed in many contexts (biology, robotics, environmental, organisational), and it's down to us to develop our own idea of what it means.

What this means is that I've been reading a few papers on robotics, and in particular I've developed a taste for the work of Rodney Brooks. At a time when the GOFAI (Good Old-Fashioned AI) crowd were obsessed with symbolic approaches to AI, he was rolling up his sleeves and encouraging his group at MIT to build stuff in the real world.

Battling Reality is one of my favourite papers so far, for its frankness ("Many of the preconceived notions entertained before we started building our robots turned out to be misguided"), and its clear message that the act of construction was in itself educational. In particular,

"Unless you design, build, experiment and test in the real world in a tight loop, you can spend a lot of time on the wrong problems"

and

"Understanding the environment and truly discovering the constraints on cognition are more readily done by building one robot than by thinking grand thoughts for a long time."

…which set a few agile/lean bells ringing for me (including the one marked "confirmation bias", of course).

There's also an air of Warwick-style showmanship to someone who authors a paper titled Fast, cheap and out of control: a robot invasion of the solar system…

Making Sense of Sensors, again

February 13, 2012

If you happen to be in Worthing on February 28th, I'll be reprising my Making Sense of Sensors talk from Future of Mobile 2011; possibly with some updates courtesy of the Master's and some reading I've been doing around the topic since. More details here.

Designing for complex systems, with the Pope

February 05, 2012

This comment from Amit Singhal of Google, tucked into an interview on SearchEngineLand, jumped out at me:

"But out of the gate, whereas we had limited users to train this system with, I’m actually very happy with the outcome of the personal results."

For me, it was a gentle reminder that the search algorithms most of us use unwittingly every day aren't predictable or understandable. As such it reminded me of this recent piece by Don Norman:

"It is no longer easy or even possible to understand why a machine has taken the action it did: not even the designer of the machine may know, because not only are the algorithms complex and difficult to understand in the realities of a dynamic, ever-changing real environment, but the learning algorithms may have adjusted weights and rules in ways not easy to decipher. If the designers cannot always predict or understand the behavior, what chance does the ordinary person have?"

Much of the effort put into designing user interfaces for software emphasises the consideration of, and deliberate design for, quite specific experiences. In some cases (Nudge and Persuasion Design spring to mind) we're trying to steer an audience in specific directions.

What techniques do we need when the system our audience interacts with is too complex to predict? Clear seams between the human-designed and machine-provided aspects of an experience, like those many call for between the real world and ubicomp? Total front-end simplicity à la Google and Siri seems obvious and ideal, but implies that as behind-the-scenes computing gets more complex, the front-of-house technology will need to keep pace. Right now, it's all proprietary…

(I'm also tickled by the idea that religious dignitaries - who've been designing and nudging around the unknowable for millennia - might be roped in to consult on all this)

Network operators and identity

January 27, 2012

We had a big hoo-hah this week over O2 mis-sharing customer phone numbers. They've been sticking them in the HTTP headers for trusted partners for years (a few services FP built used them), but it looks like someone misconfigured a proxy and they leaked out on the wider web. They've been found, had a public slapping, and apologised.
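The mechanism itself is mundane: the operator's gateway injects the subscriber's number into a request header, and the partner's server reads it back out. Here's a minimal sketch of the partner's side, assuming the header name widely reported during the leak (x-up-calling-line-id) and a bare WSGI environ; both details are my assumptions, not anything from O2's documentation:

```python
# Sketch of how a trusted partner might have read the subscriber's
# number. In a WSGI app, request headers land in the environ dict
# with an HTTP_ prefix.

MSISDN_HEADER = "HTTP_X_UP_CALLING_LINE_ID"  # i.e. X-Up-Calling-Line-ID

def identify_subscriber(environ):
    """Return the caller's phone number if the operator injected it."""
    msisdn = environ.get(MSISDN_HEADER)
    # A correctly configured proxy adds this header only on requests
    # to trusted partners; the leak happened when it was added to
    # requests bound for every site on the web.
    return msisdn.strip() if msisdn else None
```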

It's a shame, really, because identity is probably one of the last places where operators could really do something useful. They've long prided themselves on their ownership of relationships with their customers, and part of that relationship is their knowing who you are (more for monthly subscribers than PAYG, but still). I'm a bit puzzled as to why they haven't done more with this: one problem the web has is a complete lack of any innate sense of identity, which is why we all have to remember lots of passwords, use software to manage a different password for each site, or reuse one password everywhere - and all of these options are painful.

(Aside: I can imagine passwords being one of those things that we have to explain to our incredulous grandchildren as an artefact of a Less Civilised Time)

I get that for many people and many situations, this anonymity is a feature not a bug, but I don't see why anonymity and convenience have to be mutually exclusive. Operators, of course, know who you are: it's not called a Subscriber Identity Module for nothing. And, just as they missed the boat with location services 5-7 years ago (by gathering useful location data and either refusing to release it, or trying to charge £0.10 per location lookup, ruling out some classes of application completely and making most of the others commercially unviable), they're probably doing, or have done, the same with identity.

Imagine if, when you bought your Orange phone, you could opt in to a service which identified you to web sites (Facebook, eBay, Google, Hotmail) automatically. Perhaps it could do this by presenting them with a unique token, a bit like a cookie, which they could use to get your personal details from your operator (with your permission, of course). It'd be great for them (easier sign-ups and logins mean more customers and more use), great for the end user (no passwords, hooray) and a decent proposition for the operator ("never forget a password with Orange"). If you're worried about security - well, you can already lock your phone, and you can control physical access to it as well as you can your wallet.

This needn't involve sharing your mobile number - the unique token could be a one-way hash of the number, or similar: something guaranteed to identify you and only you, but of no value to spammers if they catch sight of it. As a customer you could control which web sites could use it, and which couldn't. Parental controls could be used to restrict logins to specific web sites from the phones of children. It feels like this ought to be useful.
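To make that concrete, here's a minimal sketch of what such a token could look like, assuming the operator uses a keyed hash (HMAC) rather than a bare hash, so nobody without the operator's key can test guesses against it. Everything here - key, function, the per-site mixing - is hypothetical, not a description of any real operator service:

```python
import hashlib
import hmac

# Hypothetical: a keyed one-way hash turns a phone number into an
# opaque token. Only the operator, holding the key, can map a token
# back to a subscriber.
OPERATOR_KEY = b"known-only-to-the-operator"

def identity_token(msisdn: str, site: str) -> str:
    """Derive a stable, per-site token identifying msisdn to one site.

    Mixing the site name in means Facebook and eBay see different
    tokens for the same person, so they can't pool their logs to
    follow users across services - and a token is worthless to a
    spammer, since it can't be reversed into a phone number.
    """
    message = f"{site}:{msisdn}".encode()
    return hmac.new(OPERATOR_KEY, message, hashlib.sha256).hexdigest()

# identity_token("+447700900123", "facebook.com")
# -> a 64-character hex string, stable for that user+site pair
```

Deriving a different token per site would also make the controls above workable: the operator could simply refuse to resolve tokens for sites you'd switched off.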

There are privacy issues, true, but if you're using a mobile then you're already trusting an operator with your calling circle, communications, logs of text messages, web pages accessed… a whole pile of very private stuff. Is offering management of your identity on top of all this really a step too far?