Extended Selves
October 13, 2011
To London, and a lecture at the LSE by Katalin Farkas on Extended Selves.
Katalin is a philosophy professor, and presented an interesting theory: the mind is composed of two sets of features, the stream of consciousness (everyday meanderings that bubble up and away) and what she calls "standing features": the long-held beliefs, desires, etc., which define us. When we're asleep, the former shuts off, the latter persists: we are still ourselves when unconscious.
So her thesis (the Extended Mind, or EM thesis) is that we are a set of beliefs or dispositions, and where these dispositions are stored makes no difference to whether they are beliefs: consider a man with poor memory who documents his beliefs in a notebook and retrieves them from there when he needs them. This is controversial because the notebook is outside his brain ("spatial extension", as Katalin referred to it), but we accept prosthetic limbs, so this shouldn't be a problem. And if the notebook were inside the cranium, would that make a difference?
How far can this go? Consider a student who passes her exams with the aid of a 24/7 consultant: she understands what she's writing but didn't originate it. The consultant is effectively an external prop. This might be morally problematic today, but many of us outsource some of our beliefs in specialist areas to specialists: you could consider accountants to be a repository of outsourced beliefs.
If the EM thesis is correct, the value of certain types of expertise may change over time. We see this happen today - like it or not! - with spelling, or driving directions. What it takes to be an expert changes, with the addition of expert devices. Katalin spoke about "diminished selves", which troubled me slightly: I couldn't help wondering if she would have considered our species to have diminished when we invented language, or the written word, and could start transmitting, storing and outsourcing our knowledge.
And I also observed that the problem many people have with digital prostheses might relate to lack of control over them: they require electricity or connectivity to function; they don't self-maintain as the human body heals, and the data flowing through them might be subject to interception or copying.
Interesting stuff, and the evening ended with a quite lively discussion between the audience and Professor Farkas...
Vexed acquires Future Platforms
October 10, 2011
Big news from me: Future Platforms has been acquired by Vexed Digital.
Vexed is the agency that my old boss, Richard Davies, set up a few years after leaving Good Technology, and the core team there is the old team from GT. My first foray into mobile in 1999 was when I set up GT Unwired, a division of GT which Just Did Mobile, so this feels a little bit like a return home for FP.
Vexed are talented and experienced folks with some great customers and stronger management skills than I've been able to bring to FP. They also know us well - Richard has sat on our advisory board and provided a shoulder to cry on, on occasion. I'm looking forward to seeing what we can do together. This is *definitely* going to be the year that mobile takes off ;)
We're going to start by harmonising where we can (sharing tools and processes, mainly), and are already collaborating on some pleasantly large projects, but FP will remain a separate entity and brand - based in Brighton and employing everyone who works there today. We'll also have facilities in London available to us, which might please those of our customers who have trouble enjoying the Brighton/Victoria line.
As for me: I'm going to carry on working for the business part time, and am filling otherwise empty hours with a return to academia: last week I started on the Advanced Computer Science MSc which Sussex University have begun offering this year.
The ride's not over yet, but it feels remiss of me not to thank everyone who's worked hard to get FP to this point: that'll be everyone who's worked for us, everyone who's still there, and our advisors over the years. In particular I should call out my original co-conspirator Mr Gooby (who shares the blame for FP) and Mr Falletti, who's kept me upright and out of trouble for the last 6 years.
app.ft.com, and the cost of cross-platform web apps
October 05, 2011
One of the most interesting talks at OverTheAir was, for me, hearing Andrew Betts of Assanka talk about the work he and his company had done on the iPad web app for the Financial Times, app.ft.com.
It's an interesting piece of work, one of the most accomplished tablet web apps I've yet seen, and it has received much attention from the publishing and mobile industries alike. I've frequently heard it contrasted favourably with the approach of delivering native apps across different platforms, and held up as the shape of things to come. Andrew quickly dispelled any notion I had that this had been a straightforward effort by going into detail on some of the approaches his team had taken to launch the product:
- Balancing of content across the horizontal width of the screen;
- Keeping podcasts running by having HTML5 audio in an untouched area of the DOM, so they'd not be disturbed by page transitions (see the sketch after this list);
- Categorisation of devices by screen width;
- Implementing the left-to-right carousel with 3 divs, and the detail of getting flinging effects "just so";
- Problems with atomic updates to app-cache (sometimes ameliorated by giving their manifest file an obscure name);
- Using base64 encoded image data to avoid operator transcoding;
- How to generate and transfer analytics data with an offline product;
- Difficulties handling advertising in an offline product;
- Problems authenticating with external OAuth services like Twitter or Facebook, when your entire app is a single page;
- Horrendous issues affecting 8-9% of iPhone users, who need to reboot their phones periodically to use the product (I picked this up from a colleague of Andrew's in a chat before the talk);
- Lack of hardware-accelerated CSS causing performance problems when trying to implement pan transitions on Android;
- etc.
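To make that "untouched audio" item a little more concrete, here's a minimal sketch of the idea as I understand it - the element ids and structure are my own invention, not Assanka's code: the audio element lives outside the container whose contents are swapped on page transitions, so playback carries on undisturbed.

```typescript
// Minimal sketch of keeping HTML5 audio alive across page transitions:
// the <audio> element lives in a host element that is never re-rendered,
// while "pages" are swapped inside a separate container. Ids are illustrative.

const audioHost = document.getElementById("persistent-audio")!; // never touched
const pageHost = document.getElementById("page-content")!;      // swapped per page

function playPodcast(src: string): void {
  let player = audioHost.querySelector<HTMLAudioElement>("audio");
  if (!player) {
    player = document.createElement("audio");
    audioHost.appendChild(player);
  }
  player.src = src;
  void player.play(); // keeps playing through subsequent navigateTo() calls
}

function navigateTo(html: string): void {
  // Only the page container is replaced; audioHost and its <audio> survive.
  pageHost.innerHTML = html;
}
```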
All of this really drove home the amount of work which had gone into the product: app.ft.com took a full-time team of 3 developers at Assanka 8 months to launch on iPad, and that team a further 4 months to bug-fix the iPad version and ready it for distribution to Android tablets. (It's not available for Android tablets just yet, but that's apparently due to a customer-service issue: the product is there.)
Seeing the quality of the product, I've no doubt that the team at Assanka know their stuff; and given the amazing numbers the FT report for paying digital subscriptions, and their typical subscription pricing, I'm sure the economics of this have worked well for them.
But the idea that it takes 24 developer-months (three developers for eight months) to deliver a newspaper product to a single platform - the iPad - using web technologies is, to me, an indication of the immaturity of these technologies for delivering good mobile products. By comparison, at FP we spent 20 man-months launching the Glastonbury app across 3 completely different platforms and many screen sizes; but those 20 months included two testers and two designers working at various stages of the project, and development-wise I'd put our effort at 13 man-months in total. Not that the two apps are equivalent, but I want to provide some sort of comparative figure for native development. I'd be very interested if anyone can dig out how long the FT native iPad app took…
And the fact that it's taken Assanka a further 12 man-months (the same team for four more months) to get this existing product running to a good standard on Android is, to me, an indication of the real-world difficulties of delivering cross-platform app-like experiences using web technologies.
It strikes me that there's an unhelpful confusion with all this web/native argument: the fact that it's easy to write web pages doesn't mean that it's easy to produce a good mobile app using HTML. And the fact that web browsers render consistently doesn't mean that the web can meet our cross-platform needs today.
Adam Cohen-Rose took some more notes on the talk, here.
(At some point in the coming months I promise to stop writing about web/native, but it keeps coming up in so many contexts that I still think there's value in posting new insights.)
Making Sense of Sensors, Future of Mobile 2011
October 03, 2011
I've been getting quietly interested in the sensors embedded in mobiles over the last few years, and Carsonified kindly gave me a chance to think out loud about it, in a talk at Future of Mobile a couple of weeks ago.
At dConstruct I noted that Bryan and Steph produce slides for their talks which work well when read (as opposed to presented), and I wanted to try and do the same; hopefully there's more of a narrative in my deck than in the past. And it goes something like this...
Our mental models of ourselves are brains-driving-bodies, and the way we've structured our mind-bicycles is a bit like this too: emphasis on the "brain" (processor and memory). But this is quite limiting, and as devices multiply in number and miniaturise, it's an increasingly unhelpful analogy: a modern mobile is physically and economically more sensor than CPU. And perhaps we're due for a shift in thinking along the lines of geo-to-heliocentrism, or gene selection theory, realising that the most important bit of personal computing isn't, as we've long thought, the brain-like processor in the middle, but rather the flailing little sensor-tentacles at the edges.
There's no shortage of sensors: when I ran a test app on my Samsung Galaxy S2, I was surprised to find not just an accelerometer, a magnetometer and an orientation sensor, but also light, proximity, gyroscope, gravity, linear acceleration and rotation vector sensors. There's also, obviously, the microphone, touch screen, physical keys and GPS, plus all the radio kit (which can measure signal strengths) - and cameras, of course. And in combination with internet access, there are second-order uses for some of these: a wi-fi SSID can be resolved to a physical location, say.
Maybe we need these extra senses in our devices. The bandwidth between finger and screen is starting to become a limiting factor in the communication between man and device - so to communicate more expressively, we need to look beyond poking a touch-screen. Voice is one route a few people (Google thus far, Apple probably real soon) are exploring - and voice recognition nowadays is tackled principally with vast datasets. Look around the academic literature and you'll find many MSc projects which hope to derive context from accelerometer measurements; many of them work reasonably well in the lab and fail in the real world, which leads me to wonder if a similar statistical, data-driven approach could usefully be taken here too.
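As a toy illustration (entirely my own, not taken from any of those projects) of what "deriving context from accelerometer measurements" often looks like at its simplest: take a window of readings, compute the variance of the acceleration magnitude, and threshold it to guess whether the phone's owner is still or moving. Hand-tuned thresholds like this are exactly the sort of thing that holds up in the lab and falls over in the messy real world.

```typescript
// Toy "context from accelerometer" classifier: label a window of readings as
// still or moving by thresholding the variance of the acceleration magnitude.
// The threshold is a made-up number, for illustration only.

type Reading = { x: number; y: number; z: number };

function classifyWindow(readings: Reading[], threshold = 0.5): "still" | "moving" {
  const magnitudes = readings.map(r => Math.sqrt(r.x * r.x + r.y * r.y + r.z * r.z));
  const mean = magnitudes.reduce((sum, m) => sum + m, 0) / magnitudes.length;
  const variance =
    magnitudes.reduce((sum, m) => sum + (m - mean) ** 2, 0) / magnitudes.length;
  return variance > threshold ? "moving" : "still";
}
```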
But today, the operating system tends to use sensors really subtly, and in ways that seem a bit magical the first time you see them. I remember vividly turning my first iPhone around and around, watching the display rotate landscape-to-portrait and back again. Apps don't tend to be quite so magical; the original Google voice search was the best example of this kind of magic in a third-party app that I could think of: hold the phone up to your ear, and it used the accelerometer and proximity sensors to know it was there and prompt you to say what you were looking for - beautiful.
Why aren't apps as magical as the operating system use of sensors? In the case of iPhone, a lot of the stuff you might want to use is tucked away in private APIs. There was a bit of a furore about that Google search app at the time, and the feature has since been withdrawn.
(In fact the different ways in which mobile platforms expose sensors are, Conway-like, a reflection of the organisations behind those platforms. iOS is a carefully curated Disneyland, beautifully put together but with no stepping outside the park gates. Android offers studly raw access to anything you like. And the web is still under discussion, with a Generic Sensor API listed as "exploratory work" - so veer off-piste to PhoneGap if you want to get much done in the meantime.)
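For a sense of how thin the web's access is in the meantime: where a browser supports the standard devicemotion event, a plain web page can at least read raw accelerometer values. A minimal sketch (the logging is purely illustrative):

```typescript
// Minimal sketch of the raw motion data a browser hands a web page via the
// devicemotion event (support varies by browser and device).

window.addEventListener("devicemotion", (event) => {
  const accel = event.accelerationIncludingGravity;
  if (accel) {
    console.log(`x=${accel.x}, y=${accel.y}, z=${accel.z} (m/s^2)`);
  }
});
```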
I dug around for a few examples of interesting stuff in this area: GymFu, Sonar Ruler, NoiseTube and Hills Are Evil; and I spoke to a few of the folks behind these projects. One message which came through loud and clear is that the processing required for all their applications could be done on-device. This surprised me; I'd expected some of this analysis to be done on a server somewhere.
Issues of real-world messiness also came up a few times: unlike a lab, the world is full of noise, and in a lab setting you would never pack your sensors so tightly together inside a single casing. GymFu found, for instance, that the case design of the second-generation iPod touch fed vibrations from the speaker into the accelerometer. And NoiseTube noticed that smartphone audio recording is optimised for speech, which makes it less suitable for general-purpose noise monitoring.
Components vary, too, as you can see in this video comparing the proximity sensor of the iPhone 3G and 4, which Apple had to apologise for. With operating systems designed to sit atop varying hardware from many manufacturers (i.e. Android), we can reasonably expect variance in hardware sensors to be much worse. Again, NoiseTube found that the HTC Desire HD was unsuitable for their app because it cut off sound at around 78dB - entirely reasonable for a device designed to transmit speech, not so good for their purposes.
And don't forget battery life: you can't escape this constraint in mobile, and most sensor-based applications will be storing, analysing or transmitting data after gathering it - all of which takes power.
I closed on a slight tangent, talking about a few places we can look for inspiration. I want to talk about these separately some time, but briefly they were:
- Philip Pullman's His Dark Materials trilogy, and daemons in particular. If these little animalistic manifestations of souls, simultaneously representing their owners whilst acting independently on their behalf, aren't a good analogy for a mobile, I don't know what is. And who hasn't experienced intercision when leaving their phone at home, eh…? When I chatted to Mark Curtis about this a while back, he also reckoned that the Subtle Knife itself ("a tool that can create windows between worlds") is another analogy for mobile lurking in the trilogy;
- The work of artists who give us new ways to play with existing senses - and I'm thinking particularly of Animal Superpowers by Kenichi Okada and Chris Woebken, which I saw demonstrated a few years back and has stuck with me ever since;
- Artists who open up new senses. Timo Arnall and BERG. Light paintings. Nuff said.
It's not all happy-clappy-future-shiny of course. I worry about who stores or owns rights to my sensor data and what future analysis might show up. When we have telehaematologists diagnosing blood diseases from camera phone pictures, what will be done with the data gathered today? Most current projects like NoiseTube sidestep the issue by being voluntary, but I can imagine incredibly convenient services which would rely on its being gathered constantly.
So in summary: the mental models we have for computers don’t fit the devices we have today, which can reach much further out into the real world and do stuff - whether it be useful or frivolous. We need to think about our devices differently to really get all the possible applications, but a few people are starting to do this. Different platforms let you do this in different ways, and standardisation is rare - either in software or hardware. And there’s a pile of interesting practical and ethical problems just around the corner, waiting for us.
I need to thank many people who helped me with this presentation: in particular Trevor May, Dan Williams, Timo Arnall, Jof Arnold, Ellie D’Hondt, Usman Haque, Gabor Paller, Sterling Udell, Martyn Davies, Daniele Pietrobelli, Andy Piper and Jakub Czaplicki.
Open sourcing Kirin
September 30, 2011
I've written a few times about the general dissatisfaction I (and the team at FP) have been feeling over HTML5 as a route for delivering great mobile apps across platforms.
Back in June, I did a short talk at Mobile 2.0 in Barcelona where I presented an approach we've evolved, after beating our heads against the wall with a few JavaScript toolkits. We found that if you're trying to do something that feels like a native app, HTML5 doesn't cut it; and we think that end-users appreciate the quality of interface that native apps deliver. We're not the only ones.
Our approach is a bit different: we take advantage of the fact that the web is a standard part of any smartphone OS, and we use the bit that works most consistently across all platforms: that is, the JavaScript engine. But instead of trying to build a fast, responsive user interface on top of a stack of browser, JavaScript, and JavaScript library, we implement the UI in native code and bridge out to JavaScript.
Back-end in JavaScript: front-end in native. We think this is the best of both worlds: code-sharing of logic across platforms whilst retaining all the bells and whistles. We've called the product which enables this Kirin.
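To make the split a bit more concrete, here's a purely illustrative sketch of the pattern - my own invention, not Kirin's actual API: shared application logic lives in JavaScript (TypeScript here), and everything the user sees goes through a screen interface that each platform implements with its own native widgets.

```typescript
// Illustrative only - not Kirin's real API. Shared logic runs in the JS engine;
// the native layer on each platform registers a screen implementation and
// renders with native widgets. fetch() stands in for whatever networking the
// host environment actually exposes.

interface HeadlineScreen {
  showHeadlines(titles: string[]): void; // rendered with native list views
  showError(message: string): void;
}

let currentScreen: HeadlineScreen | undefined;

// Called by the native side (over the bridge) once its UI is ready.
export function registerScreen(screen: HeadlineScreen): void {
  currentScreen = screen;
}

// Shared business logic: written once, identical on every platform.
export async function loadHeadlines(feedUrl: string): Promise<void> {
  try {
    const response = await fetch(feedUrl);
    const items: { title: string }[] = await response.json();
    currentScreen?.showHeadlines(items.map(item => item.title));
  } catch {
    currentScreen?.showError("Could not load headlines");
  }
}
```

The appeal of this shape is that loadHeadlines is written once, while each platform's HeadlineScreen implementation can be as native as it likes.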
Kirin isn't theory: after prototyping internally, we used it for the (as of last night) award-winning Glastonbury festival app (in the Android and Qt versions) and have established that it works on iOS too.
Now, we're a software services company; we aren't set up to sell and market a product, but we think Kirin might be useful for other people. So we've decided to open source it; and where better to do that than at Over The Air, just after a talk from James Hugman (who architected Kirin and drove it internally).
You can find Kirin on GitHub here. Have a play, see what you think, and let us know how you get on.