John Strand on the iPhone

December 15, 2008 | Comments

Strand Consulting has a report on the iPhone: "The fact that the iPhone is currently receiving so much attention from the press is probably due to an uncritical press that have allowed themselves to be seduced by Apple's unique PR machine - and that have not analysed and examined the underlying business models and the financial success of the iPhone from an operator’s point of view."

I last saw John Strand talk a few years back at World Telemedia, and first came across him when he gave European operators launching i-mode a right ticking off several years before that. History proved him right on European i-mode, but I'm not convinced that everything he said last time about niche MVNOs has come to pass here in the UK (though I suspect it has, *somewhere*). It's nonetheless interesting to hear some well-informed views on the iPhone from someone well grounded in telco-land.

Tasty bits

December 14, 2008 | Comments

An interesting presentation on agile design; a rerun of a good talk on Design Studios in the agile process; and an article about getting real with agile and design.

One of the things I was quite surprised about at XP Day was how little talk I found around integrating design and development. I suspected at first that this might be because (a) there are no hard-and-fast rules or (b) everyone was sick of talking about it and going round in circles, but Joh corrected me. I should've proposed an open space about it, but a combination of being fascinated by everything else there and feeling a little ill on the morning of day two put a stop to that - ah well.

I'm doing a talk at UX Matters in January, and I think I might try and draw together a few lessons we've learned over the last few years at FP. In the interests of massaging these into a coherent form, a few thoughts:

  • I don't think that designers and developers are as far apart in aspirations as they're sometimes presented. I don't see a love of documentation, or producing documentation, from either side. Good people from both disciplines relish communication, create models and prototypes, and accept change (often managed through iteration).
  • My own output gets better when I work collaboratively and with folks from different disciplines (usually pigeonholed as design, development and business). I don't believe I'm atypical here.
  • The terms "design" and "development" are each placeholders for a set of activities, some of which are more easily estimated and managed than others.
  • Design, development and the business are heavily intertwingled: decisions made in one area frequently impact on the others. Reducing the cultural or geographic distance between them speeds decision-making.

Oh, and if you get a chance, avail yourself of a copy of HCI Remixed - it's a series of essays from top-notch HCI types (Bill Buxton, Scott Jenson and Terry Winograd all stood out for me) on works that influenced them. Very dip-innable, with a few gems in there.

Mobile and green

December 14, 2008 | Comments

Completely unformed and probably unoriginal thought: is there something innately more environmentally friendly about a medium which is forced to deal with the limited power provided by batteries, and is therefore efficient by design; and which is deliberately minimal in its use of bandwidth and networked resources?

Agile and Corporate Strategy

December 13, 2008 | Comments

XP Day 2: David Stoughton

Is uncertainty really unmanageable?

All our plans are based on assumptions. Market changes break assumptions.

Strategic planning tries to plan for the long term. Someone does some inspection to model possible futures and construct scenarios. It's the assumptions that are critical, but they get forgotten as the organisation travels along the lines set out by the main scenario.

Sensitivity analysis is another method: create a model, change key variables, and work out which variable affects outcomes the most. It's a good means of discarding some scenarios, but in general only looks at local events, rather than causes: "this set of customers stops buying", not why they've stopped.
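
To make that concrete, here's a one-at-a-time sensitivity sweep; none of this is from the talk, and the model, variable names and 10% nudge are invented purely for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.ToDoubleFunction;

// One-at-a-time sensitivity sweep: nudge each variable by a fixed percentage
// and see how far the modelled outcome moves. Purely illustrative.
public class SensitivitySweep {

    public static Map<String, Double> analyse(
            ToDoubleFunction<Map<String, Double>> model,  // the business model as a function of its variables
            Map<String, Double> baseline,                 // assumed baseline values
            double nudge) {                               // e.g. 0.10 for a 10% change

        double baselineOutcome = model.applyAsDouble(baseline);
        Map<String, Double> impact = new HashMap<>();

        for (String variable : baseline.keySet()) {
            Map<String, Double> scenario = new HashMap<>(baseline);
            scenario.put(variable, baseline.get(variable) * (1 + nudge));
            impact.put(variable, Math.abs(model.applyAsDouble(scenario) - baselineOutcome));
        }
        return impact;  // biggest values = the variables the outcome is most sensitive to
    }

    public static void main(String[] args) {
        // Toy revenue model: customers who don't churn, times price.
        ToDoubleFunction<Map<String, Double>> model =
                v -> v.get("customers") * (1 - v.get("churn")) * v.get("price");
        Map<String, Double> baseline = Map.of("customers", 10_000.0, "price", 12.0, "churn", 0.05);
        analyse(model, baseline, 0.10).forEach((name, delta) -> System.out.println(name + ": " + delta));
    }
}
```

It tells you which lever moves the outcome most, but nothing about why that lever might move - which is the "local events, not causes" limitation above.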

Then there's emergent strategy, with folks on the shop floor making decisions. In practice this usually only allows for minor adjustments; anything major needs to go through planning, so it's flexible administration of rigid policy.

Three disabling assumptions:

  1. We predicate the configuration of our business on stable market conditions, as we can't respond to chaos;
  2. To change is to lose face;
  3. Change is difficult and expensive;

The result:

  1. All uncertainty must be treated as a risk;
  2. Change is only made in fits and starts as poor alignment between company configuration and market realities becomes unbearable;

Risk management isn't enough; the clash between technology and social change is accelerating: look at the holiday and music industries already. Organisations can't cope with sudden changes because they can't anticipate. They're reactive.

Action is preferred to inaction because management likes to be seen to act. Decisions are made before uncertainty is resolved. Changing strategies means losing face.

What's missing? We need to make our assumptions explicit: what people will pay, what the market is, etc. Financial ones tend to be more explicit than non-financial ones.

How can we model assumptions? Look at all of them and their knock-on effects. (Shows a very complicated model of technical, political, competitive and other assumptions.)

It's tough to model P&L, though you can model likely demand and likely costs of supply.

Use the model to reduce response times. You need to react before the outcome of a change in conditions occurs: the trigger point needs to be earlier. You need to plan systemic responses to predicted outcomes and assign or acquire enablers and resources for critical response capability - which is a cost.

What do we need to understand about markets? Complexity, competitors, substitutes and complements. Cumulative impact of remote events on local variables: several factors combining. Positive feedback loops and negative (damping) ones, abrupt changes caused by positive feedback.

Currently we work with a Bayesian network model: embed the dynamics of the system into a cause/effect structure. We have a pile of indicators for each variable: confidence, valid range, etc.

To plan:

  1. create systems to gather information about each variable
  2. establish a timebox, matching to the clock rate of market change
  3. automate data collection
  4. view current status through a dashboard
  5. review and prioritise

Deploy options according to trigger conditions.
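
As a rough illustration of how those pieces might hang together (my invention, not anything shown in the talk; every name below is hypothetical): each tracked assumption becomes an indicator with a valid range, the review runs once per timebox, and a breach triggers the pre-planned response.

```java
import java.util.List;
import java.util.function.DoubleSupplier;

// Hypothetical sketch of the "dashboard" idea: each tracked assumption has an
// indicator with a valid range; when fresh data pushes it outside that range,
// a pre-planned response is triggered.
public class AssumptionDashboard {

    record Indicator(String name, double validMin, double validMax,
                     DoubleSupplier currentValue, Runnable plannedResponse) {

        boolean breached() {
            double v = currentValue.getAsDouble();
            return v < validMin || v > validMax;
        }
    }

    private final List<Indicator> indicators;

    AssumptionDashboard(List<Indicator> indicators) {
        this.indicators = indicators;
    }

    // Run once per timebox, matched to the clock rate of market change.
    void review() {
        for (Indicator indicator : indicators) {
            if (indicator.breached()) {
                System.out.println("Assumption broken: " + indicator.name());
                indicator.plannedResponse().run();  // deploy the pre-planned option
            }
        }
    }

    public static void main(String[] args) {
        Indicator demand = new Indicator("monthly demand", 8_000, 15_000,
                () -> 7_200,  // would come from the automated data collection
                () -> System.out.println("Trigger: switch to the low-volume supply plan"));
        new AssumptionDashboard(List.of(demand)).review();
    }
}
```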

This implies redundancy of effort and assets, which doesn't fit well with minimising costs. An audience member points out that maximising utilisation sub-optimises throughput (which I think is the thesis behind Slack).

Is it all worth it? The value of systems is enhanced. A lot of this is about protecting old assets rather than creating new ones. This is an agile system: incremental, evolutionary, frequent delivery.

There's prejudice remaining: redundancy is seen as expensive... when we know agile is cheaper.


XP Day: Nat Pryce, TDD and asynchronous systems

December 11, 2008 | Comments

Case studies of three different systems, looking at how asynchrony affects system development and TDD.

Symptoms asynchrony can lead to in tests:

  1. Flickering tests: tests mostly succeed, but occasionally fail and you don't know why;
  2. False positives: tests run ahead of the system; you think you're testing something, but your tests aren't exercising the behaviour properly;
  3. Slow tests: thanks to the use of timeouts to detect failures;
  4. Messy tests: sleeps, looping polls, synchronisation nonsense.

Example: a system for administering loans. For regulatory reasons certain transactions had to be conducted by fax. An agent watches the system and posts events to a JMS queue; a consumer picks up the events and triggers a client to take actions.

Couldn't test many components automatically, had to do unit tests and manual QA. System uses multiple processes, loosely joined.

They built their own system: a framework for testing Swing GUIs, using probes sent into Swing, running within Swing threads, taking data out of GUI data structures and back onto test threads. The probes hide behind an API based on assertions.
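
For flavour, here's a minimal sketch of that pattern as I understand it; the interface and names are mine, not the framework's:

```java
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import java.util.concurrent.atomic.AtomicReference;

// Sketch only: a probe samples GUI state on the Swing event thread, and an
// assertion polls it from the test thread until it's satisfied or times out.
public class SwingProbeSketch {

    interface Probe {
        void sample();          // runs on the Swing thread, copies state out of the GUI
        boolean isSatisfied();  // checked back on the test thread
    }

    // Assertion-style entry point that hides the polling and thread-hopping.
    static void assertEventually(Probe probe, long timeoutMillis) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            SwingUtilities.invokeAndWait(probe::sample);  // hop onto the Swing thread to sample
            if (probe.isSatisfied()) return;
            Thread.sleep(50);
        }
        throw new AssertionError("probe not satisfied within " + timeoutMillis + "ms");
    }

    // Example probe: wait for a label to show some expected text.
    static Probe labelShowing(JLabel label, String expected) {
        AtomicReference<String> seen = new AtomicReference<>();
        return new Probe() {
            public void sample() { seen.set(label.getText()); }
            public boolean isSatisfied() { return expected.equals(seen.get()); }
        };
    }
}
```

A test then reads as a plain assertion, e.g. assertEventually(labelShowing(statusLabel, "Sent"), 1000), with all the synchronisation tucked away underneath.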

Second case study: a device receiving data from a GPS, doing something with this info, translating it into a semantically richer form and using it to fetch, for example, weather data from a web service.

System structured around an event message bus. Poke an event in, you expect to get an event out: lots of concurrency between producers and consumers.

Tested with a single process running the entire message bus (different from the deployed architecture); tests sent events onto the message bus, the testbed captured events in a buffer, and the test could make assertions based on these captured events. Web services were faked out. Again, all synchronisation was hidden behind an API of assertions, with timeouts to detect test failure.
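
My guess at the shape of that testbed (not the project's actual code; class and method names are invented): a capturing consumer sits on the in-process bus, buffers everything it sees, and tests assert against the buffer through a timeout-aware API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Predicate;

// Sketch: a capturing consumer buffers every event it sees on the in-process
// bus; tests assert against the buffer behind a timeout-aware API.
public class EventCapture<E> {

    private final BlockingQueue<E> captured = new LinkedBlockingQueue<>();

    // Registered with the message bus as an ordinary consumer.
    public void onEvent(E event) {
        captured.add(event);
    }

    // Assertion-style API: block until a matching event turns up or the timeout expires.
    public E assertEventReceived(Predicate<E> matching, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                throw new AssertionError("no matching event within " + timeoutMillis + "ms");
            }
            E event = captured.poll(remaining, TimeUnit.MILLISECONDS);
            if (event != null && matching.test(event)) {
                return event;
            }
            // non-matching events are simply skipped; a fuller testbed would keep the whole trace
        }
    }
}
```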

Third case study: a grid computing system for a bank. Instead of probing a Swing app, they used WebDriver to probe a web browser running out-of-process. Probes repeat, time out, etc. Slow tests only occur when failures happen, which should be rare. The assertion-based API hides these timeouts, and stops accidental race conditions caused by data being queried whilst it's being changed.
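
Again as a sketch of the pattern rather than their code (WebDriver and By are the real Selenium types; the helper itself is hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;

// Sketch: the probe re-queries the out-of-process browser on every poll, so
// assertions always see a freshly sampled copy of the page state.
public class BrowserProbeSketch {

    static void assertTextEventually(WebDriver driver, By locator, String expected,
                                     long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        String lastSeen = null;
        while (System.currentTimeMillis() < deadline) {
            try {
                lastSeen = driver.findElement(locator).getText();  // fresh sample each time round
                if (expected.equals(lastSeen)) return;
            } catch (NoSuchElementException ignored) {
                // element not rendered yet; keep polling
            }
            Thread.sleep(200);  // the timeout and polling live here, hidden from the test itself
        }
        throw new AssertionError("expected '" + expected + "' but last saw '" + lastSeen + "'");
    }
}
```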

Question: the fact that you use a DSL to hide the nasties of synchronisation doesn't help solve the symptoms in the first slide, does it?

Polling can miss updates in state changes. Event traces effectively let you log all events so you don't miss anything. Assertions need to be sure that they're testing the up-to-date state of the system. You need to check for state changes.

Question: what about tests when states don't change? Tests pass immediately.

You need to use an idiom to test that nothing happens.
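
One possible shape for that idiom, building on the capturing-consumer sketch above (my guess, not what was shown): either wait out a short quiet period and insist nothing arrives, or push a marker event through the same channel and assert it's the next thing out, which avoids betting on an arbitrary sleep.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Two guesses at the "nothing happened" idiom; neither is from the talk.
public class NothingHappenedSketch {

    private final BlockingQueue<Object> captured = new LinkedBlockingQueue<>();

    // Registered with the bus as an ordinary consumer, as in the capture sketch above.
    public void onEvent(Object event) {
        captured.add(event);
    }

    // Naive version: wait out a quiet period and insist the buffer stays empty.
    // Simple, but it always costs the full wait and can still miss late arrivals.
    public void assertNothingHappensFor(long quietMillis) throws InterruptedException {
        Object event = captured.poll(quietMillis, TimeUnit.MILLISECONDS);
        if (event != null) {
            throw new AssertionError("unexpected event: " + event);
        }
    }

    // Marker version: after the stimulus, push a known marker through the same
    // channel and assert it's the next thing out; anything arriving first is a failure.
    public void assertNothingBefore(Object marker, long timeoutMillis) throws InterruptedException {
        Object event = captured.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        if (event == null) {
            throw new AssertionError("marker never arrived");
        }
        if (!event.equals(marker)) {
            throw new AssertionError("unexpected event before marker: " + event);
        }
    }
}
```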

It's difficult to test timer-based behaviour ("do X in a second") reliably. Pull these parts out into third-party services: they pulled out the scheduler, tested it carefully, gave it a simple API, and developed a fake scheduler for tests. To test timer-based events you need to fake out the scheduler.
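
A minimal sketch of what such a fake scheduler might look like (invented here; only the idea comes from the talk): production code schedules work against the interface, and the test advances a virtual clock by hand, so "do X in a second" runs deterministically without any real waiting.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of a fake scheduler driven by a virtual clock; all names invented.
public class FakeSchedulerSketch {

    public interface Scheduler {
        void schedule(Runnable task, long delayMillis);
    }

    public static class FakeScheduler implements Scheduler {
        private record Scheduled(Runnable task, long dueAt) {}

        private final List<Scheduled> pending = new ArrayList<>();
        private long now = 0;

        @Override
        public void schedule(Runnable task, long delayMillis) {
            pending.add(new Scheduled(task, now + delayMillis));
        }

        // Called from the test: move virtual time forward and run anything now due.
        public void advanceBy(long millis) {
            now += millis;
            List<Scheduled> due = new ArrayList<>();
            Iterator<Scheduled> it = pending.iterator();
            while (it.hasNext()) {
                Scheduled s = it.next();
                if (s.dueAt() <= now) {
                    due.add(s);
                    it.remove();
                }
            }
            // run after collecting, so a task can safely schedule follow-up work
            for (Scheduled s : due) {
                s.task().run();
            }
        }
    }

    public static void main(String[] args) {
        FakeScheduler scheduler = new FakeScheduler();
        scheduler.schedule(() -> System.out.println("fax retry fired"), 1000);
        scheduler.advanceBy(999);  // nothing happens yet
        scheduler.advanceBy(1);    // the task fires now, with no real sleeping
    }
}
```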