Mobile and green
December 14, 2008
Completely unformed and probably original thought: but is there something innately more environmentally friendly about a medium which is forced to deal with the limited power provided by batteries, and is therefore efficient by design; and which is deliberately minimal in its use of bandwidth and networked resources?
Agile and Corporate Strategy
December 13, 2008
Is uncertainty really unmanageable?
All our plans are based on assumptions. Market changes break assumptions.
Strategic planning tries to plan for the long term: someone inspects the environment to model possible futures and construct scenarios. It's the assumptions that are critical, but they get forgotten as we travel along the lines set out by the main scenario.
Sensitivity analysis is another method: create a model, change key variables, and work out which variable affects outcomes the most. It's a good means of discarding some scenarios, but in general only looks at local events, rather than causes: "this set of customers stops buying", not why they've stopped.
Then there's emergent strategy, with folks on the shop floor making decisions. In practice this usually only allows for minor adjustments; anything major needs to go through planning, so it's flexible administration of rigid policy.
Three disabling assumptions:
- We predicate the configuration of our business on stable market conditions, as we can't respond to chaos;
- To change is to lose face;
- Change is difficult and expensive;
The result:
- All uncertainty must be treated as a risk;
- Change is only made in fits and starts as poor alignment between company configuration and market realities becomes unbearable;
Risk management isn't enough; the clash between technology and social change is accelerating: look at the holiday and music industries already. Organisations can't cope with sudden changes because they can't anticipate. They're reactive.
Action is preferred to inaction because management likes to be seen to act. Decisions are made before uncertainty is resolved. Changing strategies means losing face.
What's missing? We need to make our assumptions explicit: what people will pay, what the market is, etc. Financial ones tend to be more explicit than non-financial ones.
How can we model assumptions? Look at all of them and their knock-on effects. (Shows a very complicated model of technical, political, competitive, etc. assumptions.)
It's tough to model P&L, though you can model likely demand and likely costs of supply.
Use the model to reduce response times. You need to react before the outcome of a change in conditions occurs: the trigger point needs to be earlier. You need to plan systemic responses to predicted outcomes and assign or acquire enablers and resources for critical response capability - which is a cost.
What do we need to understand about markets? Complexity, competitors, substitutes and complements. Cumulative impact of remote events on local variables: several factors combining. Positive feedback loops and negative (damping) ones, abrupt changes caused by positive feedback.
Currently we work with a Bayesian network model: embed the dynamics of the system into a cause/effect structure. We have a pile of indicators for each variable: confidence, valid range, etc.
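As a rough illustration of the kind of cause/effect structure being described, here's a minimal sketch in Python; the variable names, probabilities and indicators are all invented, and a real model would use a proper Bayesian network library:

```python
# A toy cause/effect model in the spirit of the Bayesian-network approach
# described above. All variables, probabilities and indicator values are
# invented for illustration.

# Each assumption/variable has a list of parent variables and a conditional
# probability table keyed by the parents' True/False states.
network = {
    "competitor_price_cut": {"parents": [], "cpt": {(): 0.2}},
    "customer_churn":       {"parents": ["competitor_price_cut"],
                             "cpt": {(True,): 0.6, (False,): 0.1}},
    "revenue_shortfall":    {"parents": ["customer_churn"],
                             "cpt": {(True,): 0.7, (False,): 0.05}},
}

# Per-variable indicators, as mentioned above: confidence, valid range, etc.
indicators = {
    "customer_churn": {"confidence": 0.7, "valid_range": (0.0, 0.3),
                       "data_source": "monthly churn report"},
}

def probability(var, evidence):
    """Probability that `var` is True, given True/False evidence for its parents."""
    node = network[var]
    key = tuple(evidence[parent] for parent in node["parents"])
    return node["cpt"][key]

# Example: if we observe a competitor price cut, how likely is churn?
print(probability("customer_churn", {"competitor_price_cut": True}))  # 0.6
```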
To plan:
- create systems to gather information about each variable
- establish a timebox, matched to the clock rate of market change
- automate data collection
- view current status through a dashboard
- review and prioritise
Deploy options according to trigger conditions.
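A sketch of what checking those trigger conditions each timebox might look like, continuing the invented variables above (thresholds and responses are hypothetical):

```python
# Hypothetical trigger conditions: if an observed value leaves its valid range,
# flag the associated response option for review on the dashboard.
observations = {"customer_churn": 0.35}   # latest automated data collection

trigger_conditions = {
    "customer_churn": {"valid_range": (0.0, 0.3),
                       "response": "activate retention campaign option"},
}

def check_triggers(observations, trigger_conditions):
    """Return the responses whose trigger conditions have fired."""
    fired = []
    for var, value in observations.items():
        cond = trigger_conditions.get(var)
        if cond is None:
            continue
        low, high = cond["valid_range"]
        if not (low <= value <= high):
            fired.append((var, cond["response"]))
    return fired

# Would be run every timebox and surfaced on the dashboard.
print(check_triggers(observations, trigger_conditions))
```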
This implies redundancy of effort and assets, which doesn't fit well with minimising costs. Audience member points out that maximising utilisation suboptimises for throughput (which I think is the thesis behind Slack).
Is it all worth it? The value of systems is enhanced. A lot of this is about protecting old assets rather than creating new ones. This is an agile system: incremental, evolutionary, frequent delivery.
There's prejudice remaining: redundancy is seen as expensive... when we know agile is cheaper.
XP Day: Nat Pryce, TDD and asynchronous systems
December 11, 2008
Case studies of 3 different systems, dealing with asynchrony in system development and TDD.
Symptoms this can lead to in tests:
- Flickering tests: tests mostly succeed, but occasionally fail and you don't know why;
- False positive: tests run ahead of the system, you think you're testing something but your tests aren't exercising behaviour properly;
- Slow tests: thanks to use of timeouts to detect failures;
- Messy tests: sleeps, looping polls, synchronisation nonsense;
Example: system for administering loans. For regulatory reasons certain transactions had to be conducted by fax. Agent watches system, posts events to a JMS queue, consumer picks up events, triggers client to take actions.
Couldn't test many components automatically, had to do unit tests and manual QA. System uses multiple processes, loosely joined.
They built their own system: a framework for testing the Swing GUIs, using probes sent into Swing, running within Swing threads, taking data out of GUI data structures and back onto test threads. Probes hide behind an API based on assertions.
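The general shape of that assertion API can be sketched like this in Python (this is not the actual framework, which ran its probes inside the Swing event thread; names and timings are illustrative):

```python
import time

def assert_eventually(probe, timeout=5.0, poll_interval=0.1):
    """Poll `probe` (a zero-argument callable returning True/False) until it
    is satisfied, or fail after `timeout` seconds. The synchronisation lives
    here, not in the individual tests."""
    deadline = time.time() + timeout
    while True:
        if probe():
            return
        if time.time() > deadline:
            raise AssertionError("probe not satisfied within %.1fs" % timeout)
        time.sleep(poll_interval)

# Hypothetical usage in a test:
# assert_eventually(lambda: loan_screen.status_label_text() == "Fax sent")
```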
Second case study: device receiving data from a GPS, doing something with this info, translating it into a semantically richer form and using it to get, e.g. weather data from a web service.
System structured around an event message bus. Poke an event in, you expect to get an event out: lots of concurrency between producers and consumers.
Tested with a single process running entire message bus (different from deployed architecture); tests sent events onto message bus, the testbed captured events in a buffer and the test could make assertions based on these captured events. Web services were faked out. Again, all synchronisation was hidden behind an API of assertions with timeouts to detect test failure.
Third case study: grid computing system for a bank. Instead of probing a Swing app, they used WebDriver to probe a web browser running out-of-process. Probes repeat, time out, etc. Slow tests only occur when failures happen, which should be rare. An assertion-based API hides these timeouts, and stops accidental race conditions caused by data being queried whilst it's being changed.
Question: the fact that you use a DSL to hide the nasties of synchronisation doesn't help solve the symptoms in the first slide, does it?
Polling can miss intermediate state changes. Event traces effectively let you log all events so you don't miss anything. Assertions need to be sure that they're testing the up-to-date state of the system. You need to check for state changes.
Question: what about tests when states don't change? Tests pass immediately.
You need to use an idiom to test that nothing happens.
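A sketch of both idioms: an event trace that records everything so assertions can't miss a state change, plus a way of asserting that nothing happens within a window. All names are illustrative, not from the talk:

```python
import queue
import time

class EventTrace:
    """Records every event published by the system under test, so assertions
    work on the full history rather than polling a snapshot of state."""

    def __init__(self):
        self._events = queue.Queue()

    def append(self, event):
        # Wired up as a listener on the event bus.
        self._events.put(event)

    def assert_event(self, matches, timeout=5.0):
        """Wait for an event satisfying `matches`, or fail after `timeout`.
        Non-matching events are simply skipped in this sketch."""
        deadline = time.time() + timeout
        while True:
            remaining = deadline - time.time()
            if remaining <= 0:
                raise AssertionError("expected event never arrived")
            try:
                event = self._events.get(timeout=remaining)
            except queue.Empty:
                raise AssertionError("expected event never arrived")
            if matches(event):
                return event

    def assert_no_event(self, matches, within=1.0):
        """The "nothing happens" idiom: fail if a matching event shows up."""
        deadline = time.time() + within
        while True:
            remaining = deadline - time.time()
            if remaining <= 0:
                return                      # window elapsed, nothing matched
            try:
                event = self._events.get(timeout=remaining)
            except queue.Empty:
                return
            if matches(event):
                raise AssertionError("unexpected event: %r" % (event,))
```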
It's difficult to test timer-based behaviour ("do X in a second") reliably. Pull these parts out into their own services: they pulled out the scheduler, tested it carefully, gave it a simple API, and developed a fake scheduler for tests. To test timer-based events you need to fake out the scheduler.
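For example, a deterministic fake scheduler might look something like this (a sketch under my own naming, not the API from the talk):

```python
class FakeScheduler:
    """Deterministic stand-in for a real scheduler: tests advance the clock
    explicitly instead of sleeping, so timer-based behaviour runs instantly
    and repeatably."""

    def __init__(self):
        self.now = 0.0
        self._tasks = []                    # list of (due_time, callback)

    def schedule(self, delay, callback):
        """Ask for `callback` to run `delay` seconds from 'now'."""
        self._tasks.append((self.now + delay, callback))

    def advance(self, seconds):
        """Move the clock forward and run anything that has come due."""
        self.now += seconds
        due = [t for t in self._tasks if t[0] <= self.now]
        self._tasks = [t for t in self._tasks if t[0] > self.now]
        for _, callback in sorted(due, key=lambda t: t[0]):
            callback()

# In a test: scheduler.schedule(1.0, send_reminder); scheduler.advance(1.0);
# then assert the reminder was sent, with no real waiting involved.
```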
XP Day: Coaching self-organising teams, Joseph Pelrine
December 11, 2008
AKA "zenzen wakari-masen"
AKA "how to be a manipulative bastard without anyone knowing"
But first... slime mold. Japanese scientists recently got one to solve a small maze, through cells self-organising. They communicate by secreting pheromones.
But we are not slime mold!
We don't think rationally, unless we're autistic. Our subconscious follows a first-fit (not best-fit) pattern matching algorithm based on past experience, which the conscious mind rationalises according to the dominant discourse. Our ancestors saw danger and ran; to do this fast, evolution optimised to bypass the conscious mind.
What is self-organisation? Amongst primates, it's the fight for alpha-male dominance: probably not the kind of self-organisation we want unless you're the alpha. What are the models to understand how teams work?
(Discussion exercise around how far we let a team self-organise)
There are 2 general directions to the questions Joseph placed on the board: type X personality (believes "most people are lazy, leave them alone and they'll do nothing"), and type Y (believes "working can be like learning, and fun, people left to their own devices will achieve great things").
One issue in getting a team to self-organise is letting them do this. One direction to take is making small changes and seeing how they go.
RUP: "the sound a project makes when it crashes against a wall".
RUP has a built-in mechanism: if it goes wrong, you got the process wrong. There's a similar problem with self-organisation: an organisation that doesn't work right is the fault of bad management. There is a theory: what if the organisation is dysfunctional but still doing what it needs to do? Most companies say "the customer comes first"; rubbish, the CEO does in practice!
The theory is that the main purpose of any organisation is to provide for the needs and desires of a group of people in that organisation. Look at AIG: got government bailout then sent its managers on a half-million dollar retreat. We need to get support at a high level when bringing agile in.
I propose to take this idea a step further: if you have a dysfunctional team, what would they be if they were doing exactly what they should be doing? Kurt Lewin, the father of social psychology, theorises that
B = f(P,E)
i.e. behaviour is a function of a person and their environment. He tried to explain psychological interactions in terms of the mathematics of topology. We talk about self-organisation, but who or what is the self? The self-organisation of a system emerges from the interactions between its agents.
For example, at Christmas our 25y.o. son is visiting. My wife treats him like a child, but tries not to. The system has defined a set of roles that need to be played; we will gravitate to acting out these roles.
In a team, we play roles. Remove the pessimist from a team, and someone else takes up the role. It goes further: there are ghost roles. Ever get the feeling in a meeting that people are being careful about what they say, that there's someone else in the room?
These roles get set up by a self-organisational process. How do we get to a point where the fight for alpha-maleness doesn't dominate this process?
The answer lies in chicken soup: made from all sorts of ingredients. You need more than ingredients though, you need heat. For most people who aren't chefs, heat is boolean. A good cook learns to play with heat. To make people a good team, you need to do the same, to effect a change from outside - without working directly with them.
Consider these stages in the cooking analogy:
- Burning: feel threatened, panic, burn out.
- Cooking: the target level, where heat is high enough that ingredients blend but retain individual identities. You can still taste the carrot in good chicken soup.
- Stagnating: what was once soup is now a substrate for bacterial growth. In a team, this is where discipline stops: documentation isn't being written, meetings aren't being attended, etc.
- Congealing: where norms are established ("this is how we do it around here") and change is difficult.
- Solidifying: where change isn't possible.
The trick with self-organisation: determining where the team is now, what models can we use for them, what can we do with these?
So, looking at models, the equation for gas in a confined space:
PV = mRT
- P: pressure
- V: volume
- m: molecules of gas in the enclosed area
- R: a constant, forget it
- T: temperature
Compress the gas and the temperature rises. Increase the amount of gas and the same happens. This gives us two easily understood variables: the size of our timebox and the number of tasks we have to do. With a large timebox and 1 task, there's not much stress; with 30-40 tasks there's more stress. Similarly, if the number of tasks is fixed and the size of the timebox is increased, people get more relaxed. Parkinson's law: tasks grow to fill the time available.
The temperature is also affected by the number of people in your team: it's easier to stress out 1 person than a team of 20.
Teams will naturally cool down and stagnate. There's a need for constant "heating" (coaching). Capacity-based planning is great, but even if you only commit to a subset of overall work in a given sprint, there's more heat: the team are aware there's more to do.
Creative use of conflict can create heat. In a football team you have 26 on the roster and 11 on the field. Only those on the field get lots of the benefits of being on the team: coaches know this and use this to motivate.
(Exercise on how we view conflict)
Another model comes from Mihaly Csikszentmihalyi, the psychologist who coined the term "flow": being in a state where you're at one with yourself and your work. Every person has a set of skills, and meets challenges. When skills and challenges are in balance, you're in flow. It's not an absolute model: wherever you are in skill, there's a level of challenge appropriate relative to you. If challenges are more difficult than skills, you feel anxious. If they're too easy, you get bored.
Challenge: your developers find the daily stand-ups boring. To motivate them, raise the challenge level.
The passive version of this is the Peter Principle: people are promoted beyond their level of competence.
(part 2)
Not all individuals are at the same point. We work on plotting activities and people on the skills/challenge matrix. Joseph theorises that people need to move around across the matrix.
Shows the classic Gorilla/basketball video.
Types of power: reward, coercive, legitimate, expert, referent.
Lots of complexity here, we're playing with many values. So we can't predict the outcome of changes. We need to implement small changes and make decisions based on empirical data. Probe, sense, respond. Inspect and adapt. We can't have fail-safe... we need safe-fail!
Matt Wynne & Rob Bowley, Evolving from Scrum to LEAN
December 11, 2008
Worked together at BBC Worldwide. Took a well-functioning Scrum team and improved it.
You need an enlightened organisation to do this: "it is possible to divide the work into small value-adding increments that can be independently scheduled" and an enlightened team where "it is possible to develop any value-added increment in a continuous flow from requirement to deployment".
Colocated team of 12, 2 year project.
Scrum was a success story for them. They were the first team in BBC Worldwide to do agile; now there isn't a team there that doesn't do some sort of agile.
One key concept from lean is "eliminating waste". Where is time, money, or resource going needlessly? How do you optimise? They found that spending an afternoon estimating the lengths of 1-4h tasks was intensive and tiring, and by week 2 of the sprint their understanding of the tasks had frequently changed, so the estimates weren't reliable.
One lean concept: "value stream mapping".
They realised their definition of done was wrong. Things were going into testing but not getting tested in-sprint. Velocity had been increasing, then suddenly flattened as they had to fix all the bugs. They realised they were missing a better visual representation of the flow, so they added stages to their process.
One morning they decided to stop iterating, abandon task cards and burndowns. They started using a kanban board, with tokens representing single user stories. Team process is represented as a series of stages, from concept through to production code: customer desires, analysis, development, testing, awaiting deployment, deployed. Large numbers of tokens in any one stage show where blockages lie.
To replace the burndown, the team recorded daily where stories were in this process, on a colour chart. This gives a clear idea of where bottlenecks are in the process: e.g. system testing was at one point overloading the poor tester, and seeing too much work-in-progress visually told them they needed to limit the amount of WIP.
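A tiny sketch of the sort of record that chart is built from: a daily count of stories per stage, with a flag when a stage exceeds its WIP limit (stage names, counts and limits are made up):

```python
# Daily snapshots of how many stories sit in each stage of the board.
stages = ["analysis", "development", "testing", "awaiting deployment", "deployed"]
wip_limits = {"development": 4, "testing": 3}      # hypothetical limits

history = [
    {"analysis": 5, "development": 3, "testing": 2, "awaiting deployment": 1, "deployed": 4},
    {"analysis": 4, "development": 4, "testing": 5, "awaiting deployment": 1, "deployed": 5},
]

def bottlenecks(snapshot, wip_limits):
    """Stages where work-in-progress has exceeded its limit."""
    return [stage for stage, limit in wip_limits.items()
            if snapshot.get(stage, 0) > limit]

for day, snapshot in enumerate(history, start=1):
    print(day, bottlenecks(snapshot, wip_limits))   # day 2 flags "testing"
```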
Another lean concept: "stop the line".
The whole team could see the state of the project and cared about it. Continuous integration is a classic example of this: the build breaks and it's fixed immediately.
The after-effects: the team became more disciplined. They'd record checklists of things to keep an eye on from retrospectives. They became more flexible; there's an overhead to working in iterations when it comes to preparing for work. They weren't able to stop sprints when requirements changed previously - is anyone? The product owner was more empowered: she could rearrange priorities at will.
The downside: we missed the rhythm of the iteration and the way it rallied everyone around a set of goals, and we became disconnected from one another. At Songkick (Matt's new job) they're seeing rhythm elsewhere: in show and tells, in the rate work comes off the production line, etc.
Conclusions: smaller is better, flow beats batch. It takes time to change; it was only in the last 3 months of the project that we reached this level of efficiency. Don't try and do everything at once. Lean/Kanban is not a new religion; be as reflective as possible in your team and organisations.
Question: without estimation, how did you give meaningful assurances of when the overall project would be done?
Answer: the chart gave us velocity for individual stages of stories, so we could extrapolate from that. They stopped estimating tasks, but kept estimating stories.
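That extrapolation can be as simple as this (a sketch with invented numbers, assuming throughput stays roughly steady):

```python
# If stories leave the "deployed" column at a steady rate, the remaining
# backlog gives a rough forecast without estimating individual tasks.
remaining_stories = 30
throughput_per_week = 5        # observed from the colour chart

weeks_to_finish = remaining_stories / throughput_per_week
print("roughly %.0f weeks of work left" % weeks_to_finish)   # roughly 6 weeks
```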
Question: how would this apply to a product development environment where you want features batched into a release?
Answer: OSS projects release an edge version every day and a stable release less often: it's the same here.