Roles and Responsibilities: BA vs PO vs SM

The Scrum Guide has this to say about the roles:

The Product Owner

The Product Owner is responsible for maximising the value of the product and the work of the Development Team. How this is done may vary widely across organisations, Scrum Teams, and individuals.

The Product Owner is the sole person responsible for managing the Product Backlog.

The Scrum Master

The Scrum Master is responsible for ensuring Scrum is understood and enacted. Scrum Masters do this by ensuring that the Scrum Team adheres to Scrum theory, practices, and rules.

The Business Analyst

Not actually mentioned…



The Business Analyst role seems a very common (and potentially very useful) addition to the team within Scrum. Their key role is to facilitate understanding between the Product Owner/Business and the Development Team.

The Business Analyst should hold the whole-system view that complements the single-story view the developer/tester has, and so can answer their queries and fill in the details for them.

We heard about a team where they were actively working to embed the BAs within the Scrum Team. As part of this journey the BAs were demoing stories at the Sprint Review!

In some instances the BA can proxy for the Product Owner, for example representing the PO in the Scrum ceremonies.

This really does depend on a significant level of trust from the Product Owner, and it was not expected that the BA would sign anything off: sign-off of stories remains the sole responsibility of the Product Owner as the single wringable neck.

We could also say that, in effect, the Business Analyst can be the person responsible for the mechanics of the backlog (logging stories, ensuring they are sprint ready, etc.) while the Product Owner is responsible for the underlying content and priorities.


One common concern/problem is where the Product Owner and/or Business Analyst and/or Stakeholders are not fully engaged with the Scrum team. They tend to have a day job, or are looking to the next project. For the Agile process to work we need someone who is empowered to make a decision (for example, whether a feature is Done), and we need some representative of the Business to at the very least engage with the ceremonies in Scrum, so that problems and progress can be communicated in a timely fashion.

This lack of engagement can lead to delays in the delivery of the product: if a story isn’t signed off, we have to bring it forward into the next Sprint, which impacts average velocity and has further implications for any planning and delivery estimates. All too often this issue is never factored into why a project is apparently behind schedule, and it’s the Scrum Team who are blamed.

Do we let the project fail where we don’t have this engagement?

No Gods No Masters

One interesting case was of a team who had “lost” their Product Owner. This raised the question: how did they determine what to work on?

The team received requests from the business and had an overarching road map of what projects were needed, with a simple instruction: just do it. The team then determined the stories, the priority, and when they expected to deliver. The Scrum Master controlled the Backlog, adding stories and tracking the work. Products were being delivered and the Team seemed happy working this way.

The team had absorbed the Product Owner role and had enough subject matter knowledge to create viable stories.

Crazy? Well if it works…



How do you do long term planning when your organisation needs deadlines and lead times?

The first conversation for the 6th Practical Agile meeting considered Agile, Long Term Planning and Deadlines.

One common problem when planning seems to be that the deadline is often set from on high before a clear set of requirements are available.

There was a feeling that this is a spill-over from the Waterfall world, where a set specification has a set end date (which, it was mentioned, was often missed).
Cart Before the Horse

The group felt that T-Shirt Sizing of a project (S, M, XL) was helpful in giving a finger-in-the-air idea of how long a project might last, but only once the project can be broken down into individual stories is a more realistic estimate of the time required possible.

Once a backlog is in place, prioritised and roughly estimated, and if the team has an existing velocity, we are in a much better position to estimate (and by Estimate we mean Forecast rather than “Fixed Price Estimate”) what work is likely to be complete by any point in time. However, a warning story came out that Estimates have been taken as Definite Dates, to the point that Marketing and Sales Teams started to use an Estimate as the basis for selling the product!
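The arithmetic behind such a forecast is a single division; a minimal sketch (the function name and numbers are illustrative, not from the discussion):

```python
import math

def forecast_sprints(backlog_points, velocity):
    """Forecast (not promise) how many whole sprints the remaining backlog needs."""
    return math.ceil(backlog_points / velocity)

# A 200-point backlog at an average velocity of 35 points per sprint:
print(forecast_sprints(200, 35))  # 6 - a snapshot, to be refined every sprint
```

The point of the `ceil` is that a partially filled final sprint is still a sprint on the calendar; the forecast only stays honest if the velocity fed in is the team's measured one.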

There is a need to trust the teams to be honest about what they feel the effort is (based on what they understand about a story at that time), so that using the velocity they can forecast what is likely to be completed by a given date. However, this can only be a Snapshot. As the Backlog is continuously refined and new features are potentially added to the project, the customer needs to be aware that what will be delivered will change. A Product Owner cannot add a new feature to the backlog and expect no impact; indeed, it was suggested the team should emphasise an almost one-in, one-out rule: for each new story brought in before the project deadline, one story should be highlighted as dropping out. Which one, of course, is up to the Product Owner. In any case the team, via the ScrumMaster, needs to make sure that these changes are reflected in the predicted delivery and that the revisions are communicated back to the Product Owner and stakeholders.

Story points are a key tool in any planning activity. The ideal is a fully prioritised backlog of well-constructed stories with acceptance criteria, whose effort can then be estimated using story points. We can use the established velocity of the team to forecast when stories are likely to be delivered, and this can help drive the prioritisation of the backlog, as the Product Owner can more easily see what value is likely to be delivered when. We can produce a burn up to illustrate a growth in effort and so highlight growth of a project’s scope to the Product Owner, in order to manage expectations and, where necessary, drive re-prioritisation and the understanding that if a new story will be worked on, other stories cannot be.

[Figure: Burn Up of a Project’s Scope]
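As a rough sketch of the signal such a burn-up surfaces, tracking total estimated scope per sprint is enough (illustrative numbers, hypothetical helper):

```python
def scope_growth(total_scope_per_sprint):
    """Percentage growth of total estimated scope since sprint 0 -
    the creep a burn-up chart makes visible to the Product Owner."""
    start = total_scope_per_sprint[0]
    return [round((s - start) / start * 100, 1) for s in total_scope_per_sprint]

print(scope_growth([200, 200, 215, 230, 240]))
# [0.0, 0.0, 7.5, 15.0, 20.0] - scope has grown 20% since the project began
```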

One trap that must be avoided (especially where the team has no established velocity) is to invent a velocity that ensures the estimated backlog will be completed by a specific deadline, i.e. we have 5 sprints’ worth of time, the backlog is estimated at 200 points, therefore the team’s velocity gets set to 40. The team must be trusted to honestly come up with their own estimate of velocity based on what they feel is achievable in a sprint. Any forecast can (and should) be challenged, but the final call must rest with the people who will be doing the work.
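Worked through with numbers, the two directions of that division tell very different stories (a sketch; the measured velocity is an assumption for illustration):

```python
import math

backlog_points = 200
sprints_available = 5

# Anti-pattern: derive a "velocity" from the deadline - a wish, not a measurement
imposed_velocity = backlog_points / sprints_available    # 40.0 points per sprint, by decree

# Honest approach: start from the team's own measured velocity and let the date move
measured_velocity = 32                                   # assumed for illustration
forecast = math.ceil(backlog_points / measured_velocity)
print(imposed_velocity, forecast)  # 40.0 7 - seven sprints needed, not five
```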

One common factor everyone agreed on was that it is always best to have a single, full-time Product Owner who is ideally embedded with the team. It was also felt that far too often it was the Development Team who were blamed when a project didn’t hit the deadline.

On the other hand, it was felt that developers (in particular) have a tendency to over-estimate (although there were examples where they under-estimate). It must be recognised that estimation is something people are bad at, and though estimates will hopefully improve with time, they are just a forecast and not a promise. This emphasises the need for continual refinement of the backlog rather than just accepting any original estimate: with knowledge developed from experience of working on a particular project, stories may require less effort (or more). Again, the team needs to communicate clearly any impact of this refinement on what can be delivered.

For longer-term planning, Road Maps (an outline of what upcoming projects are due to be started) were considered very useful, not just in communicating the plan to the wider audience but also in enabling the team(s) to start thinking about upcoming projects.

Release Planning was a very popular idea, even where it wasn’t being applied. It was felt that this would help communicate what features were to be delivered when for a project and could help with planning sales and training needs. Iterative Releases where only small changes were made was felt to be preferable to Big Bang Releases where much fuller training would be needed but there is often push back from the business who initially want to see everything released together. Feature switching was felt to be able to help find the best compromise between these two release strategies.

In summary: it felt from the discussion that Agile can provide useful forecasts as to what work is likely to have been completed by a deadline, but this must be subject to continuous review. The team can provide this forecast to enable the Product Owner (and so the business) to continuously evaluate what features will likely be available at any point, and so judge the value of any further development. What Agile cannot do is make the time and effort needed to complete a project arbitrarily match a deadline.

Reporting to management – what metrics do you use?

Due to the fact it was holiday season we had a small but enthusiastic group and the conversation took many interesting twists and turns.

Why Report?

The Product Owner needs to be seen to be delivering value for money. Agile is not a blank cheque and we need to justify what and why we are delivering. It is necessary to measure progress and enable early identification of issues.


Velocity seemed to be a popular metric (Story Points per Sprint). However, it can be problematic in that initially it has to be an estimate, and Project Managers can get rather twitchy. Once the Team has a few Sprints under their belt and an average velocity is available, it becomes much more useful, both to the team for Sprint Planning and to the Product Owner in giving a view of when the backlog could potentially be completed. However, it’s still just an estimate and will be constantly refined.
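A minimal sketch of the metric (window size and sprint figures are assumed for illustration):

```python
def average_velocity(points_per_sprint, window=3):
    """Average of the last `window` completed sprints -
    steadier than any single sprint's figure."""
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

print(average_velocity([28, 35, 31, 40, 33]))  # (31 + 40 + 33) / 3, about 34.7
```

A rolling window keeps the figure responsive to recent change without letting one unusually good or bad sprint dominate planning.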

Reclaim the Estimate

The group put forward that the word “Estimate” tends to be regarded in the same way as a “Fixed Price Estimate”, i.e. the price you are expecting to pay. It was suggested a better term might be “Forecast”.

How Much to Estimate?

As an interesting aside, CA Technologies (formerly Rally) recently produced a survey showing that in terms of performance “Lightweight Scrum” (Story Points only) was best, followed by “Full Scrum” (Story Points and Task Hours), then “No Estimates”, and finally “Hour-Oriented” (Task Hours only).
The survey is available here:

(See also #NoEstimates)

An interesting alternative to estimating tasks down to hours is to estimate them in half days: it’s easier to think of finishing something this morning, or by the end of the day, than of having x hours left to complete.

What to Report

In the Agile Manifesto we value “Working software over comprehensive documentation”, and this can be used to try and fob off requests for documentation (and other metrics). We discussed that the Agile Manifesto also acknowledges that “there is value in the items on the right”. So we need to ensure we have the Appropriate Level of Documentation, and this needs to be determined up front so the Dev Team can account for it.

Any reporting needs to have value, in that it is useful for the audience: if there is a need for Gantt Charts, we need to deliver them.

The group briefly touched on who should provide the metrics. The ScrumMaster seemed the obvious supplier, but it was put forward that the ScrumMaster’s is a coaching and facilitating role, and that it should instead be the Team delivering the metrics to the Product Owner, who can then create the documentation for their audience. In reality, it seemed that the ScrumMaster often provides any reports as needed.

And that was not all!

As well as the topic picked (we never managed to get to the second, as the group was in full flow) we also touched on:

Discovery Phase – It is Valid as you only start Sprinting when you are ready.

Multi Skilled Teams – A title is your specialisation not your job.

Project Managers – Is a better term Delivery Managers?

Pair Programming – when training the learner should be doing the typing – muscle memory.

All in all it was a very enjoyable and intense session and I’m looking forward to the next one on September the 6th!

What’s the point of Scrum if you don’t “ship” regularly?

As everyone who’s looked even superficially into Scrum knows, there is a focus on creating a “potentially shippable increment of functionality”… but many projects or teams rarely actually ship incrementally or iteratively. In those cases, it’s sometimes hard to persuade teams of the importance of getting work “done” inside the timeboxed sprint because there may be another 10 sprints coming up before anything actually gets shipped.

For example: you are approaching the last two or three days of a sprint, and one of the developers informs the team at the standup that they’ve finished all their work and want to pull another story into the sprint.

However, there’s no chance that this story can be developed *and* tested within the remaining days of the sprint (you don’t have the resources to do this), but they are a developer and they want to develop! They do not want to help out with testing, or help another developer finish their stories, and they don’t see why it’s so important that things get tested in the same sprint as they are developed. After all, this project isn’t going to ship until September, and all these stories still need to be developed.

Only when it comes towards the last couple of sprints of the project and the impending release to end users does the sharpness and focus really kick in, rather than having a constant focus and pace over the duration of the project.

This is a challenge: how do you, as a Scrum Master, persuade the team of the importance of driving work to a completed state within the timebox, when from their perspective it may not matter (there’s always another sprint)? It was observed that the rollover of work between sprints becomes habit-forming if the boundaries of each sprint are regularly breached.

In this session, we started off discussing this and unanimously agreed that releasing frequently to your customer is a very good thing and should be encouraged wherever possible. However, as discussed in the very first session in April, some businesses are simply not able to handle regular releases, for a range of factors.

A lot of the discussion centred around the importance of commitment: to the goals of the sprint, to the team, and to the product owner and customer. It was felt that without such commitment, any team is potentially just paying lip service to agility.

Two key messages came out of the discussions:

i)   Always ship in some way. Even if the organization can’t ship working software to the end user every sprint, look for another form of shipping: create a UAT environment, deploy the output of each sprint to that environment, and have acceptance testing take place each sprint. The act of shipping, even internally, will reinforce the habit of producing “complete” functionality.

ii)  Treat internal users and stakeholders with the same respect you would treat real customers. If the team takes internal stakeholders seriously then the focus on delivering to them will provide the required focus to get people to drive work to completion.




How do you Solve a Problem like Technical Debt?

Technical debt is a concept in programming that reflects the extra development work that arises when code that is easy to implement in the short run is used instead of applying the best overall solution

– techopedia

What is it?
The first issue addressed was what the group felt was meant by Technical Debt.

The first opinion was that it’s where the code is not fit for purpose, in that it sort of works but is Spaghetti Code: hacked together to solve the problem, with no underlying structure. There was a strong sentiment that it’s not just programming and code, but should also include a lack of adequate testing and documentation.

Some of the traits of Technical Debt identified are:

  •  NOT Properly Developed
  •  NOT Architecturally Sound
  •  NOT Tested
  •  NOT Documented

Some of the sources of Technical Debt we identified came from quick Hacks to get things to work, Prototypes that become the Product, and Legacy Code.

A very valid point raised was whether we should necessarily use the term “technical debt” as it was felt the negative connotations of the “debt” implied fault and profligacy on the part of the team.

Some Solutions…
Stories to handle Tech Debt
Agile with its Iterative approach should be a good method to tackle Technical Debt as we should be able to add stories to the backlog to address it.
However, it can be problematic to sell these technical stories if there is no obvious benefit to the Business. For example, if we spend a sprint tackling Technical Debt stories, any demonstration will appear no different from the previous iteration (to the outside observer). If no obvious progress is seen by the PO/Business, they will question the value of tackling Technical Debt: “It works, so what are you fixing?”

Prototype become the Product
A very interesting idea to prevent prototypes becoming the product was to keep the prototype ragged enough that it’s obviously not the finished article.

A Technical Debt Sprint
One proposal was to tackle Technical Debt as a Sprint at the end of the project. However there is potentially a lack of awareness that this does have a real Commercial Cost.

Or Not…
Just Incur the Debt
There was a 180 degree difference in what some participants felt about dealing with Technical Debt. One side saw Technical Debt as something that the team are happy to incur and let slide for the sake of expediency, and the other side that it’s something that the team are battling to get prioritized.

Selling it to the Business
It was felt that to a large extent that getting the opportunity to tackle Technical Debt is very much a Business Decision and it’s up to us to sell the need to address Technical Debt (see above for why it’s not obvious that we need to).
We need to be clear that putting off dealing with the Technical Debt will (not might) cause problems further down the road. In particular the discussion identified:

  •  Support Teams cost money – the worse the state of the code base, the greater the time needed to identify and correct issues.
  •  Any new development inherits the Technical Debt as a default overhead, which will slow down and so increase the cost of development.

This is going to happen. But these costs can appear very hypothetical to a Product Owner. What we need is a real cost: if we can say Technical Debt is adding an additional 30% (say) on top of our development costs, then that is a figure we can give to the Product Owner that they can relate to.
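Sketching how such a figure might be derived (all numbers assumed), one route is to compare the current velocity with the velocity the team managed before the debt built up:

```python
def debt_overhead_pct(clean_velocity, current_velocity):
    """Extra effort attributable to debt, as a percentage
    on top of the current cost of development."""
    return (clean_velocity - current_velocity) / current_velocity * 100

# The team once delivered 40 points per sprint; it now manages 30 on the same product
print(round(debt_overhead_pct(40, 30)))  # 33 - "debt is adding ~33% to our costs"
```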

How can we put a number on it?
Our Agile Metrics can prove very useful here. Two great examples were:
Decreasing Velocity.
Can we identify that the Technical Debt is (at least partially) responsible?

[Figure: declining velocity – why is the Velocity decreasing? Technical Debt?]

Comparative Size of Stories.
If we can see a trend that the average Story Estimates are increasing in size – is Technical Debt responsible?
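The trend is easy to surface from the backlog history; a sketch with illustrative estimates:

```python
def average_story_size(estimates):
    """Mean story-point estimate across a set of stories."""
    return sum(estimates) / len(estimates)

early  = average_story_size([3, 5, 3, 2, 5])  # 3.6 points per story
recent = average_story_size([5, 8, 5, 8, 8])  # 6.8 - similar work now costs more
print(early, recent)
```

A rising average for comparable stories is circumstantial rather than conclusive evidence, but it gives the conversation with the Product Owner a concrete starting point.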

In Conclusion
Technical Debt needs to be dealt with, or the team will find itself deeper and deeper in a spiral of fire-fighting leading to more Technical Debt. The Scrum team has to give the business options, explain (and lobby for) the impacts, and put the decision firmly in the hands of the customer (or Product Owner).

Outsourcing in an Agile World

Or… How to make this work without it destroying your teams

We had a very interesting discussion in the most recent Practical Agile Workshop around how to make software development in an agile framework still work whilst utilising outsourced resources.

This is a subject on which much has been written, and the initial temptation for a business is to assume that using outsourced resource is equivalent to having an onsite team, only with different time zones; after all, big companies like Microsoft do this, so why can’t we?

Well the general consensus was that there are two models that people think about when outsourcing is referred to:


Vendor Model

This is where you “buy” in resources to work on part of a project, either as part of a hybrid team consisting of resources from different geographic locations and different companies (so 2 developers from OutCo Ltd and a lead developer from PrimeCo Ltd, who are engaging OutCo for resource), or where the entire development team (Developers and Testers) is supplied by a different company (so the Development team is in Slovakia, but the ScrumMaster is in Newcastle and the QA testers are in Latvia, for example).

This is the ‘classic’ model and is usually done to provide resource or to reduce overall development costs.

Captive Service

This is where the company wanting the work done sets up or buys a company in a different geographical location (usually a different country) to do the development work.

This is usually done to get the benefits of a development team that is grown in your company’s culture, but it can also be used to add additional resource at a reduced cost.

The differences:

The major differences are the type of relationship and what that means to your project and, to a certain extent, what you can expect. The vendor model is just that: you are ‘buying’ a service and the vendor provides that service (in this instance, development resource), whereas with the captive service you are working with people who are part of the same company.

The issues:

So the major issue that came out of the discussion was communication. As everyone at the session was working in an agile or semi-agile way, there would be little point in reverting to a more waterfall approach of handing a very detailed specification to the outsourced team, who would then only develop what was in scope, with anything outside that scope having to go through a change process.

So to get around this the following things were mentioned as needed:

Communication Technology:

This may seem like a given, but it is absolutely vital. Agile is all about communication and the freedom for development teams to question and talk to the product owners in order to deliver the best possible solution in the time available.

Without an effective communication process, your projects will suffer and will potentially not deliver what was required, or deliver it much later. To help with this, you need effective communication and effective technology to support it.

Email is obviously one way, but it lacks the immediacy of a face-to-face conversation, particularly when you have a team in one country waiting on a product owner in another to enable development to continue. Email has a tendency to be ignored, or its priority can sometimes be overlooked. The general consensus was that the following methods were best:

Video Chat – This gives you the ability to communicate at a much more interactive level and greatly improves the understanding. Nonverbal communication is not quite the same as with real face-to-face but it is present to some extent.

Chat/IM/Phone – Also a good communication tool; however, you do miss out on the non-verbal side of communication, which is very important.

Email – Although this is a good way to get information to a person, there is a certain fire-and-forget approach to email that can delay a project as you wait for a response to that urgent question about acceptance criteria.

Access to Product Owners:

In the discussion, we were fortunate enough to have a team from Poland who were about to start work for PHG as part of their development team, as part of a captive service. They had experience of working in the vendor model and said that access to the product owner made or broke a development project. They found the best projects they worked on were those where the product owner would visit to discuss future work and future sprints face-to-face.

There is also a tendency to treat vendors as just a resource that does not need to know the whys behind any work item, or even the project itself. This creates a barrier to communication, so the recommendation was that Product Owners should ensure that the business reasons for work items and for the larger project are communicated to the whole team, both on-site and offshore, at the beginning of and during the project.

Add communication points to your process:

As well as the normal processes around agile communication, particularly in a Scrum team, there was a tendency on some projects for requirements to be misunderstood or misinterpreted by the offshore team.

This can lead to a toing and froing between the development team offshore and the team on site as bugs get raised against the code that really are a misinterpretation of the requirements, and this ultimately leads to longer development times.

To mitigate this, people have added an additional meeting (usually no more than 15 minutes) between the offsite person picking up a work item, a business analyst or product owner, and a tester, in which the BA/PO goes through the acceptance criteria with the offsite person to make sure that the requirement is really well understood before any code is written.

This also works with solution designs so before the code is written it is also a good idea to have a similar catch up between the offsite person and the software designer (if they are on site) to ensure that the technical requirements of the work are understood.

Although this seems like an additional overhead, these things would happen naturally in a fully on-site team; it is just a matter of ensuring that the work item has the best possible opportunity of being understood by all parties.


So fundamentally, the three key takeaway points are: if you are going to outsource any development work, you need to ensure that:

  1. You have good technology for communication and feedback
  2. You have a product owner who is engaged with the team and available for the external team to ask questions of and to get answers from
  3. You add additional communication points where needed, to reduce the tendency for work items to “bounce” back and forth between sites.

Many thanks for all those who attended this workshop and thank you for your suggestions. If you can think of any others or if we missed something, please add it to the comments below.


MVP – Miracle Cure or Myth?

The discussion centred upon whether “MVP” – Minimum Viable Product – is a concept that has been usefully applied in practice.

In theory this approach focuses on getting a subset of the full requirements of a project implemented and delivered early into the hands of users and then this is iterated upon and incrementally developed in response to feedback. The view from around the table was pretty gloomy in terms of real world examples.

While one team had a successful experience in providing a system fulfilling the requirements of a single customer and releasing before extending the system to other customers, most of the group had less happy stories:

Business Moves On…

In some situations the business has funded an MVP approach and the initial stage has been completed and deployed, but instead of this forming the basis to gain feedback and steer further development, the business has been champing at the bit to move on to other priorities and the original project never gets beyond the MVP.

While stopping a project after the MVP stage is a perfectly valid outcome if feedback dictates that there’s no value in delivering anything more, in some cases the decision had already been taken to stop the project regardless of what customer feedback indicated.


MVP = “Maximum Viable Product”

Possibly born out of the first scenario, where past experience has burnt people and they have a strong suspicion that anything not in the MVP will never get delivered (or perhaps will be delivered, but with subsequent phases going to the back of the queue), stakeholders push to get all their requirements into the first release. This is especially likely where there are fewer development teams than there are projects/stakeholder groups.

This develops into a self-fuelling situation where Project X’s MVP encompasses all possible requirements so that they get everything up front and don’t get stuck behind Project Y for a second phase with useful but lower priority requirements. Project Y in turn is expanded in exactly the same way because they are worried that Project X phase 2 will take a long time before resources are free for their own next phase.

This leads to teams working on features that are low priority/low value to the business but high priority for their stakeholders, rather than always focussing on delivering the highest value to the business.


“Get out of jail free”

Some members of the group described a situation where teams embarked on delivering a full-scale project and then, when things started to go off track, pulled out the “MVP Card” to reduce the scope of the project retrospectively. It was agreed that to undertake a project using the MVP approach, this has to be declared up front, not used to bail out a team who are struggling to deliver.


Horizontal scope vs Vertical scope

Another, similar example of abusing the MVP approach was given, where the term MVP was used as justification for cutting corners in the implementation of features, for example bypassing some types of testing. For an MVP approach to work, the features developed should still be of the same quality as a full release.



In the group’s experience, the MVP approach has suffered from manipulation by both business and development teams, and was widely seen as a great theory but one rarely put into practice effectively.