Agility, Empirical Process, and Falsification

Agility only makes sense in the light of uncertainty. If you know exactly what the customer wants or what the market needs, you don't have to be agile and can save yourself a lot of detours and dead ends. But who really knows? And even if we think we know it here and now, by the time the product is ready things may look very different. Agility makes a lot of sense in today's world, for which VUCA (Volatility, Uncertainty, Complexity, and Ambiguity) holds more and more true. "Responding to change over following a plan," the Agile Manifesto says. But how do you actually respond to change, and how do you recognize that you have taken a wrong turn?

That's why agility has a lot to do with empirical research. In an agile approach, the team constantly forms hypotheses and tries to confirm them as well as possible through measurements and feedback loops. In addition to this empirical process at the product level, agility is also empirical at the process level. "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly" is one of the principles behind the Agile Manifesto. In Scrum, for example, each sprint is a hypothesis about collaboration, which is confirmed or refuted in the retrospective at the end.

The term empirical is derived from the Greek word εμπειρία (empeiría), meaning experience or knowledge gained through experience. It refers to the methodical and systematic collection of data for the purpose of verifying or refuting theoretical assumptions about the world. Agility begins with honestly acknowledging the uncertainty of the venture and its environment. The logical consequence of this awareness of uncertainty is to work with hypotheses. Every prioritization and every sprint planning is a hypothesis about the assumed customer value. And good hypotheses have to prove themselves. That's why agile teams capture data about themselves and their productivity as well as about the product and its users.

An empirical-scientific system must be able to fail based on experience.
Karl Popper

In principle, most assumptions about products and customers can never be completely verified in the sense of general validity. In this respect, all our hypotheses are preliminary and merely not yet refuted. The better is the enemy of the good. The product team must focus on finding this better solution faster than the competition. And this better solution can only be found by working constantly on the falsification, not so much on the confirmation, of one's own previous hypotheses. That is why it is good practice for agile teams to make intensive use of A/B testing.
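A/B testing puts falsification into practice: the null hypothesis "variant B converts no better than variant A" is held up against real user data and, if the evidence is strong enough, refuted. Here is a minimal sketch of such an evaluation with a two-proportion z-test, using only the Python standard library; the function name and the conversion numbers are invented for illustration:

```python
import math

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: try to falsify the hypothesis that
    variant B performs no better than variant A."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # One-sided p-value via the standard normal CDF.
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical experiment: 5.0% vs. 6.5% conversion over 2,400 visitors each.
z, p = ab_test_z(120, 2400, 156, 2400)
```

A p-value below the chosen significance level refutes the "no improvement" hypothesis; a large p-value does not confirm anything, it only means the hypothesis has survived this particular attempt at falsification.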
