Scrum is an agile method (framework) that, when instantiated, can be rather ornate. Most developers I talk to tell me that, given the choice, they would not be doing Scrum. While Scrum may have felt much lighter than the competition back in the nineties, today it weighs in as rather heavy.
Given this, I wanted to reflect on why I still teach Scrum (and have a blog post to point any of my students to).
Last weekend, I ventured into uncharted territory (for me) and attended the Berliner Methodentreffen, a research conference mostly frequented by social scientists. I participated in a workshop on mixed methods, where the presenter discussed different models of mixing methods with each other (“Methodenpluralität” in German).
She omitted one model that I think is often used: first performing exploratory data analysis to detect some interesting phenomenon, and then doing explanatory qualitative research to formulate hypotheses as to why this phenomenon is, was, or keeps happening.
When I raised this, the temperature in the room dropped noticeably, and I was informed that such exploratory data analysis is unscientific and frowned upon. Confused, I inquired further why this is so, but did not get a clear answer.
From my excursion into qualitative research land (the aforementioned Berliner Methodentreffen) I took away some rather confusing impressions about the variety of what people consider science. I’m well aware of the different philosophies of science (from positivism to radical constructivism) and their impact on research methodology (from controlled experiments to action research, ethnographies, etc.). I did not expect, however, that people would be so divided about fundamental assumptions regarding what constitutes good science.
One of the initial surprises for me was learning that it is acceptable for a dissertation to apply only one method, and for that method to deliver only descriptive results (thereby not really making a contribution to theory). In computer science, it is difficult to publish pure theory development research (let alone purely descriptive results) without any attempt at theory validation, even if only a selective one. The limits of what can be done in 3-5 Ph.D. student years are clear, but this shouldn’t lead anyone to lower expectations.
In our research, we often work with industry. In software engineering research, this is a no-brainer: industry is where the research data is. That’s why we go there. For many research questions, we cannot adequately recreate, in a laboratory setting, a situation that lets us do our research.
Once a researcher realizes this, they need to decide whether to charge the industry partner for the collaboration. Many researchers don’t, because sales is not exactly their strength. Many also shy away from asking for money because it adds another hurdle to overcome once an interested industry partner has been found.
I often discuss with my Ph.D. students how to structure their work and publish the results. There are many pitfalls. It gets more difficult if we bring in other professors, who may have different opinions on how to structure the work. Over time, I have found that there are really only two main approaches, though:
Separate the work into theory building and theory validation, and publish one or more papers on each topic
Merge the work on theory building and validation for one important aspect (hypothesis), and publish that
A researcher friend recently complained to me that her research paper had been rejected because the reviewers considered it “boring”. I recommended that she complain to the editor-in-chief of the journal, because in my book, “boring” is not an acceptable reason to reject a paper. (“Trivial” may be, but that is a different issue.)
The reasoning behind rejecting a paper as “boring” is as follows: research should be novel and provide new, perhaps even unintuitive, insights. Results that are not surprising (in the mind of the reviewer, at least) are not novel and therefore not worthy of publication.
Traditional science has a clear idea of how research is to progress, rationally speaking: first you build a theory, for example through observation, expert interviews, and the like, and then you generate hypotheses to test that theory. Eventually, some theories stand the test of time and come to be considered valid.
Sadly, much software engineering research today, even work published in top journals and conferences, skips the theory building process and jumps straight to hypothesis testing. Vicky the Viking, from the eponymous TV series of way back, comes to mind: out of the blue, some genius idea pops into the researcher’s mind. This idea forms the hypothesis to be tested in the research.
Researchers often use the term “qualitative research” to mean research without substantial empirical data, and “quantitative research” to mean research with substantial empirical data. That doesn’t make sense to me, as most “qualitative researchers” will quickly point out, because qualitative research utilizes as much data, in as structured a way, as it can. Anything else would not be research.
My primary goal in becoming a professor was to turn my (hoped-for excellent) research and teaching into startups. For that reason I created the Startupinformatik program and set up my teaching to support it. Sadly, over the years I’ve noticed that things seem to get harder, not easier. Specifically, “the system” (I’ll explain below) seems to view professors with mistrust rather than as the natural allies they should be when it comes to leading students to create a startup.