Last weekend, I ventured into uncharted territory (for me) and attended the Berliner Methodentreffen, a research conference mostly frequented by social scientists. I participated in a workshop on mixed methods, where the presenter discussed different models of mixing methods with each other (“Methodenpluralität”, or methodological pluralism, in German).
She omitted one model that I thought was often used: first, perform exploratory data analysis to detect some interesting phenomenon, and then conduct explanatory qualitative research to formulate hypotheses as to why this phenomenon is, was, or keeps happening.
When I asked about it, the temperature in the room dropped noticeably, and I was informed that such exploratory data analysis is unscientific and frowned upon. Confused, I inquired further why this was so, but did not get a clear answer.
Continue reading “Is Exploratory Data Analysis Bad?”
I just returned from the Berliner Methodentreffen. One of the initiatives most interesting to me is a new attempt at agreeing on and standardizing an open exchange format for qualitative data analysis projects between the different QDA tools. As of today, it is not possible to take your data from one vendor’s tool to another; you are locked in to one product. The Rotterdam Exchange Format Initiative (REFI) is trying to change that using the budding QDA-XML format.
There are three common reasons why such an exchange format (and hence the initiative) is important.
Continue reading “On the Importance of an Open Standard Exchange Format for QDA Projects”
Georg Grütter of Bosch recorded my keynote at the Inner Source Commons summit in Renningen, Germany, on May 16th, 2018, and put it on YouTube. Please watch it below (original video, local copy).
According to Georg, the video is licensed under CC BY-SA 3.0 (for the Bosch part) and I agree (for my part). Hence © 2018 Dirk Riehle, Robert Bosch GmbH (Georg Grütter and perhaps some other undetermined parties). The original title Georg gave the video recording is “Prof. Dr. Dirk Riehle on the ISC.S6 – Ten Years of InnerSource Case Studies And Our Conclusions”.
I often discuss with my Ph.D. students how to structure their work and publish the results. There are many pitfalls. It gets more difficult if we bring in other professors, who may have a different opinion on how to structure the work. Over time, I have found there are really only two main approaches, though:
- Separate work into theory building and theory validation, and publish one or more papers on each topic
- Merge work on theory building and validation for an important aspect (hypothesis) into one and publish that
Continue reading “How to Slice Your Research Work for Publication”
A researcher friend recently complained to me that her research paper had been rejected because the reviewers considered it “boring”. I recommended that she complain to the editor-in-chief of the journal, because in my book, “boring” is not an acceptable reason to reject a paper. (“Trivial” may be, but that is a different issue.)
The reasoning behind rejecting a paper because it is “boring” goes as follows: Research should be novel and provide new and perhaps even counterintuitive insights. Results that are not surprising (in the mind of the reviewer, at least) are not novel and therefore not worthy of publication.
Continue reading “Why “Boring” is no Reason for Rejection”
Actually, I just noticed it is the fourth time within the last two months, but tomorrow is the first time I’ll present our research on inner source in a public venue. If you are interested in ten years of case studies on how to use open source best practices within companies (called inner source), come see me at the Bitkom working group meeting on open source at Design Offices, Unter den Linden 26-30, 10117 Berlin.
Traditional science has a clear idea of how research is to progress, rationally speaking: First you build a theory, for example, by observation, expert interviews, and the like, and then you generate hypotheses to test that theory. Over time, some theories will stand the test of time and come to be considered valid.
Sadly, much software engineering research today, even work published in top journals and conferences, skips the theory-building process and jumps straight to hypothesis testing. Vicky the Viking, from the eponymous TV series of way back, comes to mind: out of the blue, some genius idea pops into the researcher’s mind. This idea forms the hypothesis to be tested in research.
Continue reading “How the Lack of Theory Building in Software Engineering Research is Hurting Us”
Abstract: The creation of domain models from qualitative input relies heavily on experience. An uncodified ad-hoc modeling process is still common and leads to poor documentation of the analysis. In this article we present a new method for domain analysis based on qualitative data analysis. The method helps identify inconsistencies, ensures a high degree of completeness, and inherently provides traceability from analysis results back to stakeholder input. These traces do not have to be documented after the fact, but rather emerge naturally as part of the analysis process. We evaluate our approach using four exploratory studies.
Keywords: Domain modeling, Domain model, Requirements engineering, Requirements elicitation, Qualitative data analysis
Reference: Kaufmann, A., & Riehle, D. (2017). The QDAcity-RE method for structural domain modeling using qualitative data analysis. Requirements Engineering, 1-18.
The paper is available as a PDF file.
I got asked the other day why there are only two research groups working on inner source worldwide. Inner source is the use of open source best practices within companies, and it is a hot topic among companies that want to go beyond agile. There has been varied research around the world over the past 15 years, but only two groups have been consistently working on this: Brian’s group at LERO and my research group at FAU.
Continue reading “Why are There Only Two Research Groups Working on Inner Source?”
Some of my colleagues like to talk about how research that involves programming is “hard”, while research that involves human subjects is “soft”. Similarly, some colleagues like to call exploratory (qualitative) research “soft” and confirmatory (quantitative) research “hard”. Soft and hard are often used as synonyms for easy and difficult, and that is plain wrong.
Pretty much any research worth its salt is difficult in some way, and working with human subjects makes it even more difficult. I find research methods like qualitative surveys, involving interview analyses, for example, much harder than the statistical analysis of some data or some algorithm design. The reason is the lack of immediate feedback.
While you can (and should) put quality assurance measures in place for your interview analyses, ranging from basic member checking to complex forms of triangulation, it might take a long time until you learn whether what you did was any good. So you have to focus hard on your analysis without knowing whether you are on the right track. It doesn’t get much more difficult than that.