Is Exploratory Data Analysis Bad?

Last weekend, I ventured into uncharted territory (for me) and attended the Berliner Methodentreffen, a research conference mostly frequented by social scientists. I participated in a workshop on mixed methods, where the presenter discussed different models of combining methods with each other (“Methodenpluralität”, or plurality of methods, in German).

She omitted one model that I believe is used quite often: first performing exploratory data analysis to detect some interesting phenomenon, and then conducting explanatory qualitative research to formulate hypotheses as to why this phenomenon is, was, or keeps happening.

When I asked about this, the temperature in the room dropped noticeably, and I was informed that such exploratory data analysis is unscientific and frowned upon. Confused, I inquired further as to why this is so, but did not get a clear answer.

Continue reading “Is Exploratory Data Analysis Bad?”

We May not Know What We are Doing…

From my excursion into qualitative research land (the aforementioned Berliner Methodentreffen) I took away some rather confusing impressions about the variety of what people consider science. I’m well aware of the different philosophies of science (from positivism to radical constructivism) and their impact on research methodology (from controlled experiments to action research, ethnographies, etc.). I did not expect, however, that people would be so divided over fundamental assumptions about what constitutes good science.

One of the initial surprises for me was to learn that it is acceptable for a dissertation to apply only one method and for that method to deliver only descriptive results (and thereby not really make a contribution to theory). In computer science, it is difficult to publish pure theory-development research (let alone purely descriptive results) without at least a selective attempt at theory validation. The limits of what can be done in 3-5 Ph.D. student years are clear, but this shouldn’t lead anyone to lower expectations.

Continue reading “We May not Know What We are Doing…”

Why You Should Ask for Money When Working With Industry

In our research, we often work with industry. In software engineering research, this is a no-brainer: industry is where the research data is. That’s why we go there. For many research questions, we simply cannot recreate, in a laboratory setting, a situation that lets us do our research adequately.

Once a researcher realizes this, they need to decide whether to charge the industry partner for the collaboration. Many researchers don’t, because sales is not exactly their strength. Many also shy away from asking for money because it is an additional hurdle to overcome once an interested industry partner has been found.

Continue reading “Why You Should Ask for Money When Working With Industry”

How to Slice Your Research Work for Publication

I often discuss with my Ph.D. students how to structure their work and publish the results. There are many pitfalls. It gets more difficult if we bring in other professors, who may have a different opinion on how to structure the work. Over time, I have found that there are really only two main approaches, though:

  • Separate work into theory building and theory validation, and publish one or more papers on each topic
  • Merge work on theory building and validation for one important aspect (hypothesis) into a single paper and publish that

Continue reading “How to Slice Your Research Work for Publication”

Why “Boring” is no Reason for Rejection

A researcher friend recently complained to me that her research paper had been rejected because the reviewers considered it “boring”. I recommended that she complain to the editor-in-chief of the journal, because in my book, “boring” is not an acceptable reason to reject a paper. (“Trivial” may be, but that is a different issue.)

The reasoning behind rejecting a paper as “boring” goes as follows: research should be novel and provide new, perhaps even counterintuitive insights. Results that are not surprising (in the mind of the reviewer, at least) are not novel and therefore not worthy of publication.

Continue reading “Why “Boring” is no Reason for Rejection”

How the Lack of Theory Building in Software Engineering Research is Hurting Us

Traditional science has a clear idea of how research should progress, rationally speaking: first you build a theory, for example through observation, expert interviews, and the like, and then you generate hypotheses to test that theory. Over time, some theories withstand repeated testing and come to be considered valid.

Sadly, much of software engineering research today, even work published in top journals and conferences, skips the theory building process and jumps straight to hypothesis testing. Vicky the Viking, from the eponymous TV series of old, comes to my mind: out of the blue, some genius idea pops into the researcher’s mind. This idea forms the hypothesis to be tested in the research.

Continue reading “How the Lack of Theory Building in Software Engineering Research is Hurting Us”

On the Misuse of the Terms Qualitative and Quantitative Research

Researchers often use the term “qualitative research” to mean research without substantial empirical data, and “quantitative research” to mean research with substantial empirical data. That doesn’t make sense to me, as most “qualitative researchers” will quickly point out, because qualitative research utilizes as much data, in a structured way, as it can get. Everything else would not be research.

Continue reading “On the Misuse of the Terms Qualitative and Quantitative Research”

Professors and Startups

My primary goal in becoming a professor was to turn my (hoped-for excellent) research and teaching into startups. For that reason, I created the Startupinformatik program and set up my teaching to support it. Sadly, I’ve been noticing over the years that things don’t seem to be getting easier but harder. Specifically, “the system” (I’ll explain below) seems to view professors with mistrust rather than as the natural allies they should be when it comes to leading students to create a startup.

Let me illustrate this using two experiences:

Continue reading “Professors and Startups”

Call for Papers: 1st Workshop on Innovative Software Engineering Education (ISEE 2018)

http://www1.in.tum.de/isee2018

In conjunction with the Software Engineering Conference 2018 in Ulm, March 6, 2018

Motivation

The number of students is continuously increasing and presents ever greater challenges for instructors in software engineering. In courses with very large numbers of students, it is particularly difficult to motivate students to participate actively. At the same time, practice-oriented and project-based training is becoming increasingly important, but project courses in cooperation with industry are often associated with high costs.

Digital teaching, online courses, and new teaching concepts complement the curriculum. They offer a wide range of possibilities for modern and attractive teaching, but pose methodological, technical, and organizational challenges for instructors.

Continue reading “Call for Papers: 1st Workshop on Innovative Software Engineering Education (ISEE 2018)”