Internal vs. External Validity of Research Funding

So far, most of my research funding has come from industry. Sometimes I have to defend myself against colleagues who argue that their public funding is somehow superior to my industry funding. This is only a sentiment; they have not been able to give any particular reason for their position.

I disagree with this assessment, and for good reason: these two types of funding are not comparable, and ideally you have both.

In research, there are several quality criteria, of which the so-called internal and external validity of a result are two important ones.

  • Internal validity, simplifying, is a measure of how consistent a result is within itself. Are there any contradictions within the result itself? If not, then you may have high internal validity (which is good).
  • External validity, simplifying, is a measure of how well a result predicts or represents the reality outside the original research data. If it does, your result may have high external validity, which is also good.

Public grants have high internal validity but low external validity. In contrast, industry funding has high external validity, but low internal validity. The following figure illustrates this point:

Continue reading “Internal vs. External Validity of Research Funding”

Fraudulent Publishers not Missing a Beat in 2015

Unbelievable. Just about everything in this Call for Papers and the website it links to screams fraud. However, it is so badly done that I can only assume someone is turning the Scigen experiment on its head.

Once Again Natural vs Engineering Sciences Struggling over Definitions #FSE2014

I’m in Hong Kong, attending FSE 2014. I had signed up for the Next-Generation Mining-Software-Repositories workshop at HKUST but missed it for (undisclosed) reasons. Apparently there were two main topics of discussion:

  • Calls by colleagues to make mining work “useful” rather than “just” interesting
  • Calls by colleagues to build tools rather than “just” generate insight

Both issues are joined at the hip and are an expression of a struggle over the definition of “what is good science” in software engineering. As someone who started out as a student of physics, I have an idea of science that views “interesting insights” as useful in their own right: you don’t need to build a tool to show that your insight improves the world. On the other end is the classic notion of engineering science, where there is no (publishable) research if you don’t improve the world in some tangible way.

Continue reading “Once Again Natural vs Engineering Sciences Struggling over Definitions #FSE2014”

How I Write Reviews

As a professor of computer science I get to write a lot of reviews: For Bachelor and Master theses, for dissertations, for grant proposals, and for conference and journal paper submissions. I’d like to explain the logic of the reviews I write, using conference and journal submissions as the example. It is pretty simple:

The purpose of a review is to make a recommendation to a committee (or an editor) on how to handle a particular paper submission.

In my mind, a good review starts with the actual recommendation to the committee or the editor. All that follows is a substantiation of this recommendation.

Continue reading “How I Write Reviews”

Software Architecture is a (Poor) Metaphor

At FAU, we are now holding our seminar named “software architecture” for the second time. It is important to realize (for students as well as the general public) that “software architecture” is a metaphor (or, perhaps more precisely, an analogy). Architecture is a discipline several thousand years old, while software architecture is at most as old as software engineering, and probably younger if you take Shaw and Garlan’s book as the official birth date of software architecture, at least in academia.

Continue reading “Software Architecture is a (Poor) Metaphor”

Do Engineering Researchers Care About Truth?

So ICSE, the top software engineering conference, rejected our paper, again. The reviewers were actually quite positive: high-quality work, few if any flaws, interesting. One of the reviewers found the paper’s results surprising, asked for more details, and suggested new research directions. The final conclusion of both reviews, however, was the same: the work has no merit because it only explains the world; it does not improve it.

Our paper provides a high-quality model of a key aspect of programming behavior in open source, basically the modeling behind this earlier empirical paper. As such, it is a descriptive empirical paper: it takes a large amount of data and provides an analytically closed model of that data so that we can explain the world or predict the future (better). That’s pretty standard operating procedure in most of the natural and social sciences.

Continue reading “Do Engineering Researchers Care About Truth?”

Why I’m Interested In Computer Games Research

Just before my inaugural lecture at the University of Erlangen, a broad panel of scientists debated the merits of computer games. Except for a computer games researcher and a games professional, all participants thought that computer games were of no particular interest. When I asked, “But isn’t there anything to learn from computer games?”, I got a full rebuke from the M.D. on the panel: “No, there is no recognizable value whatsoever.”

Continue reading “Why I’m Interested In Computer Games Research”

A Broader Notion of Computer Science

A recent article in the CACM complained about the dominance of reductionist views in computer science research.

“We are sorry to inform you that your paper has been rejected, due to the lack of empirical evidence supporting it.” [1]

Continue reading “A Broader Notion of Computer Science”