I’m seeking advice on how to frame the research question for a research project (Ph.D. thesis) on software product management and open source. The simple heuristic “non-differentiating -> open source it, competitively differentiating -> keep it closed” doesn’t cut it because of secondary effects like development efficiency resulting from open sourcing, market opportunities resulting from platform compatibility, etc.
The best I could come up with so far are three different but related questions. These are:
- For a non-differentiating function, which open source component to use?
- For a chosen open source component, how to manage this dependency?
- For a competitively differentiating function, when to open source?
The first two questions are well-defined. The third remains unwieldy. The heuristic mentioned above would answer “never,” but as explained, that is not true: the overall competitive situation and compatibility considerations may still lead to open sourcing unique intellectual property.
I’m seeking comments as to how practitioners (or other researchers) would look at this question. Any comments are appreciated.
On the PBS Newshour, Duke University biologist Sheila Patek just made a passionate plea for “why knowledge for the pure sake of knowing is good enough to justify scientific research,” using her own research into mantis shrimp as an example. While I support public funding for basic research, Patek makes a convoluted argument that ultimately harms her own case.
Continue reading “The Downside of the “Knowledge for Knowledge’s Sake” Argument”
“AI” (or just smart algorithms, if you will, where smart will be plain in a few years and dumb in 10 years) is on the rise, no doubt about it. As a consequence, I’ve been having fun with “AI challenges” of the sort: Could a computer figure this out? As an example, take a look at the advertisement below. It is for a conference of German university chancellors (the administrative leaders of their universities). Could a computer figure out the disconnect between the depicted young people, presumably students, and the more advanced-in-years chancellors of their universities?
Continue reading “Having Fun Thinking About AI Challenges”
So far, most of my research funding has been from industry. Sometimes, I have to defend myself against colleagues who argue that their public funding is somehow superior to my industry funding. This is only a sentiment; they have not been able to give any particular reason for their position.
I disagree with this assessment, and for good reason: these two types of funding are not comparable, and ideally you have both.
In research, there are several quality criteria, of which the so-called internal and external validity of a result are two important ones.
- Internal validity, simplifying, is a measure of how consistent a result is within itself. Are there any contradictions within the result itself? If not, then you may have high internal validity (which is good).
- External validity, simplifying, is a measure of how predictive or representative a result is of the reality outside the original research data. If it is, your result may have high external validity, which is also good.
Public grants have high internal validity but low external validity. In contrast, industry funding has high external validity, but low internal validity. The following figure illustrates this point:
Continue reading “Internal vs. External Validity of Research Funding”
Unbelievable. Just about everything in this Call for Papers and the website it links to screams fraud. However, it is so badly done that I can only assume someone is turning the Scigen experiment on its head.
I’m in Hong Kong, attending FSE 2014. I had signed up for the Next-Generation Mining-Software-Repositories workshop at HKUST but missed it for (undisclosed) reasons. Apparently there were two main topics of discussion:
- Calls by colleagues to make mining work “useful” rather than “just” interesting
- Calls by colleagues to build tools rather than “just” generate insight
Both issues are joined at the hip and are an expression of a struggle over the definition of “what is good science” in software engineering. As someone who started out as a student of physics, I have an idea of science that views “interesting insights” as useful in their own right: You don’t need to build a tool to show that your insight improves the world. On the other end is the classic notion of engineering science, where there is no (publishable) research if you don’t improve the world in some tangible way.
Continue reading “Once Again Natural vs Engineering Sciences Struggling over Definitions #FSE2014”
As a professor of computer science I get to write a lot of reviews: For Bachelor and Master theses, for dissertations, for grant proposals, and for conference and journal paper submissions. I’d like to explain the logic of the reviews I write, using conference and journal submissions as the example. It is pretty simple:
The purpose of a review is to make a recommendation to a committee (or an editor) on how to handle a particular paper submission.
In my mind, a good review starts with the actual recommendation to the committee or the editor. All that follows is a substantiation of this recommendation.
Continue reading “How I Write Reviews”
At FAU, we are now holding our seminar named “software architecture” for the second time. It is important to realize (for students as well as the general public) that “software architecture” is a metaphor (or, maybe more precisely, an analogy). Architecture is a discipline several thousand years old, while software architecture is only as old as software engineering, probably younger, if you make Shaw and Garlan’s book the official birth date of software architecture, at least in academia.
Continue reading “Software Architecture is a (Poor) Metaphor”
So ICSE, the top software engineering conference, rejected our paper, again. The reviewers were actually quite positive: high-quality work, little or no flaws, interesting. One of the reviewers found the paper’s results surprising, asked for more details, and suggested new research directions. The final conclusion of both reviews, however, was the same: The work has no merit because it only explains the world, it does not improve it.
Our paper provides a high-quality model of a key aspect of programming behavior in open source, basically the modeling behind this earlier empirical paper. As such, it is a descriptive empirical paper. It takes a large amount of data and provides an analytically closed model of the data so that we can explain or predict the future (better). That’s pretty standard operating procedure in most of natural and social sciences.
Continue reading “Do Engineering Researchers Care About Truth?”
Just before my inaugural lecture at University of Erlangen, a broad panel of scientists was debating the merits of computer games. Except for a computer games researcher and a games professional, all participants thought that computer games are of no particular interest. When I asked: “But isn’t there anything to learn from computer games?” I got a full rebuke by the M.D. on the panel: “No, there is no recognizable value whatsoever.”
Continue reading “Why I’m Interested In Computer Games Research”