From a Student Business Plan

In PROD, my course on software product management, students can choose to develop a business plan for a software product. Not all of my students take this as seriously as I would wish. Here is the opening sentence of the exec summary from one of the teams:

With a total loss of 388,987.50 Euros in the period of 2017 to 2019, we will increase profit by 2,123,121 Euros and the customer base by 1392% […] Break even will be reached by mid 2018.

Reminds me of the bubble days: “We will make 80 cents for one dollar spent and will be profitable in no time!”
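For what it is worth, the arithmetic behind that quip is easy to check. A quick sketch (my own, with made-up numbers, not the students’):

```python
# The bubble-era pitch: earn 80 cents in revenue for every dollar spent.
# Scaling up spending only scales up the loss; break-even never comes.
revenue_per_dollar_spent = 0.80

for spend in (100_000, 1_000_000, 10_000_000):
    profit = spend * revenue_per_dollar_spent - spend
    print(f"spend {spend:>10,} USD -> profit {profit:>12,.0f} USD")
# Every row shows a 20% loss on spend; growth only deepens the hole.
```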

Challenges to making software engineering research relevant to industry

I just attended FSE 2016, a leading academic conference on software engineering research. As is in vogue, it had a session on why so much software engineering research seems so removed from reality. One observation was that academics toil in areas of little interest to practice, publishing one incremental paper of little relevance after another. Another observation was that as empirical methods have taken hold, much research has become as rigorous as it has become irrelevant.

My answer to why so much software engineering research is irrelevant to practice is as straightforward as it is hard to change. The problem lies in the interlocking of three main forces that conspire to keep academics from doing interesting and ultimately impactful research. These forces are:

  • Academic incentive system
  • Access to relevant data
  • Research methods competence

Continue reading “Challenges to making software engineering research relevant to industry”

Follow-up on the Discussions about Knowledge for Knowledge’s Sake

I’ve been enjoying the discussions in several forums around Patek’s recent video argument for knowledge for knowledge’s sake. I thought I’d summarize my arguments here. To me, it all looks pretty straightforward.

From a principled stance, as to funding research, it is the funder’s prerogative whom to fund. Often, grant proposals (funding requests) exceed the available funds, so the funder needs to rank-order the proposals and will typically fund those ranked highest until the funds are exhausted. A private funder may use whatever criteria they deem appropriate. Public funding, i.e. taxpayer money, is trickier, as it is typically government agencies that set the policies by which proposals for a particular fund are rank-ordered. It seems rather obvious to me that taxpayer money should be spent on something that benefits society. Hence, a grant proposal must promise some of that benefit. How it does this can vary. I see at least two dimensions along which to argue: immediacy (or risk) and impact. Something that believably provides benefits sooner is preferable to something that provides them later. Something that believably promises higher impact is preferable to something that promises lower impact.

Thus, research that promises to cure cancer today is preferable to research that explains why teenage girls prefer blue over pink on Mondays and are generally unapproachable that day. Which is not to say that the teenage girl question might not get funded: funders and funding are broad and deep, and for everything that public agencies won’t fund, there is a private funder whose pet project would be solving that question.
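To make the rank-and-fund process concrete, here is a minimal sketch (my own illustration; the equal weighting of the two dimensions, the scores, and all names are assumptions, not any agency’s actual policy):

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    cost: float       # requested funds
    immediacy: float  # 0..1, higher = benefits arrive sooner (lower risk)
    impact: float     # 0..1, higher = larger believable benefit to society

def fund(proposals: list[Proposal], budget: float) -> list[Proposal]:
    # Rank-order by a score (equal weights are an assumption for illustration),
    # then fund from the top until the funds are exhausted.
    funded = []
    for p in sorted(proposals, key=lambda p: p.immediacy + p.impact, reverse=True):
        if p.cost <= budget:
            funded.append(p)
            budget -= p.cost
    return funded

shortlist = fund([
    Proposal("Cure cancer today", cost=5_000_000, immediacy=0.9, impact=1.0),
    Proposal("Teenage color preference on Mondays", cost=200_000, immediacy=0.3, impact=0.1),
], budget=5_100_000)
print([p.title for p in shortlist])  # only the top-ranked proposal fits the budget
```

A private funder is free to swap in any scoring function they like; for public money, the argument above says the score must reflect believable benefit to society.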

The value of research is always relative, never absolute, and always to be viewed within a particular evaluation framework.

Continue reading “Follow-up on the Discussions about Knowledge for Knowledge’s Sake”

The Downside of the “Knowledge for Knowledge’s Sake” Argument

On the PBS Newshour, Duke University biologist Sheila Patek just made a passionate plea for “why knowledge for the pure sake of knowing is good enough to justify scientific research,” using her own research into mantis shrimp as an example. While I support public funding for basic research, Patek makes a convoluted argument that ultimately harms her own case.

Continue reading “The Downside of the “Knowledge for Knowledge’s Sake” Argument”

Why you should not cite research work on Wikipedia that is not freely available

I recommend that Wikipedia articles not reference research papers that are not freely available, just as research papers should not cite research work that is not freely available. Anyone who cites non-open-access, non-free research bases their work and argument on materials inaccessible to the vast majority of people on this planet. By doing so, authors exclude almost everyone else from verifying and critiquing their work. They thereby stop science and progress dead in their tracks.

My advice is that authors need to understand that non-open-access, non-free research articles have not been published; they have been buried behind a paywall. With the vast majority of people lacking access to such paid-for materials, any such buried article is not a contribution to the progress of science and should be ignored.

Continue reading “Why you should not cite research work on Wikipedia that is not freely available”

The reviews are in and it ain’t pretty

From the first review: Best application of grounded theory that I have seen in a long time!

From the second review: I have seen grounded theory; this ain’t it.

From the third review: What is grounded theory?

Conclusion: No more grounded theory.

PS: Those reviews are a synthesis of prior experiences.

What makes a great “grand challenge”?

Ten years ago, DARPA, the US defense research agency, organized the first DARPA Grand Challenge. The challenge was to build a car that could drive 150 miles through the Mojave Desert autonomously. The key is the car’s autonomy: no human brain was to steer it and make decisions; it was computers and other technology all the way. The associated technical challenges were manifold: read and interpret the environment correctly, predict the behavior of the environment and the car as they interact, plan the route and reach the goal fastest, and kill no one along the way. At the second running, in 2005, the challenge was first met and won by the team of Sebastian Thrun of Stanford University.
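At its core, each entrant had to run some variant of a sense-predict-plan-act loop over exactly those challenges. Here is a minimal sketch in a one-dimensional toy world (my own illustration; all names and numbers are made up, not code from any actual entrant):

```python
# A deliberately simplified sense-predict-plan-act loop in a 1-D toy world.

def sense(world):
    # Read and interpret the environment: where are we, what is around us?
    return {"position": world["position"], "obstacles": world["obstacles"]}

def predict(state):
    # Predict which obstacles the car could plausibly interact with next.
    return [o for o in state["obstacles"] if abs(o - state["position"]) <= 1]

def plan(state, hazards, goal):
    # Plan the next step toward the goal -- but never step onto a hazard.
    step = 1 if goal > state["position"] else -1
    return 0 if state["position"] + step in hazards else step

world = {"position": 0, "obstacles": [4], "goal": 10}
for _ in range(20):
    state = sense(world)
    hazards = predict(state)
    world["position"] += plan(state, hazards, world["goal"])  # act

print(world["position"])  # 3: the car halts rather than drive into the obstacle
```

In one dimension there is no way around an obstacle, so the sketch simply halts; the real challenge, of course, was planning around hazards in rich, noisy terrain at speed.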

Since then, a series of other “grand challenges” has followed, some reasonable, some less so. Here is what I think makes a great grand challenge:

  1. It is relevant for society and a non-expert recognizes the relevance
  2. It is cross-disciplinary, combining many different problems into one
  3. It is a useful application with a clear, tangible, communicable result
  4. It is measurable and achievable, so that we know when we have solved it

I would avoid any “grand challenge” that an expert recognizes but not a layman. A grand challenge needs to stir emotions. Looking at example grand challenges proposed around the Internet, I find many too abstract, too focused on disciplinary issues. My vote goes to “the secure cell phone” as a grand challenge, but there are many others.

For Posteriority and Reuse

FIWare is a large EU-sponsored program. It has a mission (“about”) statement. Specifically:

FIWare is an open initiative aiming to create a sustainable ecosystem to grasp the opportunities that will emerge with the new wave of digitalization caused by the integration of recent Internet technologies. […]

Continue reading “For Posteriority and Reuse”

Internal vs. External Validity of Research Funding

So far, most of my research funding has been from industry. Sometimes, I have to defend myself against colleagues who argue that their public funding is somehow superior to my industry funding. This is only a sentiment; they have not been able to give any particular reason for their position.

I disagree with this assessment, and for good reason: these two types of funding are not comparable, and ideally you have both.

In research, there are several quality criteria, of which the so-called internal and external validity of a result are two important ones.

  • Internal validity, simplifying, is a measure of how consistent a result is within itself. Are there any contradictions within the result itself? If not, then you may have high internal validity (which is good).
  • External validity, simplifying, is a measure of how predictive or representative a result is of the reality outside the original research data. If so, your result may have high external validity, which is also good.

Public grants have high internal validity but low external validity. In contrast, industry funding has high external validity but low internal validity. The following figure illustrates this point:

Continue reading “Internal vs. External Validity of Research Funding”