I just attended FSE 2016, a leading academic conference on software engineering research. As is en vogue, it had a session on why so much software engineering research seems so removed from reality. One observation was that academics toil in areas of little interest to practice, publishing one incremental paper of little relevance after another. Another observation was that as empirical methods have taken hold, much research has become as rigorous as it has become irrelevant.
My answer to why so much software engineering research is irrelevant to practice is as straightforward as it is hard to change. The problem rests in the interlocking of three main forces that conspire to keep academics away from doing interesting and ultimately impactful research. These forces are:
- Academic incentive system
- Access to relevant data
- Research methods competence
With all the hoopla about the upcoming Google Daydream, I thought I’d share two photos of people high on Samsung’s Gear VR. I think Samsung chose the better name for their product. The second photo clearly shows a person with a gearface. Can’t imagine calling this a daydreamface. The future is so bright, you’ll have to wear a mobile.
I’m at a loss over the recent reports on the requirement for all research publications to be open access by 2020. Open access means that research papers are openly accessible without a fee. There are plenty of confusing if not outright wrong statements in the press, but I’m not so much concerned with poor journalism as with the actual proposed policies.
Sadly, I couldn’t find more than this one sentence on page 12 of the report linked to from the meeting’s website:
> Delegations committed to open access to scientific publications as the option by default by 2020.
I’d like to understand what this means and then how this is supposed to work. Specifically, I’d like to know how this is not going to either break free enterprise or make predatory publishers like Elsevier laugh all the way to the bank.
I’ve been enjoying the discussion in several forums around Patek’s recent video argument for knowledge for knowledge’s sake. I thought I’d summarize my arguments here. To me, it all looks pretty straightforward.
From a principled stance, as to funding research, it is the funder’s prerogative who to fund. Often, grant proposals (funding requests) exceed available funds, so the funder needs to rank-order the grant proposals and typically will fund those ranked highest until the funds are exhausted. A private funder may use whatever criteria they deem appropriate. Public funding, i.e. taxpayer money, is trickier, as it is typically government agencies that set the policies by which funding proposals for a particular fund are rank-ordered. It seems rather obvious to me that taxpayer money should be spent on something that benefits society. Hence, a grant proposal must promise some of that benefit. How it does this can vary. I see at least two dimensions along which to argue: immediacy (or risk) and impact. Something that believably provides benefits sooner is preferable to something that provides benefits later. Something that believably promises a higher impact is preferable to something that promises a lower impact.
Thus, research that promises to cure cancer today is preferable to research that explains why teenage girls prefer blue over pink on Mondays and are generally unapproachable that day. Which is not to say that the teenage girl question might not get funded: funders and funding are broad and deep, and for everything that public agencies won’t fund, there is a private funder whose pet project would be solving exactly that question.
The value of research is always relative, never absolute, and always to be viewed within a particular evaluation framework.
On the PBS Newshour, Duke University biologist Sheila Patek just made a passionate plea for “why knowledge for the pure sake of knowing is good enough to justify scientific research,” using her own research into mantis shrimp as an example. While I support public funding for basic research, Patek makes a convoluted argument that is ultimately harmful to her own case.
My rant on what’s wrong with Industrie 4.0 argued that it focuses too narrowly on too incremental a domain.
The real tectonic change of the last 20-30 years, in my opinion, is the speed of innovation that software gives you over any other technology domain. Whatever the gadget or concept, if you can add software to it, you can speed up innovation by a major factor. The reason for this is that software can be modified and brought to market within seconds, rather than weeks or months. This is the result of the last ten years of development of “continuous delivery”.
A lot. The overly narrow focus on a particular domain of innovation starves the support for innovation in other domains, making Germany lose out in those domains.
This has been bugging me for some time now.
Somehow German politics declared “Industrie 4.0” (industry 4.0) to be a major area of innovation for Germany. Focus, attention, and funding followed. Industrie 4.0 is supposed to be the next evolutionary step in industrial production based on the convergence of the various technology streams we are currently witnessing (software, biotech, hightech, what have you).
Wikipedia has long been suffering from its rather raw “wiki markup” editing experience. The reason is that the underlying software is stuck in the mud, and any progress is slow and painful. Right now there is some excitement over progress on the “visual editor” of Mediawiki. As you can see in the video below, the look and feel is 2016, while the functionality is still 1999. How we will catch up with Google Docs or Medium or any reasonable editing experience this way remains a mystery to me.
In case there was any doubt, IT / High-Tech / New Economy / Can’t-find-the-name is so mainstream that it is pushing the same basic buttons that make spectators watch the WWF or reality TV shows. Coming to a city near you soon.