Ten years ago, DARPA, the US defense research agency, organized the first DARPA Grand Challenge. The challenge was to build a car that could drive 150 miles through the Mojave Desert autonomously. The key is the car’s autonomy: no human was to steer it or make decisions; it was computers and other technology all the way. The associated technical challenges were manifold: read and interpret the environment correctly, predict the behavior of the environment and of the car while interacting with it, plan the route and reach the goal fastest, and don’t kill anyone on the way. In its second running, in 2005, the challenge was first completed and won by the team of Sebastian Thrun of Stanford University.
Since then, a series of other “grand challenges” has followed, some of them reasonable, some less so. Here is what I think makes a great grand challenge:
- It is relevant for society and a non-expert recognizes the relevance
- It is cross-disciplinary, combining many different problems into one
- It is a useful application with a clear, tangible, communicable result
- It is measurable and achievable, so that we know when we have solved it
I would avoid any “grand challenge” that an expert recognizes but not a layman. A grand challenge needs to stir emotions. Looking at example grand challenges proposed around the Internet, I find many too abstract, too focused on disciplinary issues. My vote goes to “the secure cell phone” as a grand challenge, but there are many others.
FIWare is a large EU-sponsored program. It has a mission (“about”) statement. Specifically:
FIWare is an open initiative aiming to create a sustainable ecosystem to grasp the opportunities that will emerge with the new wave of digitalization caused by the integration of recent Internet technologies. […]
Continue reading “For Posteriority and Reuse”
So far, most of my research funding has been from industry. Sometimes, I have to defend myself against colleagues who argue that their public funding is somehow superior to my industry funding. This is only a sentiment; they have not been able to give any particular reason for their position.
I disagree with this assessment, and for good reason: these two types of funding are not comparable, and ideally you have both.
In research, there are several quality criteria, of which the so-called internal and external validity of a result are two important ones.
- Internal validity, simplifying, is a measure of how consistent a result is within itself. Are there any contradictions within the result itself? If not, then you may have high internal validity (which is good).
- External validity, simplifying, is a measure of how predictive or representative a result is of the reality outside the original research data. If so, your result may have high external validity, which is also good.
Public grants have high internal validity but low external validity. In contrast, industry funding has high external validity, but low internal validity. The following figure illustrates this point:
Continue reading “Internal vs. External Validity of Research Funding”
Today, PeerJ announced the creation of a new open-access computer science journal. After a bit of back and forth a while ago, I accepted the invitation to be on the editorial board. (My main concern was that PeerJ is a for-profit organization, but co-founder Pete Binfield convinced me that this will only be used to the benefit of the authors.) A key distinguishing criterion of PeerJ, compared with other open-access publishers, is an all-you-can-publish rate of US$99, forever.
My social media stream is full of comments, positive and negative. What seems to rile people mostly are the editorial criteria, summarized succinctly as
“Rigorous yet fair review. Judge the soundness of the science, not its importance.”
Continue reading “Soundness vs. Importance in Publishing (PeerJ Computer Science Journal Announced)”
I just read this review of how professors spend their time while working. It struck me that a key component on which I spend a substantial amount of time and energy is missing: fundraising. Here is a visual summary of the article, courtesy of someone on reddit:
I first looked through other practices like “letter writing” and “research development”, but these require no time at all, so I don’t think that’s where fundraising is hiding.
I then thought that perhaps fundraising hides in meetings, given that my fundraising means talking to industry (rather than writing grant proposals). Here is what the article says about meetings:
Continue reading “How Academics Spend Their Time? Not Me.”
Germany is the best place I know to be a professor if you value your independence. Your rights have been codified in the German Basic Law (Constitution) and no dean can tell you what to do. You are your own person.
On the downside, German professors and universities have been (for the most part) blissfully ignorant of how the rest of the world evaluates universities. Common sentiments in computer science are that “Journal publications are for wimps, real researchers publish in the leading conferences” and “University evaluations? Those are all fraudulent, focusing on crappy criteria that have no connection with reality”.
Some of these critiques are valid. For example, almost all German universities are public universities, and many have a unique and positive symbiosis with industry, fueling Germany’s economic growth; where is that accounted for in these rankings? But for the most part, Germany’s reluctance to join the international ranking game has been harmful.
In one experiment, two German universities recently decided to report their numbers to the Times Higher Education (T.H.E.) ranking with the goal of optimizing their rank. That is not uncommon; Northeastern University, for example, has undertaken a multi-year effort to game the US News and World Report ranking, much to its benefit, apparently.
Continue reading “German Universities to Take University Rankings Seriously”
I’m in Hong Kong, attending FSE 2014. I had signed up for the Next-Generation Mining-Software-Repositories workshop at HKUST but missed it for (undisclosed) reasons. Apparently there were two main topics of discussion:
- Calls by colleagues to make mining work “useful” rather than “just” interesting
- Calls by colleagues to build tools rather than “just” generate insight
Both issues are joined at the hip and are an expression of a struggle over the definition of “what is good science” in software engineering. As someone who started out as a student of physics, I have an idea of science that views “interesting insights” as useful in their own right: you don’t need to build a tool to show that your insight improves the world. At the other end is the classic notion of engineering science, where there is no (publishable) research if you don’t improve the world in some tangible way.
Continue reading “Once Again Natural vs Engineering Sciences Struggling over Definitions #FSE2014”