A colleague earlier today showed me this student answer from one of his exams:
The student’s answer to “name a design pattern” was “hotel”, and the answer for “that pattern’s intent” was “book hotel”. The same again for a second pattern, “flight”, with the intent “book flight”.
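For contrast, a correct answer would name a classic pattern and its intent, for example Strategy: define a family of algorithms, encapsulate each one, and make them interchangeable. A minimal sketch (all class names here are illustrative, not from the exam):

```python
# Minimal sketch of the Strategy pattern. Intent: define a family of
# algorithms, encapsulate each one, and make them interchangeable.

class Strategy:
    def execute(self, data):
        raise NotImplementedError

class Ascending(Strategy):
    def execute(self, data):
        return sorted(data)

class Descending(Strategy):
    def execute(self, data):
        return sorted(data, reverse=True)

class Sorter:
    def __init__(self, strategy: Strategy):
        self.strategy = strategy  # the interchangeable algorithm

    def sort(self, data):
        return self.strategy.execute(data)

print(Sorter(Ascending()).sort([3, 1, 2]))   # [1, 2, 3]
print(Sorter(Descending()).sort([3, 1, 2]))  # [3, 2, 1]
```

The point of the intent statement is that it describes what problem the pattern solves, not a use case like “book hotel”.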
The Wall Street Journal provides a nice infographic on the “billion dollar club”, that is, startups with a valuation of $1B or above. In Europe, the WSJ counts six >$1B startups: one in Amsterdam (Adyen), one in Stockholm (Spotify), two in London (Powa, Shazam), and two in Berlin (Delivery Hero, Home24). In addition, the WSJ lists two more Berlin-based companies (Rocket Internet, Zalando), which have since exited (to the public markets). So 50% of the companies worth counting are based out of Berlin.
Now comes the print version of the WSJ with a Feb 19, 2015, article on “Europe’s Tech Startup Landscape for 2015”. The writer discusses the general situation and then presents the Wall Street Journal’s non-obvious picks of companies to watch, or in their words, “a useful map of the EMEA tech startup landscape for 2015”. The twelve-company list is all over the map and includes Stockholm (KnCMiner, Klarna, Truecaller), Berlin (Wooga), Serbia (Nordeus), France (SigFox, Withings), Great Britain (Kano, Oxbotica), and Israel (SiSense, Fiverr, Lumus).
Somehow this list of companies to watch does not jibe with the $1B club… If anything, Berlin has been picking up much more speed since the time the $1B club members were founded. Go figure.
So far, most of my research funding has been from industry. Sometimes, I have to defend myself against colleagues who argue that their public funding is somehow superior to my industry funding. This is only a sentiment; they have not been able to give any particular reason for their position.
I disagree with this assessment, and for good reason. These two types of funding are not comparable and ideally you have both.
In research, there are several quality criteria, of which the so-called internal and external validity of a result are two important ones.
- Internal validity, simplifying, is a measure of how consistent a result is within itself. Are there any contradictions within the result? If not, then you may have high internal validity (which is good).
- External validity, simplifying, is a measure of how predictive or representative a result is of reality outside the original research data. If it is, your result may have high external validity, which is also good.
Publicly funded research tends to have high internal validity but low external validity. In contrast, industry-funded research tends to have high external validity but low internal validity. The following figure illustrates this point:
A bit belated, I’m happy to announce two upcoming talks:
- Tomorrow, 2015-02-05, 16:00, at Mills College (California Bay Area, United States) (flyer) about Sustainable Open Source
- On 2015-02-19 at Lero, the Irish Software Engineering Research Centre (Galway, Ireland) (flyer) about Inner Source at SAP
Both talks are accessible to the public, see the flyers.
Today, PeerJ announced the creation of a new open access computer science journal. After a bit of back and forth a while ago, I had accepted the invitation to be on the editorial board. (My main concern was that PeerJ is a for-profit organization, but co-founder Pete Binfield convinced me that this will only be used to the benefit of the authors.) A key distinguishing criterion of PeerJ compared with other open access publishers is an all-you-can-publish membership for a one-time fee of US$ 99.
My social media stream is full of comments, positive and negative. What seems to rile people most are the editorial criteria, summarized succinctly as
“Rigorous yet fair review. Judge the soundness of the science, not its importance.”
According to this article, Google’s 20% time never really existed. I’ve always guessed as much, joking with Google friends that their 20% time really could only be taken on Saturday and Sunday. Which is all the same: Engaged employees do what they feel needs to be done no matter what and when.
Hackathons, however, do exist. Facebook, SAP, and SUSE are examples of companies that organize them for the purpose of prototyping potential new products. As far as I can tell, the dirty little secret is that there are no successful hackathons without 20% time (give or take 15%). I’m betting that there has rarely been a successful hackathon without a run-up that involved significant preparation, that is, “20% time”.
As a consequence, for hackathons to succeed, employees must not be disenfranchised or overworked; otherwise they won’t spend their personal time talking about and preparing for one. They should also have a purpose, that is, be tuned in to the company’s mission. I guess this means that the better a company is doing, the more likely it is to get something out of its hackathons.
According to the WordPress summary of my site, the most popular post in 2014 was “Should You Learn to Code?”, beating out the perennial favorite “The Single-Vendor Commercial Open Source Business Model”. Obviously, the broader the interest, the more readers.
This morning I read about the call by a German politician to introduce mandatory programming courses into elementary (primary) school. The idea is that being able to program is such a basic cultural skill these days that kids should learn it early on.
In my prior piece on learning to code, I answered mostly in the negative. If you are an adult and don’t aim for a career in programming, don’t bother. With children, the story is quite different: I agree that children should learn to program, but as a boost to early acquisition of abstraction skills, and not for programming skills in themselves.
Let me explain.