Messages from Practice to Academia, Part 2: Reasons for the Research-Practice Gap
Reading time: 8 minutes

Update (7/2025): The “Dear Researchers” column “Overcoming the research-practice gap: Root cause analysis and topics of practical relevance in software architecture and distributed systems” (JSS, open access) features a subset of this post and the following Part 3. It is discussed on LinkedIn.
This post shares my personal thoughts regarding reasons for the research-practice gap in software engineering, in the spirit of the “Dear Researchers” column in the Journal of Systems and Software (JSS). It follows up on Part 1 introducing the column and summarizing messages from selected columns.
Read on if:
- You are a researcher who wants to have an impact on practice sooner or later.
- You consider yourself a software engineer who is interested in learning what happens in universities (and why) or who has accumulated some frustration about the lack of usefulness of research work that came your way (if any).
Let me start with a brief whoami.
My research, development and professional services journey
I spent about twelve years in “true” industry (in product development, professional services and technical sales support), six years in industrial research (as PhD student, research staff member, senior principal scientist), and twelve years lecturing, consulting and researching in a university of applied sciences. Software architecture, distributed systems integration and Application Programming Interfaces (APIs) have been with me throughout this time.
I co-authored and reviewed countless project deliverables, moderated and participated in design workshops, attended and gave presentations both at industry conferences and at academic ones. I engaged in open source projects such as Markdown Architectural Decision Records (MADR) and Context Mapper. I wrote 50+ blog posts and about 100 articles, from practitioner reports to research papers.1 After twelve years in industry, I went for a PhD. From 2015 to 2025, I co-edited 37 installments of an “in practice” column in IEEE Software, an academic magazine also targeting industry professionals. Along the way, I saw academic work completely disconnected from practice; however, I also encountered successful collaborations and technology transfers — so the gap can be overcome.
Mind the gap: a root cause analysis
Titus Winters and Eoin Woods gave examples of gap symptoms in their “Dear Researchers” columns (see Part 1 of this article series for links and summaries); they also looked into reasons for the gap. I will share my view on root causes here and save additional examples for another post.
I group my analysis into goal and measurement conflicts, different way(s) of working and pickup problems, followed by a clarification of terminology mismatches.
Goal and measurement differences
The goals and concerns in the two communities differ, even though they work on the same technical topics. In industry, development teams, their management and their clients care about shipment, purchase/rental and actual usage of their software, often articulated as SMART goals. Research, in contrast, is all about publications. Their number and the prestige of the publishing workshops, conferences or journals matter; so do citations (the citation count, actually). These number games are sometimes referred to as “publish or perish”.2
Note that the goals and concerns in the two communities are not necessarily identical to the intrinsic motivation of the respective community members. Systemic incentives and their measurements come into play. They drive the priorities required to satisfy the respective incentive system.
Fundraising is a good example:
- In industry, client satisfaction leading to follow-on business is valued and evaluated; the net promoter score is an example of a metric. Request for proposal and request for quotation are two of the magic words in presales (more on terminology later). Software vendors and larger professional services firms have dedicated marketing and sales roles that have their own targets and measurements (but that’s another story).
- In research, senior researchers (and, nowadays, even the young ones) are their own marketing and sales force (and they also manage project delivery). Due to the importance of raising funds, the top priority after having won a research grant often is… writing the next grant proposal. As a consequence, executing the won project successfully and making its results known to the public tend to receive less attention than they deserve.3
When working together, two incentive systems and their measurements have to be satisfied to create win-win situations. These dichotomies are worth knowing about; Goodhart’s law might be at work twice: “When a measure becomes a target, it ceases to be a good measure”.
I could go on identifying goal-metric mismatches; but let’s look into the respective journeys rather than their destinations now.
Different way(s) of working
Not only goals, incentives and metrics differ, also the approaches taken to achieve them. Three related observations are:
- Dealing with uncertainty is the norm in industry but makes some researchers feel uncomfortable. To give an example: a software engineer in industry should not be surprised by terminology differences or semantic ambiguities and should handle them properly during analysis and design work; in contrast, I have witnessed researchers suffer from “analysis paralysis” in this situation, trying to define all terms diligently and to resolve the ambiguities before entering any problem solving (my evidence is anecdotal, and I admit that I run the risk of stereotyping here). ⚡ As a consequence, an industry professional might view a researcher who tries to be diligent and accurate as a pedantic perfectionist and creativity blocker who will never get anything profound done; a researcher might view an industry professional who is eager to release immature work results early as a chaotic anarchist who does not value quality and rigor. 😮
- There are different types of flow (ways of getting productive, that is). I have seen many researchers work deeply focused in isolation for long periods of time, while highly interactive workshop formats are more common in industry. Compare systematic literature reviews with “stormings” of various kinds that yield events, user stories or quality attributes (yes, there are notable exceptions!). ⚠️ Hybrid teams might not be able to function well if the “flow mode” of one half of the team interrupts that of the other half. Mutual respect to the rescue!
- Presentation duration and style differ, too. Presentations at practitioner conferences typically run for 45 to 60 minutes; academic conferences often bundle three (or more) presentations into 90-minute sessions, which leaves very little time for a critical discussion of feedback from empirical validation or adoption in practice; attendees are referred to the paper for that. At industry conferences, many presenters illustrate their key messages by telling stories; they interact with the audience to get a discussion going. In contrast, many research presentations report results in factual, rather detailed ways without interactive elements (again, we can agree to disagree here). Attending a presentation that follows the default style of the other community for the first time can be confusing, but also inspiring. ✔️ Researchers presenting to industry people should adjust their style.
Note that I am not saying that either community is wrong; their practices and habits developed for a reason. I merely want to point out that the differences can lead to mismatched expectations, causing collaborations to stall or be less effective than they could be. Knowing about the differences makes it possible to better understand the other side and then work together to overcome them. Or at least accept and joke about them. 😉
On a positive note, personal reputation has a key role to play in both communities when deciding whether to try out and adopt a work result.
Visibility, transfer and adoption problems
Lack of adoption of research results in practice often has rather mundane reasons. Readers in industry might encounter the following issues with research papers and the tools/methods featured in these papers (ordered chronologically into steps towards results adoption):
- No awareness or motivation in target audience. Typical questions are: What is a journal or a research paper anyway? Where would I find interesting papers? Why bother (and invest time and money)?4
- Interest but no time budget. Industry people tend to be busy. Very busy.5 They already face a high cognitive and mental load, being flooded with information of all kinds. For instance, a severity 1 problem might create a real pain point; those who know the details about it get heavily caught up in business-as-usual work, often because of the pain point (or troubleshooting it). Hence, they might not find the time to think out of the box; the knowledge of how to solve the problem, possibly supported by research, remains locked up in their minds.6
- Interest and time but frustrating reading experience. Unfamiliar writing styles may contribute to a lack of understanding or provoke a “not for me” reaction. For instance, an extremely narrow focus, deep technical jargon that remains unexplained and excessive use of TLAs are some of the many block(er)s to understandability.7
- Paper found, read and understood but content comes across as immature. It is a bad sign if there is one and only one commit in a public git repository disseminating research prototypes, merely fulfilling a formal obligation to open source. If it takes more than 1-2 hours to download a tool, get a first example to run and understand it, many prospective adopters will walk away and try something else.8
- Asset installed and tried out but insufficient benefit-cost ratio. If a research result made it here, that is a success! The reasons for a lack of sustainable, lasting adoption can become a subject of future work.
The five issues above have to be overcome if research results are supposed to make it across the research-practice gap. For instance, transfer from research to practice has much better chances to succeed if the research work is embedded in a community, where there is less dependence on the longer-term support/interest of a single individual (e.g., “the PhD student”). I’ll talk more about this in Part 3 of this article series.
Lost in translation: terminology mismatches
I encountered the terminology mismatches between industry/practice and research/academia the hard way twelve years into my career, when moving from developer, consultant, architect roles in professional services to industrial research. It took some time, patience and practice to become acquainted with the new vocabulary (and other differences); I am still grateful for all the help that I received.9
Here is my humble attempt to translate between the two communities:
Concept | Academia | Industry |
---|---|---|
Worker (in industry) | Practitioner | Professional (and other terms), engineer |
What to work on (coarse grained) | Research problem | Project/product goal and vision |
What to work on (fine grained) | Research questions | Requirements, work items, issues |
Approach | Research method, empiricism | Method, practice, technique |
Outcome | Research result, research contribution | Solution, deliverable, product capability |
Technical writing artifacts | Research paper (short: paper) and supporting artifacts | Article, white paper, blog post, wiki page, manual |
Prior art | Related work (peer-reviewed vs. grey literature) | Existing solution, candidate asset (inhouse, market, public domain) |
Not in scope (yet) | Future work | Backlog, roadmap element/entry |
External delivery | Paper publication, conference presentation (aka talk) | Release, shipment, rollout |
Advertising | Dissemination (generic term) | Marketing, other activities |
Note that some of the table entries are just about terminology, while others list entirely different concepts (in a particular category).
Let’s take a closer look at a few table entries:
- Worker (in industry). Requirements and domain analysts, architects and other designers, developers, testers, operators and maintainers of software in industry usually do not call themselves “practitioners”, but professionals or engineers; sometimes, they are also called subject matter experts.10
- What to work on. “Problem” is a term used with several meanings or associations, good vs. bad. Researchers welcome problems because they give them something to work on. Practitioners use requirements artifacts such as backlog items/issues, features or stories to shape their work; problems come their way unexpectedly, for instance bugs in their own code or software products, cloud services or APIs not working as promised.
- Approach. Generally speaking, a method is a way of investigating and solving problems systematically. Two types of methods are at work here: research methods and software engineering methods. The latter can either be a study object and research outcome or a means to an end, applied to design and implement something.
- Outcome. To qualify as a research contribution, a research result has to solve a problem for the first time or solve the problem in a new way that departs from the already existing body of published knowledge, also known as related work. Ideally, certain software qualities are improved; for instance, tool usability increases or the complexity and computational effort of an algorithm are reduced. A solution in industry, on the other hand, simply has to work and meet client and end-user expectations (which may include both functional and non-functional requirements). Due to the risks involved, a completely novel approach often is unwelcome, at least for mission-critical software running in production.
The term “workshop” has two meanings here:
- In academia, a workshop is a small conference (standalone or co-located with a larger one) with paper presentations and proceedings (collections of published papers, that is). Academic workshops typically last half a day or a full day.
- In industry (and the arts), a workshop is an intense, interactive group event serving an analysis, design or problem-solving purpose. Such workshops typically run from a few hours to one or two days, sometimes up to a week.
Excursion: In Domain-Driven Design (DDD), one of my software engineering areas of interest, the above concepts could be modelled as a single subdomain of computer science, software engineering. This subdomain is a core domain realized by (at least) two bounded contexts, software engineering research and software engineering practice. This model expresses that two sets of domain models and ubiquitous languages are at work here.
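Such a model could even be written down in the Context Mapper DSL (CML) mentioned earlier. The following snippet is a hedged, illustrative sketch only: all names are mine, and I have not validated it against the current CML grammar, so treat it as pseudocode in CML style rather than a definitive model.

```
/* Illustrative sketch (not validated against the current CML grammar):
   one core subdomain, realized by two bounded contexts */
Domain ComputerScience {
  Subdomain SoftwareEngineering {
    type = CORE_DOMAIN
    domainVisionStatement = "Methods and tools for building software systems"
  }
}

/* Each community realizes the subdomain with its own domain model
   and ubiquitous language */
BoundedContext SoftwareEngineeringResearch implements SoftwareEngineering
BoundedContext SoftwareEngineeringPractice implements SoftwareEngineering

ContextMap ResearchPracticeLandscape {
  contains SoftwareEngineeringResearch, SoftwareEngineeringPractice
  /* A partnership relationship expresses the close collaboration
     this article series argues for */
  SoftwareEngineeringResearch [P]<->[P] SoftwareEngineeringPractice
}
```

The partnership relationship is a deliberate choice here: it models two contexts that succeed or fail together, which is exactly the “joint comfort zone” this post advocates.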
Bridging the gap: “Dear Researchers”
In this article, I analyzed some root causes of the research-practice gap in software engineering as perceived by many, including goal and incentive mismatches, different ways of working in presales and project delivery, as well as transfer problems of five kinds. I also offered a research-practice dictionary: researchers write and publish papers and call their target audience practitioners; industry professionals write and ship software and might not even know that their topic area is also being researched in universities.
My apologies if the academic side of the gap received a little more attention (and criticism) than the industry side. Success stories do exist, and I plan to feature some in future posts. I can also extend the root cause analysis to the practitioner side upon request. 😉
Let me end Part 2 of my humble attempt to help close (or narrow) the research-practice gap with a pointer (for researchers) and a kind request (for industry professionals). If this post whetted your appetite for the “Dear Researchers” column in JSS…
- … please check it out at Science Direct yourself if you are a researcher. In reverse order: “Dear researchers step 1: Find a team with a problem” and “Thoughts on applicability”.
- … please consider a submission if you are an opinionated or experienced industry practitioner (or both). See my previous post for information on how to submit.
Let’s team up and work towards a joint “comfort zone” for the two respective communities!
– Olaf (aka socadk aka ZIO)
Part 3 of this article series makes concrete suggestions on how to improve the visibility and impact of software engineering research and identifies practically relevant topics in my areas of interest, starting with software architecture.
Acknowledgements.
I thank Mirko Stocker, Gerald Reif, Oliver Kopp, Hagen Voelzer and Eoin Woods for their input, reviews and constructive criticism of earlier versions of this post.
Notes.
1. Google Scholar lists most of them.
2. Believe it or not: entire research careers are distilled into a single natural number called the h-index.
3. If you react “not for me!” at this point, kudos! You either decided not to participate in the “the more, the merrier” game or are so successful in fundraising that you actually have time for research execution (and I have not even mentioned teaching).
4. You also come across sweeping judgements such as “all academic research is theoretical and irrelevant” and prejudices such as “researchers are happy to be left alone in their ivory towers”. Some researchers do work in towers, but typically these towers are made of more mundane material such as concrete. 😉
5. Researchers are busy too, but of course that does not help either.
6. This even affects researchers in the same organization as the practitioners.
7. TLA stands for Three-Letter Acronym, by the way. “Technical Writing Tips and Tricks” provides general advice regarding technical writing and points at related resources, for instance a highly recommended lecture video by Steven Pinker.
8. Missing installation instructions in READMEs and reliance on external dependencies that are no longer maintained also reduce the chances for pickup and adoption.
9. A huge “thank you very much indeed” goes out to my official advisors and informal mentors at Stuttgart University (IAAS) and IBM Zurich Research Lab (ZRL) from 2006 to 2009 for their guidance, advice and examples.
10. At least I have not come across many business cards that had the word “practitioner” in the job title.