Hans-Peter Hoidn, The Open Group Distinguished Architect

Fifty Years of IT: Evolutions and Constants — A Personal Retrospective


Reading time: 12 minutes

In this guest post, my former colleague Hans-Peter Hoidn looks back on 50 years in Information Technology (IT) and the development and architecting experiences he gained. He comments on technologies that came and went and on concepts that stayed, and puts them in perspective with the current state of the art and practice. Enjoy the read! – zio/socadk

Timeline

I have a history of about 50 years in IT and about 40 years in the IT business. Reflecting on the development of IT over the last 40 to even 50 years, it seems comparable to the evolution of transportation from the horse carriage to the car. Both were disruptive changes with respect to efficiency, but some things also stayed the same.

50 years ago, IT was one machine with a punch card reader, some disks, tape machines and printers; input was punch cards and output was printouts. My first computer was one of the 25 Telefunken TR4 machines that had been built.1 Next up, while I was studying at ETH Zürich, were two computers from Control Data Corporation, a CDC 6400 and a CDC 6500.

40 years ago, when I started working in the IT business, computers now had terminals. I used an HP3000 with the IMAGE3000 network database. Disk space was scarce and very expensive; it was a relief when we installed the new 404 MB disk drive, which was the size of a washing machine.

35 years ago, it was time for a major jump: we had the first workstations on our desks. My first workstation was a SYMBOLICS machine, on which I was designing and building an expert system for job control at a bank, following an object-oriented analysis and design approach but programming functionally with LISP. A second major shift took place soon after, when workstations were integrated into computing environments and the processing of software systems became distributed across multi-tier environments including PCs, mid-range servers and mainframe computers in the backend. The paper “Client/Server Architectures for Business Information Systems” [1] provides an overview of the distribution options, called distribution patterns and client-server cuts.

Evolutions

Let me reflect and report on how software development in general, programming paradigms and distributed systems evolved in the last 40 to 50 years.

Software Development in General/Infrastructure

Today’s programmers can use powerful tools. Working in teams is supported by software engineering practices such as agile software development and DevOps. Short iterations and automated builds and tests allow errors to be identified and corrected immediately.

50 years ago, our code — mostly using the imperative language FORTRAN — was punched on cards, and the interaction with the computer was via the punch card reader (which sometimes scratched cards). We got the results of a program run after about 20 minutes, because we had to wait until the operator distributed the printouts on the shelves. Debugging mostly meant reading hexadecimal core dumps and finding, in one of the registers, the number sequence 1717 that meant “unknown”. Then we wrote the correction on paper, queued at the machine for punching the needed cards, went to the card reader, and finally, after approximately one hour, we had the results of our modifications.

My early “DevOps” experience came after I learned to run the Hönggerberg satellite of the ETH computer center on my own when I was a FORTRAN programmer at the Laboratory of Atmospheric Physics. I was operator and developer at the same time back then: I started the satellite by entering the binary address of the tape machine with toggle switches to load the operating system, read in my cards, got my printouts immediately, made the necessary corrections and started the next round.

40 years ago, I joined a software engineering company in Zurich that had an HP3000 computer and, at the workplaces, terminals with 80 green characters per line and approximately 20 lines on the screen. We programmed in BASIC, and there was not much support for debugging. Because most of the processing involved the IMAGE3000 network database (a database of the CODASYL type), analyzing the logs of the database modifications often helped me.

A major shift happened around 35 years ago, when the terminals were replaced by workstations with more tools supporting development and debugging. My first workstation was the SYMBOLICS machine. I got it when I worked in a group exploring new possibilities for developing applications written in LISP. This was my start with object-oriented programming, using a very sophisticated development and runtime environment for building expert systems.

30 years ago, I joined DEC (Digital Equipment Corporation), providing consulting for UNIX and CASE (Computer-Aided Software Engineering) tools. At that time, we already had sophisticated CASE tool sets supporting the programming languages C and C++. CASE tools were getting better and were augmented by well-established practices of software engineering (like debugging), architectural methods (like defining functional and non-functional requirements as well as architectural decisions), and project management (including agile development).

From Procedural Programming/Languages/Paradigm to Object-Orientation

FORTRAN was my first major programming language, from 50 until 40 years ago. FORTRAN emphasizes scientific computing and solving problems by mathematics; thus, we solved problems mainly by programming algorithms, storing data temporarily in arrays and matrices, and using processing loops. We structured programs by function and procedure calls. My first programming job was as a FORTRAN programmer for the Laboratory of Atmospheric Physics, processing data of ozone measurements in the atmosphere. For my diploma thesis (today: master’s thesis) as well as my PhD thesis, I worked on numerical analysis tasks such as conformal mapping, approximation algorithms, integral equations and large linear equation systems. FORTRAN was designed with punch card structures in mind; e.g., the first characters of each line are reserved, so that a ‘C’ in column 1 indicates that the line is a comment.

I programmed my early commercial applications — financial administration, accounting, and production planning — in BASIC 40 years ago. Coding with this procedural language was a struggle with a lot of loops and GOTO statements, which was error-prone and, as Dijkstra had already pointed out in 1968, actually harmful [2].

Getting in touch with LISP, a new development environment, and a workstation with graphical capabilities opened up a new world. We learned how to address real-world problems with objects that wrap data and logic (instead of procedures accessing arrays with value lists etc.), applying new ideas about program design. Moving to object-orientation and LISP programming, I learned that the new concepts better support the complex structures of the real world and are more suitable for new designs of user interfaces.2
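
To illustrate what “objects that wrap data and logic” means in practice, here is a minimal sketch in Java (a hypothetical Account class, not code from the bank project): the balance and the operations on it live together, instead of being kept in arrays that free-standing procedures manipulate.

```java
// Illustrative only: a hypothetical Account object that wraps its data
// (the balance) together with the logic operating on it, instead of keeping
// balances in parallel arrays manipulated by free-standing procedures.
public class Account {
    private final String owner;
    private long balanceInCents;

    public Account(String owner, long initialBalanceInCents) {
        this.owner = owner;
        this.balanceInCents = initialBalanceInCents;
    }

    // Behavior lives next to the data it protects.
    public void deposit(long amountInCents) {
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        balanceInCents += amountInCents;
    }

    public long balanceInCents() {
        return balanceInCents;
    }

    public static void main(String[] args) {
        Account account = new Account("Alice", 10_000);
        account.deposit(2_500);
        System.out.println(account.balanceInCents()); // prints 12500
    }
}
```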

In the following years, I also learned C++ and design methods for object-orientation. At the beginning there were multiple design methods for object-orientation; I studied the methods from Booch, Rumbaugh, Jacobson and Coad/Yourdon. An initiative of the OMG (Object Management Group) led to a single notation, UML (Unified Modeling Language).3 The Rational Unified Process (RUP) was a popular process with UML as its notation.4

Moreover, we have in the meantime well-established design methods, like DDD (Domain-Driven Design). All these approaches are much closer to real-world concepts than the machine-oriented and algorithm-oriented programming languages like FORTRAN and BASIC we started with.

Distributed Computing

Processing a program by using punch cards involves no distribution. However, already in the early years there were program suites with multiple programs running one after the other, with data transferred by files. The job control system we developed with LISP managed the nightly processing of the bank’s business by defining appropriate flows of program executions. Input was entered via terminals, output often printed.

When we got workstations and PCs with their own processing power, we soon had a three-tier environment of PCs, Unix machines and mainframes, which somehow had to work together. This required distribution, because programs on different machines had to interact (instead of one program processing and sending its results to the next). Suddenly, at commercial companies, distribution was getting very important, and the integration of application components was a major issue. I worked on a concept that replaced terminals with workstations interacting with mainframes so that end users would get better functionality. We had chosen CORBA (Common Object Request Broker Architecture) from the OMG (Object Management Group) for implementing the integration of processing between the PCs of the users, UNIX computers and UNISYS mainframes. This integration technology allowed the so-called “plumbing” of the protocol handling to be generated.

The next wave was Enterprise Application Integration (EAI), where the integration layer also included the routing and transformation of information (as covered by the “Enterprise Integration Patterns” website and book). This was my focus when I worked nine months as an EAI architect developing the design and the usage of an integration platform for FACT, the “Financial Accounting of Container Terminals” at a container-shipping company, providing the interaction of independent applications in the harbors around the globe with the central financial accounting application. The underlying technology was IBM WebSphere Message Broker (as the integration bus product from IBM was named in those days) combined with IBM WebSphere MQ as the transport channel (providing asynchronous message queuing across multiple hardware and software platforms).
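
To make the mediation idea concrete, here is a minimal, self-contained sketch of transform-and-route over asynchronous queues. It is not the actual Message Broker/MQ setup of the FACT solution; the queues, field names and currency handling are invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of the EAI idea: asynchronous queues decouple sender and
// receiver (time autonomy), and a mediation step in the middle transforms
// and routes the message. All names and formats here are invented.
public class MediationSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Map<String, String>> inbound = new LinkedBlockingQueue<>();
        BlockingQueue<Map<String, String>> accounting = new LinkedBlockingQueue<>();

        // A harbor application publishes an event in its own local format.
        inbound.put(Map.of("terminal", "Hamburg", "amountUSD", "1200.50"));

        // Mediation: transform to the central accounting format and route it.
        Map<String, String> event = inbound.take();
        Map<String, String> transformed = Map.of(
                "source", event.get("terminal"),
                "amount", event.get("amountUSD"),
                "currency", "USD");
        accounting.put(transformed); // routing decisions could depend on content

        // The central accounting application consumes at its own pace.
        System.out.println("Accounting received: " + accounting.take());
    }
}
```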

A sweeping change was the concept of “services” and, later on, “microservices”, which allow well-defined functionality to be provided within an application or to completely unknown consumers. Now client and server could be designed, implemented and deployed independently; this separation of concerns allows a completely new approach to distribution from a conceptual as well as an implementation view. However, we soon learned that an architecture of the whole information system has to ensure a consistent, holistic view.

Let’s view the evolution of distribution step by step and look at the impact of each step on the four autonomy types (reference, platform, time and format):

  • The first step is direct connectivity provided by a transport protocol such as TCP/IP; all integration logic must be provided by the application components that communicate. Only platform autonomy is achieved.
  • Next, the integration logic is moved from the application components to a middleware such as queue-based messaging systems or object brokers such as CORBA. This step brings reference and time autonomy.
  • In EAI (Enterprise Application Integration), mediation logic (that is, transformation and routing) is also moved out of the application components; all participating endpoints still have to install the appropriate technology and software. This step improves format autonomy.
  • The usage of (micro-)services reduces application components to their core business functions. Introducing service registries further improves reference autonomy.

Over time, the underlying technologies have been providing more support for distribution. From a business view, this reduces the complexity to be managed down to essential complexity, a term from Fred Brooks.

When using RESTful services and JSON, we need a common understanding between client and server about the messages and data transferred (taking a design viewpoint). This can be achieved by proper architecture and supporting project management.
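
As a small illustration of such a shared understanding, the following hypothetical Java record captures a message contract that client and server both have to agree on; the type and its fields are made up, and in practice the agreement might live in an OpenAPI description or shared code.

```java
// A hypothetical message contract: client and server must agree on these
// field names and types, whether the agreement is captured in an API
// description, in shared code, or (worst case) only in documentation.
public record OrderDto(String orderId, long amountInCents, String currency) {

    public static void main(String[] args) {
        // The JSON exchanged over the wire mirrors this structure, e.g.:
        // {"orderId":"A-42","amountInCents":9950,"currency":"CHF"}
        OrderDto order = new OrderDto("A-42", 9_950, "CHF");
        System.out.println(order);
    }
}
```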

In addition, we recognize that over the last years the processing environment of software systems has been shifting from on-premises environments to cloud environments. Moreover, there is much better awareness than before of the risks of distribution, also known as the fallacies of distributed computing.

Constants

Having discussed major changes over the last 40 to 50 years, the question arises whether some aspects remained constant. From my viewpoint there are several: data persistence, modularization and (foundations of) software engineering.

Data Persistence

Almost from the beginning we could use file systems and soon databases for data storage. At the very beginning of the IT journey, disk space was scarce and expensive. The applications processing ozone measurements 50 years ago were, for their time, very “data intensive”. The initial input with the original measurements was still on punch cards, but I reworked the programs so that all processed data was stored in files, physically on tapes. Finally, I replaced cabinets of punch cards with some sort of database in a file system. Specific procedures allowed four integer values to be stored within one 60-bit CDC word so that only minimal disk space was used. Nonetheless, I needed almost all available disk space of the computer center for the statistics, for which I restored the files from tapes (this was possible due to a special agreement with the director of the ETH computer center responsible for operations, who trusted the young developer and allowed using the needed storage when necessary).
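
The bit-packing trick mentioned above can be sketched as follows; this is only an illustration assuming four non-negative 15-bit values per 60-bit word (held in a Java long here), not the exact CDC encoding used back then.

```java
// Sketch of the bit-packing idea: four non-negative 15-bit integers
// (values 0..32767) stored in one 60-bit word, here held in a Java long.
// The actual CDC encoding and field widths may well have differed.
public class PackedWord {
    static long pack(int a, int b, int c, int d) {
        return ((long) a << 45) | ((long) b << 30) | ((long) c << 15) | d;
    }

    static int unpack(long word, int index) {
        return (int) ((word >> (45 - 15 * index)) & 0x7FFF); // 15-bit mask
    }

    public static void main(String[] args) {
        long word = pack(280, 310, 295, 305);  // e.g. four ozone readings
        System.out.println(unpack(word, 1));   // prints 310
    }
}
```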

Already 40 years ago we had a network database, and the commercial applications worked heavily with it. First we designed the database and only then the programs; input screens and printouts were designed along the table definitions. We already defined entities and relationships following methods that are still used today. For RDBMS (Relational Database Management Systems) I sometimes used product-specific forms tools such as FORMS to generate screens for data entry and modification as well as for queries.

With the move to object-orientation for processing and user interfaces, there was a lot of research on OODBMS (Object-Oriented Database Management Systems). But OODBMS stayed in their niches, and relational databases were not replaced; I see no reason that they would be replaced in the future. The design with entities and relationships we use today is in principle the same as 40 years ago. However, in the meantime transaction processing has evolved, and a lot of progress has been made in backup and recovery. In addition, there is much more functionality for security, such as sophisticated encryption functions (so that the backups are encrypted as well).

In addition, there are now additional database management paradigms and systems specifically tailored to other and new purposes such as large data sets or data lakes, many of them grouped under the NoSQL label: e.g., document-oriented systems such as MongoDB, key-value stores such as Redis, cloud storage such as DynamoDB, and even Kafka, at its core a distributed transaction/message log. I consider these approaches complementary to RDBMS. Moreover, Artificial Intelligence (AI) technologies can also be seen as complementary, as they use large data sets for training. Disk space is no longer an issue in most cases.

Because the operational data of commercial systems is still stored in RDBMS, applications must bridge the gap between the object-orientation of the processing and the database storage. Today we can use powerful tool sets for this purpose; various technology products, including development tool sets, are available. In the early days of LISP programming and building expert systems with KEE (Knowledge Engineering Environment), we used the toolset KEEconnection for data mappings between the expert system and the RDBMS. However, designing and implementing the mappings was not straightforward. Thus, we designed mappings for our specific purposes, which gave me the opportunity to publish papers about our work [3],[4].
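
The gap itself is easy to show: mapping a relational row to an object by hand, as in the following sketch using plain JDBC (table, column and class names are invented). Object-relational mapping tools essentially generate or hide this kind of plumbing.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of the object/relational gap: mapping one relational row to one
// object by hand. Table, column and class names are invented; today an
// object-relational mapper typically takes care of this plumbing.
public class CustomerRepository {

    public record Customer(long id, String name) {}

    public Customer findById(Connection connection, long id) throws SQLException {
        String sql = "SELECT id, name FROM customer WHERE id = ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setLong(1, id);
            try (ResultSet row = statement.executeQuery()) {
                if (!row.next()) {
                    return null; // no such customer
                }
                // The manual mapping step: relational columns -> object fields.
                return new Customer(row.getLong("id"), row.getString("name"));
            }
        }
    }
}
```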

Modularization

I have always applied the principle of modularization. My early FORTRAN programs, for instance, used function and procedure calls heavily. Moreover, at that time well-programmed library procedures were available, e.g., for solving linear equation systems. Therefore, from the beginning we used some sort of separation of concerns. Separation of concerns and information hiding are often achieved with services today, which can be seen as an adoption, adaptation and refinement of these early concepts.

In addition, I recognize that “splitting a monolith” is not a new concept. When a well-formed design was done in early IT times, the structures were always split into pieces, and the interaction between these pieces happened through appropriate interfaces. The evolution is that the split can now be distributed, and that the user interface part is much more elaborate, especially when we consider the rich functionality of our smartphone apps.

Application Programming Interfaces (APIs) are everywhere; already the very first programs needed a careful design of local interfaces, which mainly meant designing the parameters of the functions and procedures called within the programs. I remember well that an error such as overwriting a function parameter within the function was hard to find.
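
As a rough modern analogue of that class of error, consider the following Java sketch (the example is invented): a routine quietly reuses its array parameter as scratch space and thereby destroys the caller’s data. In FORTRAN, where arguments were passed by reference, even scalar parameters could be corrupted this way.

```java
import java.util.Arrays;

// Illustrative analogue of the old parameter-overwriting bug: the callee
// "reuses" its array parameter as scratch space and silently changes the
// caller's data, which makes the defect hard to find.
public class ParameterOverwrite {

    // Intends to return the mean, but destroys the caller's measurements.
    static double buggyMean(double[] measurements) {
        double sum = 0.0;
        for (int i = 0; i < measurements.length; i++) {
            sum += measurements[i];
            measurements[i] = 0.0; // the hard-to-find side effect
        }
        return sum / measurements.length;
    }

    public static void main(String[] args) {
        double[] ozone = {280.0, 310.0, 295.0};
        System.out.println(buggyMean(ozone));       // 295.0
        System.out.println(Arrays.toString(ozone)); // [0.0, 0.0, 0.0] -- data gone
    }
}
```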

We have a better understanding of the importance of local and remote interfaces today. RESTful HTTP is popular and there are other protocol options (e.g., gRPC and GraphQL) — also because APIs are a well-proven practice (which was not the case for functions and procedures). By the way, there is a comprehensive collection of “Patterns for API Design” [5]; some of these patterns were appropriate in the early days of IT too (which is a good thing for patterns).

Software Engineering

Careful thinking about the purpose and the use of a program at the beginning of the work has been a constant from the beginning of IT until today. I like to mention that working in IT is mainly problem solving, and some good practices are independent of IT. Generally speaking, we still work the same way as years ago; however, we execute and iterate through it much faster and with more people involved, possibly spread around the globe. Careful and appropriate governance and management guidance are required. The purpose of an application still must be known from the beginning.

Version control systems, CI/CD pipelines, package/dependency managers and library repositories allow us to easily integrate existing code, for instance built and maintained in open-source projects. As an example, hardly anybody writes logging code or JSON to Java converters anymore these days; frameworks and existing external libraries take care of that. The IEEE Software Insights article “Software Reuse in the Era of Opportunistic Design” discusses the pros and cons and challenges of this approach.
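
For example, instead of hand-writing a JSON-to-Java converter, one would typically let a library such as Jackson do the binding, assuming it is on the classpath; the Measurement class below is a made-up example type.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

// Sketch of "nobody writes JSON-to-Java converters by hand anymore": the
// Jackson library (assumed to be on the classpath) binds JSON text to a
// plain Java object. Measurement is a made-up example type.
public class JsonBindingExample {

    public static class Measurement {
        public String station;    // Jackson binds public fields by default
        public double ozoneDobson;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"station\":\"Arosa\",\"ozoneDobson\":312.5}";
        Measurement m = new ObjectMapper().readValue(json, Measurement.class);
        System.out.println(m.station + ": " + m.ozoneDobson);
    }
}
```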

Because an application serves the purposes of users, developers must talk to these users; this was already necessary in the early days. The emphasis has shifted a bit: less care is needed for the size of the data, more care for the richness of today’s user interfaces. 40 years ago, we had to discuss how the 24 lines of 80 characters form a meaningful screen and how printed lists should be structured. Today, we discuss with the users the appearance of a well-structured screen at their workplaces, as well as accessibility issues.

Closing Remarks

Looking back at what happened in IT in the last 40 to even 50 years shows huge progress and development. However, there were some constants: just as the horse carriage and today’s passenger cars both have four wheels (just note that the Maya culture had no wheels), computers have always had central units for processing and storage and are run by code. On the other hand, the observation that the two CDC computers installed at the ETH weighed 60 tons and had less processing power than today’s 50-gram smartwatch reminds us of the huge evolution we have seen; this is true for other areas as well (and all this with a brain that was mostly formed in the Stone Age).

Hans-Peter Hoidn (LinkedIn profile)
February 2025

My earlier guest post: “How to Build and Run a Decision-Making Architecture Board”.

About the Author

Hans-Peter Hoidn is retired from IBM. He has worked as an IT Architect since this profession became known within IT; he is certified by The Open Group as a Distinguished IT Architect. He started programming in 1971 and completed his education at ETH Zurich as Dr. sc. math. ETH. He served as programmer, project leader, IT architect, team lead, and manager throughout his career. As a consultant and IT architect, he helped customers in Switzerland, Eastern Europe, the Middle East and Africa to understand the potential of Service-Orientation and Business Process Management for their business problems.

References

[1] Klaus Renzel and Wolfgang Keller, “Client/Server Architectures for Business Information Systems — A Pattern Language”, In: Proceedings of the Conference on Pattern Languages of Programs (PLoP ‘97), Hillside, 1997.
[2] Edsger Dijkstra (March 1968), “Go To Statement Considered Harmful”, Communications of the ACM 11 (3): 147–148. DOI: 10.1145/362929.362947, S2CID 17469809.
[3] Hans-Peter Hoidn, “Practical Experiences in Coupling Knowledge Base and Database in a Productive Environment”, In: Karagiannis, D. (eds) Information Systems and Artificial Intelligence: Integration Aspects. IS/KI 1990. Lecture Notes in Computer Science, vol 474. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-53557-8_32
[4] Hans-Peter Hoidn and Riccardo von Vintschger, “Coupling Knowledge Base and Database in a Productive Environment”, In: Tjoa, A.M., Wagner, R. (eds) Database and Expert Systems Applications. Springer, Vienna. https://doi.org/10.1007/978-3-7091-7553-8_86
[5] Olaf Zimmermann, Mirko Stocker, Daniel Lübke, Uwe Zdun and Cesare Pautasso, “Patterns for API Design: Simplifying Integration with Loosely Coupled Message Exchanges”, Addison-Wesley Professional, 2022.

Notes

  1. A TR4 is on display at “Deutsches Museum”, Munich. 

  2. It seems to me that the move to object-orientation happened in parallel to the move to more suitable user interfaces on workstations overcoming the restrictions of terminals. In my opinion, object-orientation is a major enabler for today’s rich user interfaces. 

  3. Note that Rumbaugh was later still active in the OMG for UML 2.0. 

  4. The Wikipedia page on UML reviews the history of the methods.