ZIO
ZIO Consulting IT architect and software architecture coach.

AI Resources, Part 3: Software Architecture Use Cases, AI-Assisted Coding, Agentic AI


Reading time: 4 minutes

I shared “Ten Resources about AI in Software Engineering You Do Not Want to Miss” and “More Resources about AI You Might Want to Review” in two blog posts in 2025. This post provides more pointers, grouped by the topics in the title of the post plus Prompt Engineering.

Generative AI for Software Architects

This blog is a software architecture blog after all, so let’s see what the community currently thinks about use cases for AI when architecting software-intensive systems:

The CloudWay Digital comparison draws the following conclusion:

“At least for now, GenAI won’t be replacing architects anytime soon.” “But they will be replaced by architects who know how to use AI better than anyone else.”1

🤔 Do you agree? What will the landscape look like in 12 months?

AI-Assisted and AI-Based Coding

Vibe coding has been discussed very actively since my last post:

These are just a few resources I found informative and thought-provoking; there is much more out there of course.

Amazing results are reported, as well as lessons learned. Disillusionment is reported too. Time will tell how fit for purpose and how maintainable the generated code bases are — and who will look after them.

🤔 One gets the impression that not trying, learning, and using these tools is not an option if one does not want to be left behind. What is your opinion?

Prompt Engineering

Assuming that input specifications matter (and will continue to matter), the garbage in, garbage out principle still holds. Getting the prompts right is critical to success in generative AI use cases:

  1. Alex Chesser argues that “Attention Is the New Big-O: A Systems Design Approach to Prompt Engineering”: An “LLM doesn’t read in the same order as you or I. Instead, it weights relationships between all tokens at once, with position and clustering dramatically changing what gets noticed”.
  2. Fadeke Adegbuyi shares “Prompt Engineering Best Practices: Tips, Tricks, and Tools” in an article on the DigitalOcean website.
  3. MIT Sloan Teaching & Learning Technologies contributes “Effective Prompts for AI: The Essentials”.
  4. Compact advice can be found in online articles from TechTarget, PromptHub and DEV.
  5. OpenAI Platform: “Prompt engineering: Enhance results with prompt engineering strategies.” and “Best practices for prompt engineering with the OpenAI API”
  6. Google Cloud: “Prompt engineering: overview and guide”.
  7. Anthropic Claude Docs: “Prompt engineering overview” and “Prompting best practices for Claude 4.x”.

In summary, the task description should be specific/narrow and include an explicit output specification:

  1. Define the usage scenario and role the AI should assume. Provide a template and/or example(s) for the expected output. Specify the desired writing style, expected quality level and audience expectations.
  2. Structure your input and the output example/template with semantic anchors, i.e., well-defined terms that serve as reference points in the conversations with the AI.
  3. Treat prompts as engineering artifacts like source code. For instance, review and version them (“prompts as code”).
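The three points above can be sketched in code. This is a minimal illustration of the “prompts as code” idea only; the `PromptTemplate` class, its field names, and the ADR example are my own assumptions, not taken from any of the referenced guides:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptTemplate:
    """A prompt treated as a versioned engineering artifact ("prompts as code")."""
    version: str          # review and version prompts like source code
    role: str             # the role the AI should assume
    task: str             # specific, narrow task description with placeholders
    output_template: str  # explicit output specification (template/example)
    audience: str         # expected audience and quality level

    def render(self, **anchors: str) -> str:
        # Semantic anchors: well-defined terms substituted into the task text,
        # serving as stable reference points in the conversation with the AI.
        return (
            f"Role: {self.role}\n"
            f"Task: {self.task.format(**anchors)}\n"
            f"Audience: {self.audience}\n"
            f"Output format:\n{self.output_template}"
        )

# Hypothetical example: drafting an architectural decision record (ADR).
adr_prompt = PromptTemplate(
    version="1.0.0",
    role="experienced software architect",
    task="Draft an architectural decision record for choosing {technology} "
         "in the context of {context}.",
    output_template="## Context\n## Decision\n## Consequences",
    audience="senior developers; concise, neutral tone",
)

print(adr_prompt.render(technology="an event broker", context="order processing"))
```

Because the template is an immutable, versioned value, it can live in the repository next to the code it helps produce, be diffed in reviews, and be re-rendered with different semantic anchors.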

🤔 Is this obvious? Or worth being reminded of? Will prompt (and context) engineering still matter in “generative AI 5.0”?3

Agentic AI, AI Integration, MCP

Enterprise Application Integration (EAI) and Extract-Transform-Load (ETL) are evolving into AI agent integration:

The Model Context Protocol (MCP) receives a lot of attention at present (and there are alternatives such as A2A):

🤔 Will AI agent integration bring additional effort (and risk)? Will integration cost still matter or will the tedious parts be done in “vibe integrating”?

If more integration happens once agents start talking to each other, additional integration capabilities will be required, and someone will have to own them.
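To make “agents start talking” a bit more concrete: MCP frames requests as JSON-RPC 2.0 messages, with a client invoking a server-side tool via the `tools/call` method. The sketch below assumes that framing; the tool name and arguments are hypothetical, for illustration only:

```python
import json

# MCP messages follow JSON-RPC 2.0; "tools/call" invokes a tool that the
# server advertised earlier (via "tools/list") after session initialization.
def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical tool: an agent asking another system for a decision record.
message = make_tool_call(1, "fetch_adr", {"decision_id": "ADR-0007"})
print(message)
```

Seen this way, agent integration inherits familiar EAI concerns: message contracts, versioning of tool schemas, and ownership of the endpoints on both sides.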

Final thoughts

The innovation speed has been, and continues to be, remarkable. So is the perception of the good, the bad and the ugly.

Has everything else become irrelevant? The following sources say “no”:

🤔 How do you respond to the authors’ arguments?

Two important concerns seem to be underrepresented in the public discussion: architect/developer privacy and the environmental impact of AI:

🤔 Is it acceptable to be logged in (and observed) all the time, with code and documentation possibly going to a cloud provider on every edit (with a price plan attached)?

🤔 What is the sustainability footprint of training and using LLMs?

While not answering the above questions, the resources referenced in this post will hopefully help you use AI effectively and efficiently, as well as reliably and responsibly.

– Olaf

My previous posts on the subject are:

Notes

  1. The Awesome GitHub Copilot repository has an ADR Generator agent.

  2. This paper pointer comes from a post in Austin Henley’s blog.

  3. Expected to arrive still this year? 😏

  4. The MADR maintainers came across this article when discussing agentic architecting, architectural decision making in particular.

  5. This article is referenced in Martin Fowler’s bliki.