AI Resources, Part 3: Software Architecture Use Cases, AI-Assisted Coding, Agentic AI
Reading time: 4 minutes
I shared “Ten Resources about AI in Software Engineering You Do Not Want to Miss” and “More Resources about AI You Might Want to Review” in two blog posts in 2025. This post provides more pointers, grouped by the topics in the title of the post plus Prompt Engineering.
Generative AI for Software Architects
This blog is a software architecture blog after all, so let’s see what the community currently thinks about use cases for AI when architecting software-intensive systems:
- A fairly detailed LLM comparison by CloudWay Digital uses Grok as one of the tested LLMs: “Can AI Replace Software Architects? I Put 4 LLMs to the Test” (April 2025).
- Stefan Toth asks “Kill the Vibe? Architecture in the Age of AI” (November 2025). He looks into AI review agents, LLM coding workflows, and custom analysis agents.
- Humberto Cervantes, Rick Kazman, and Yuanfang Cai report two case studies feeding ADD 3.0 into AI/LLMs: “An LLM-assisted approach to designing software architectures using ADD” (June 2025).
- “LLMs for software architecture: possibilities, pitfalls, limitations and solutions” is a rather long article on Michael Stal’s blog accompanying a conference presentation (November 2025).
- “Software Architecture Meets LLMs” provides a Systematic Literature Review (SLR) (eight authors, May 2025). The SLR is available on ResearchGate.
The CloudWay Digital comparison draws the following conclusion:
“At least for now, GenAI won’t be replacing architects anytime soon.” “But they will be replaced by architects who know how to use AI better than anyone else.”1
🤔 Do you agree? What will the landscape look like in 12 months?
AI-Assisted and AI-Based Coding
Vibe coding has been discussed very actively since my last post:
- Uwe Friedrichsen reports “Solving the wrong problem: The nagging feeling that something does not fit” (October 2025).
- Claas Busemann observes “Vibe coding is fast. Until it isn’t.” on LinkedIn (January 2026).
- Lionel Briand summarizes his experience with generative AI and LLMs to support software engineering tasks on LinkedIn too (January 2026).
- “The Design Space of LLM-Based AI Coding Assistants: An Analysis of 90 Systems in Academia and Industry”, a paper by Sam Lau and Philip Guo, asks about the design decisions involved in building an AI coding assistant (VLHCC, October 2025).2
- The book “Vibe Coding” by Gene Kim and Steve Yegge (October 2025) is on my reading list.
These are just a few resources I found informative and thought-provoking; there is much more out there of course.
Amazing results are reported, as are lessons learned; disillusionment is reported too. Time will tell how fit for purpose and how maintainable the generated code bases are — and who will look after them.
🤔 One gets the impression that not trying, learning, and using these tools is not an option if one does not want to be left behind. What is your opinion?
Prompt Engineering
Assuming that input specifications matter (and will continue to matter), the garbage-in, garbage-out principle still holds. Getting the prompts right is critical to success in generative AI use cases:
- Alex Chesser argues that “Attention Is the New Big-O: A Systems Design Approach to Prompt Engineering”: An “LLM doesn’t read in the same order as you or I. Instead, it weights relationships between all tokens at once, with position and clustering dramatically changing what gets noticed”.
- Fadeke Adegbuyi shares “Prompt Engineering Best Practices: Tips, Tricks, and Tools” in an article on the DigitalOcean website.
- MIT Sloan Teaching & Learning Technologies contributes “Effective Prompts for AI: The Essentials”.
- Compact advice can be found in online articles from TechTarget, PromptHub and DEV.
- OpenAI Platform: “Prompt engineering: Enhance results with prompt engineering strategies” and “Best practices for prompt engineering with the OpenAI API”.
- Google Cloud: “Prompt engineering: overview and guide”.
- Anthropic Claude Docs: “Prompt engineering overview” and “Prompting best practices for Claude 4.x”.
In summary, the task description should be specific/narrow and include an explicit output specification:
- Define the usage scenario and role the AI should assume. Provide a template and/or example(s) for the expected output. Specify the desired writing style, expected quality level and audience expectations.
- Structure your input and the output example/template with semantic anchors, i.e., well-defined terms that serve as reference points in the conversations with the AI.
- Treat prompts as engineering artifacts like source code. For instance, review and version them (“prompts as code”).
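As a hedged illustration of the last point, a prompt can live in the code base as a versioned, reviewable artifact. The template below combines the advice from the list — role, output specification, example, audience, and semantic anchors — in one place; the anchor names, version number, and ADR review scenario are all invented for this sketch:

```python
# "Prompts as code": a versioned prompt template, reviewed like source code.
# All names (anchors, version, scenario) are illustrative, not from any tool.
from string import Template

PROMPT_VERSION = "1.2.0"  # bump and code-review on every change

# Semantic anchors (ROLE, TASK, OUTPUT SPEC, EXAMPLE OUTPUT, ADR) act as
# well-defined reference points in later turns of the conversation.
REVIEW_PROMPT = Template("""\
ROLE: You are a senior software architect reviewing an ADR.
TASK: Assess the decision below for completeness and risks.
OUTPUT SPEC: Return exactly three bullet points, each under 25 words,
written for an audience of experienced developers.
EXAMPLE OUTPUT:
- The decision lacks a rollback strategy.
ADR:
$adr_text
""")

def render_prompt(adr_text: str) -> str:
    """Render the versioned template; callers supply only the ADR text."""
    return REVIEW_PROMPT.substitute(adr_text=adr_text)

prompt = render_prompt("Use PostgreSQL for the order service.")
```

Because the template is plain code, it can be diffed, versioned, and unit-tested just like any other engineering artifact.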
🤔 Is this obvious? Or worth being reminded of? Will prompt (and context) engineering still matter in “generative AI 5.0”?3
Agentic AI, AI Integration, MCP
Enterprise Application Integration (EAI) and Extract-Transform-Load are evolving into AI agent integration:
- Tim Berners-Lee stated that “AI Integration Is the New Moat” (October 2025).
- “Building effective agents” by Erik Schluntz and Barry Zhang on the Anthropic blog shares learnings and gives practical advice (December 2024).4
- Lilian Weng’s “LLM-powered Autonomous Agents” has the technical details (Lil’Log, June 2023).5
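To make the agent pattern these articles describe concrete, here is a minimal sketch of the core loop: a model decides between requesting a tool call and returning a final answer, and a driver executes tools until an answer arrives. The model is a stub and the tool names are invented; a real system would call an actual LLM API instead:

```python
# Minimal agent loop sketch. The "model" is a hard-coded stub standing in
# for an LLM; tool names and messages are illustrative only.
def stub_model(messages):
    """Pretend model: request the weather tool once, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Zurich"}}
    return {"answer": "It is sunny in Zurich."}

TOOLS = {"get_weather": lambda city: f"sunny in {city}"}

def run_agent(user_input, model, tools, max_steps=5):
    """Loop: ask the model, execute requested tools, stop on an answer."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        reply = model(messages)
        if "answer" in reply:
            return reply["answer"]
        result = tools[reply["tool"]](**reply["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge within max_steps")

answer = run_agent("Weather in Zurich?", stub_model, TOOLS)
```

The `max_steps` guard matters in practice: without it, a looping model turns into an unbounded (and costly) conversation.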
The Model Context Protocol (MCP) receives a lot of attention at present (and there are alternatives such as A2A):
- A Latent Space post explains “Why MCP Won” (March 2025).
- There is a public GitHub repository with a List of MCP servers.
- An Architecture overview can be found on modelcontextprotocol.io.
- Learn about “10 strategies to reduce MCP token bloat” on The New Stack (February 2026).
- FastMCP implements the protocol specification in Python and provides test tools.
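Under the hood, MCP messages are JSON-RPC 2.0. The sketch below builds a `tools/call` request using only the Python standard library; the tool name and arguments are invented for the example, and a real client would also handle initialization and transport:

```python
# Sketch of an MCP tools/call request as a JSON-RPC 2.0 message.
# Tool name and arguments are illustrative, not from a real server.
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

request = make_tool_call(1, "query_adrs", {"status": "accepted"})
decoded = json.loads(request)
```

Libraries such as FastMCP hide this wire format behind decorators, but seeing the raw message helps when debugging token bloat or interoperability issues.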
🤔 Will AI agent integration bring additional effort (and risk)? Will integration cost still matter or will the tedious parts be done in “vibe integrating”?
If more integration happens once agents start talking to each other, additional integration capabilities will be required, and someone will have to own them.
Final thoughts
The innovation speed has been, and continues to be, remarkable. So is the range of perceptions: the good, the bad, and the ugly.
- Lex Fridman reports excitement but also identifies an AI agent security problem (LinkedIn, February 2026).
- Simon Willison’s “The lethal trifecta for AI agents” is the combination of access to private data, exposure to untrusted content, and the ability to communicate externally.
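As a toy illustration, the trifecta can be phrased as a capability check on an agent configuration; the capability labels below are invented for the sketch and do not come from any real framework:

```python
# Hedged sketch: flag agent configurations that combine all three risky
# capabilities of the "lethal trifecta". Labels are illustrative only.
TRIFECTA = {"private_data", "untrusted_content", "external_communication"}

def has_lethal_trifecta(capabilities: set) -> bool:
    """True only if the agent holds all three risky capabilities at once."""
    return TRIFECTA.issubset(capabilities)

risky = has_lethal_trifecta(
    {"private_data", "untrusted_content", "external_communication", "logging"}
)
safe = has_lethal_trifecta({"private_data", "external_communication"})
```

The point of the trifecta framing is exactly this: removing any one of the three capabilities breaks the attack chain.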
Has everything else become irrelevant? The following sources say “no”:
- Austin Henley asks “Dear Researchers: Is AI all you’ve got?” (January 2026).
- Diomidis Spinellis discusses “Vibe coding toward the incident horizon” (March 2026).
- Jordi Cabot calls for “A special conference track for endangered research topics” (February 2024).
🤔 How do you respond to the authors’ arguments?
Two important concerns seem to be underrepresented in the public discussion: architect/developer privacy and the environmental impact of AI.
🤔 Is it acceptable to be logged in (and observed) all the time, with code and documentation possibly going to a cloud provider on every edit (with a price plan attached)?
🤔 What is the sustainability footprint of training and using LLMs?
While not answering the above questions, the resources referenced in this post will hopefully help you to use AI effectively and efficiently — as well as reliably and responsibly.
– Olaf
My previous posts on the subject are:
- “Ten Resources about AI in Software Engineering You Do Not Want to Miss”
- “More Resources about AI You Might Want to Review”
Notes
1. The Awesome GitHub Copilot repository has an ADR Generator agent. ↩
2. This paper pointer comes from a post in Austin Henley’s blog. ↩
3. Expected to arrive still this year? 😏 ↩
4. The MADR maintainers came across this article when discussing agentic architecting, architectural decision making in particular. ↩
5. This article is referenced in Martin Fowler’s bliki. ↩