AI in Construction: Why the Hype Is Harming the Next Generation
The construction industry is drowning in AI-generated noise. LinkedIn is full of posts about digital construction written by people who've never set foot on a manufacturing floor, let alone managed a complex MEP installation. The content looks impressive: long-form, well formatted, peppered with industry terms. But it's hollow, and it's doing real damage.
Gary Cowan, Head of Digital Construction at Kane Group, has spent two decades building high-precision MEP systems. He's watched AI transform from a promising tool into a crutch that's undermining young engineers' development. His perspective is blunt: AI is a double-edged sword, and most people are holding it by the blade.
AI can only make you better if you already know enough to catch it when it's wrong.
What most people get wrong
The fundamental mistake is treating AI as a shortcut to expertise rather than a tool to accelerate it. A fresh graduate asks ChatGPT about digital construction workflows, receives a confident-sounding answer, and assumes it's correct. They have no baseline to judge whether the information is accurate, outdated, or completely fabricated.
Someone with twenty years of experience can ask the same question and immediately spot the errors. They know when AI is citing non-existent standards or suggesting workflows that would fail in practice. The difference isn't the tool; it's the knowledge required to use it responsibly.
This creates a dangerous feedback loop. Young engineers post AI-generated content on professional platforms. AI scrapes these posts and incorporates them into its training data. The misinformation compounds with each cycle, and the people spreading it don't even realize they're part of the problem.
Why AI struggles with construction standards
Construction and engineering are governed by specific standards, most of which are locked behind paywalls. ISO standards, building codes, industry-specific guidelines: AI has no access to these unless someone has uploaded pirated versions, which is rare and inconsistent.
Instead, AI learns from what's publicly available: blog posts, social media, forums, and marketing materials. These sources aren't authoritative. They're often wrong. When someone asks for guidance on LOD requirements, fabrication tolerances, or coordination workflows, AI returns answers based on repetition, not accuracy.
If you're asking AI about something governed by a standard or regulation, verify the answer against the actual document. If you don't have access to that document, you're not ready to trust the AI's response.
The lazy engineer problem
AI makes it too easy to avoid hard work. Why spend hours reading technical documentation when you can ask a question and get an instant answer? Why draft an email carefully when AI can write it for you? Why think through a design problem when AI can generate three options?
This sounds convenient, but it prevents learning. The difficult work (reading dense specifications, debugging coordination issues, understanding why one approach works and another doesn't) is where expertise develops. Skipping that work means staying permanently junior, regardless of your title.
In practice, engineers who rely too heavily on AI plateau early. They can complete tasks quickly but struggle with novel problems. They can't distinguish good advice from bad. They become dependent on the tool and brittle without it.
The bluffing epidemic on LinkedIn
One of the most visible symptoms is the explosion of AI-generated thought leadership. People with two years of industry experience are posting thousand-word essays about digital transformation, MEP coordination strategies, and the future of construction technology.
The posts follow a predictable pattern: generic opening hook, bullet-pointed insights, inspirational closing. The writing is smooth and the formatting is clean, but the content is superficial. When challenged in the comments, the author responds with more AI-generated text: same tone, same structure, same lack of substance.
This isn't just annoying. It actively harms the industry. Genuinely knowledgeable professionals are leaving platforms like LinkedIn because the signal-to-noise ratio has collapsed. Young engineers looking for mentorship are finding AI-generated content instead of hard-earned wisdom. The experts who could teach them are walking away.
How to use AI responsibly
AI isn't inherently bad. Used correctly, it's a powerful accelerator. The key is ensuring you have the foundation to use it safely.
Build knowledge first. Spend your first few years doing the hard work: reading standards, making mistakes on real projects, learning from experienced colleagues. Reach for AI when you can evaluate its answers critically.
Use AI for speed, not shortcuts. If you already know how to write a technical specification, AI can help you draft it faster. If you don't know what should be in the specification, AI will produce something plausible but wrong.
Verify everything. Cross-reference AI outputs against authoritative sources. If you're working on a project governed by specific codes or standards, check the actual documents. AI's confidence level has no correlation with accuracy.
Don't hide behind AI. If you're unsure about something, say so. Asking for help from a senior engineer will teach you more than asking ChatGPT. Being honest about gaps in your knowledge builds trust and accelerates learning.
The wider cost
The real tragedy isn't just individual careers stunted by over-reliance on AI. It's the erosion of expertise across the industry. Construction is already struggling with knowledge transfer as experienced professionals retire. AI-generated content creates the illusion of expertise while preventing its actual development.
Young engineers need to see what real mastery looks like: the depth of understanding, the careful judgment, the ability to explain not just what works but why. They need mentors who can teach them to think, not just to execute. AI can't provide that. LinkedIn posts can't provide that. Only time, effort, and genuine experience can.
AI is here to stay. The question isn't whether to use it, but how. The engineers who thrive will be those who treat it as a tool to amplify their expertise, not replace it. The ones who fail will be those who never built the expertise in the first place.
If you can't tell when AI is lying to you, you're not ready to use it professionally. That's not gatekeeping. That's basic professional competence.