Discussion about this post

Spud Taters:

I like your "Comprehension coverage" metric. This is especially important for microservices requiring high availability. Error handling and attribution, backoff, retries, failover: all of it needs to be understood and manually tested. I can't imagine outsourcing the thinking behind a distributed system to an AI. It's a good final lint check, but not an author.
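To make that concrete, below is a minimal sketch of the kind of retry-with-backoff logic such a service depends on. The helper name, the choice of retryable exception, and the timing parameters are all illustrative assumptions, not anyone's production code; the point is that every branch encodes a decision the author must be able to explain.

    import random
    import time

    def call_with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
        """Retry a flaky operation with exponential backoff and full jitter.

        Each branch encodes a judgment call: which errors count as
        transient, how quickly retries ramp up, and when to give up
        and let failover or alerting take over.
        """
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except ConnectionError:  # retry only transient, retryable failures
                if attempt == max_attempts:
                    raise  # budget exhausted: surface the error to failover/alerting
                # Full-jitter backoff: sleep a random duration up to the capped
                # exponential delay, avoiding synchronized retry storms.
                delay = min(max_delay, base_delay * 2 ** (attempt - 1))
                time.sleep(random.uniform(0, delay))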

Hania:

I can understand Microsoft, Meta, and Anthropic talking about lines of code written with AI, because they sell or publish products related to AI-assisted code generation. For everybody else not in the business of AI copilots, focusing on AI-generated lines of code as a metric is perhaps really a proxy for "we too are using the latest, greatest tools in software engineering, like the coolest teams out there". However, as you explain, focusing on lines of code signals a misunderstanding of what is actually meaningful in software engineering, which, in addition to being unfocused at the business level, also erodes trust in the engineering leadership itself.

Low trust, a high volume of AI-generated code, and little understanding of what that code is really doing may be the real AI bubble we are about to watch burst.
