Some great commentary from Bruce Schneier and Nathan Sanders on the mistakes made by AI:

Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make. These days, casinos rotate their dealers regularly, because they make mistakes if they do the same task for too long. Hospital personnel write on limbs before surgery so that doctors operate on the correct body part, and they count surgical instruments to make sure none were left inside the body. From copyediting to double-entry bookkeeping to appellate courts, we humans have gotten really good at correcting human mistakes.

Humanity is now rapidly integrating a wholly different kind of mistake-maker into society: AI. Technologies like large language models (LLMs) can perform many cognitive tasks traditionally fulfilled by humans, but they make plenty of mistakes. It seems ridiculous when chatbots tell you to eat rocks or add glue to pizza. But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do.

Much of the friction—and risk—associated with our use of AI arise from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes.

The insights from this article are truly thought-provoking! In my professional role, I am focused on developing software environments enhanced with AI capabilities to boost the efficiency and productivity of software developers. However, we must remember that developers still need to review what the AI suggests, ensure the application compiles, and rigorously test it to confirm it is fit for purpose. Essentially, you are the pilot, and you must validate the decisions from your Copilot.
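To make that "pilot" mindset concrete, here is a minimal sketch: the helper function below is a hypothetical stand-in for an AI suggestion (the name and behavior are invented for illustration, not taken from any particular assistant), and the tests are the part the human developer owns.

```python
# A minimal sketch of the "pilot" mindset. The helper stands in for an
# AI-suggested function (hypothetical, for illustration only); the tests
# are how the developer confirms the suggestion is fit for purpose.

def minutes_from_hhmm(value: str) -> int:
    """Parse an "HH:MM" string into total minutes (assumed AI suggestion)."""
    hours, minutes = value.split(":")
    if not 0 <= int(minutes) < 60:
        raise ValueError(f"invalid minutes in {value!r}")
    return int(hours) * 60 + int(minutes)


def test_minutes_from_hhmm() -> None:
    # Happy paths: confirm the suggestion does what the prompt asked for.
    assert minutes_from_hhmm("01:30") == 90
    assert minutes_from_hhmm("00:05") == 5

    # Edge case the assistant was never asked about: out-of-range minutes.
    # Writing this test is how the pilot validates the copilot's decision.
    try:
        minutes_from_hhmm("01:75")
        raise AssertionError("expected invalid minutes to be rejected")
    except ValueError:
        pass


if __name__ == "__main__":
    test_minutes_from_hhmm()
    print("all checks passed")
```

The code itself is trivial on purpose; the point is that the tests encode the developer's own definition of "correct," independent of whatever the assistant produced.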

One of AI's greatest strengths is its ability to elevate software skills at every level, from beginner to expert. However, it is the new developers who concern me the most. AI systems, like human developers, will inevitably make mistakes. But I believe the nature of AI errors will differ enough from typical human errors that our usual review instincts may not catch them. For new developers, this raises the question: how will they recognize these mistakes?

Veteran developers get a feel for this kind of thing; they have seen almost every variation of human mistake you can imagine. I simply wonder whether our AI-infused releases will create more variation in our downstream issues. This is where I think tools like Azure Code Optimizations come into play: they help us analyze the runtime behavior of our applications and compare that behavior against performance engineering best practices. Check out this video for more details.
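As a rough, hand-written illustration (the scenario is invented and this is not output from Azure Code Optimizations), the sketch below shows the kind of runtime behavior that is easy to miss in code review but shows up clearly once you measure how the application actually behaves.

```python
# Illustrative only: an invented scenario, not output from any specific tool.
# It shows a hotspot that looks harmless in review but dominates at runtime.
import time


def find_new_ids_slow(seen: list[int], incoming: list[int]) -> list[int]:
    # Membership tests against a list are linear scans, so this loop is O(n*m).
    return [i for i in incoming if i not in seen]


def find_new_ids_fast(seen: list[int], incoming: list[int]) -> list[int]:
    # Converting to a set makes each lookup roughly constant time.
    seen_set = set(seen)
    return [i for i in incoming if i not in seen_set]


if __name__ == "__main__":
    seen = list(range(20_000))
    incoming = list(range(15_000, 35_000))
    for fn in (find_new_ids_slow, find_new_ids_fast):
        start = time.perf_counter()
        result = fn(seen, incoming)
        print(f"{fn.__name__}: {len(result)} new ids in "
              f"{time.perf_counter() - start:.3f}s")
```

Both functions return the same result; only a runtime view reveals that one of them scales badly, which is exactly the sort of gap between "compiles and passes review" and "behaves well in production" that this kind of analysis is meant to close.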

Ultimately, our path forward requires a synthesis of AI and the practical knowledge gleaned from first-hand human experience. Focusing on the development stage alone will only kick the can down the road; we will need assistance at every step of the development life cycle.

[Image: four metal horses crafted in great detail, lined up abreast and trotting, aged with hues of brown and green.]

