Panel: Code Quality in the AI Age

Post-talk discussion, hosted by Matt at Codurance

Codurance intro: case study

Q) Is there a chance for this new AI age to work?

Q) What does code quality mean to you, and why is it important to keep these high standards?

  1. Code quality means the business runs with the right product, and the code is easy to maintain.
  2. Code quality means the code is easy to understand. Change is the only constant.
  3. Code quality is not about what it means for me or you, but about the context of a product and application. That has a lot of parameters, so it's "for the purpose that it serves".
  4. An element of correctness, but also over time: is it easy to evolve? There is an element of familiarity as well, in frameworks and paradigms of coding. Are we able to evolve it well in small increments and sustain that pace?

(3) Just to clarify: from working with the Amazon Bedrock team, a demo that will serve a purpose

Q) What are the needs for new skill sets?

What types of skill sets are necessary for engineers to develop, individually and as a team?

A) We need to get better at defining what we want.

A) A lot of what we do hinges on prompt engineering.

A) We're in challenging times

A) Have processes in place, institutionally, to seize the opportunity

Q) Will this cut beginner/junior programmers out of the learning process?

A) Programming language building blocks have evolved throughout history

A) Computer history has always been about abstractions

Q) Do you think that there will be more use of a general purpose tool like GitHub Copilot or individual companies using personalised tools?

A) It depends

A) Definitely, and it already happens

A) Precision is the problem

Q) Would it be possible to do that fact-checking for new code, as in TDD?

A) It's not deterministic
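
One way to make that fact-checking concrete (a minimal sketch; the panel did not describe an implementation): write the tests first, TDD-style, and treat any AI-generated implementation as untrusted until the suite passes. The generation itself may be non-deterministic, but the acceptance check is not. `generated_fizzbuzz` below is a hypothetical stand-in for model output.

```python
# Sketch: TDD-style fact-checking of AI-generated code.
# Human-written tests come first; a generated candidate is
# accepted only if it passes them.

def generated_fizzbuzz(n: int) -> str:
    # Pretend this body came back from an AI assistant (illustrative only).
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def run_fact_check(candidate) -> bool:
    """Run the human-written test cases against a candidate implementation."""
    cases = {1: "1", 3: "Fizz", 5: "Buzz", 7: "7", 15: "FizzBuzz"}
    return all(candidate(n) == expected for n, expected in cases.items())

print(run_fact_check(generated_fizzbuzz))  # True: this candidate passes
```

The check is deterministic even when generation is not: a regenerated candidate either passes the same fixed suite or is rejected.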

Q) Speaking of the different abstractions, will programming change visually, the way we write?

A) We wouldn't need a programming language if AI didn't

A) The IDE is also involved

A) You can't know the future

Q) In regulated environments, what would need to happen in terms of security and reliability for AI adoption?

A) Up until now we've been discussing in context of software development

A) Babylon Health was exactly that system

Q) What about safety critical applications: flight software, nuclear controls?

Q) Medicine already has a strong ethical framework

A) Babylon Health again, AI was playing a small part

Q) Safety critical: automating the processes of architecting the software, coming up with the right requirements

A) More context adds latency

Q) Are we paralysing ourselves as implementers?

A) The operating system didn't replace the operating system writers

Q) Paint a picture of what code quality with AI will look like in 5 years time?

A) Building blocks and text editors haven't evolved

A) History is important but only if you're going to learn from it

A) AI will get better, but not by an order of magnitude

A) What I'm most excited about is how it happens: the psychology of how we adopt it