- Codurance intro: case study
- Q) Is there a chance for this new AI age to work?
- Q) What does code quality mean to you, and why is it important to keep these high standards?
- Q) What are the needs for new skill sets?
- Q) Will this cut beginner/junior programmers out of the learning process?
- Q) Do you think that there will be more use of a general purpose tool like GitHub Copilot or individual companies using personalised tools?
- Q) Would it be possible to do that fact-checking for new code like in TDD?
- Q) Speaking of the different abstractions, will programming change visually, the way we write?
- Q) In regulated environments, what would need to happen in terms of security reliability for their adoption?
- Q) What about safety critical applications: flight software, nuclear controls?
- Q) Medicine already has a strong ethical framework
- Q) Safety critical: automating processes of architecting the software, coming up with right requirements
- Q) Are we paralysing ourselves as implementers?
- Q) Paint a picture of what code quality with AI will look like in 5 years time?
Codurance intro: case study
- Codurance were doing small incremental changes behind feature flags
- Not the fastest, but valued for consistency
- A few months ago, stakeholders told everyone they want more screens, pixel-perfect
- Hearing stories of 'heroes' generating tons of screens
- Turned out, especially in testing suite, they were AI generated
- Official guidelines were not to use code generation
- Native language comments by developers trying to understand what AI had generated for them
- The team were eventually let go... AI was not a gold-standard tool
- Hand-over period of 2-3 weeks, became clear there were a lot of things missing
- Backend and frontend were completely siloed, individuals not collaborating
- No communication with product owner, more on the junior side of the spectrum
- Handing work over to AI and expecting speed doesn't work: if teams aren't ready or mature enough, aren't collaborating, and don't have good practices, then something is not clicking
Q) Is there a chance for this new AI age to work?
- We always say we will lose jobs to AI, in this case they lost theirs from its use
Q) What does code quality mean to you, and why is it important to keep these high standards?
- Code quality means a business running with the right product, and easy to maintain
- Code quality means easy to understand. Change is the only constant.
- Code quality is not about what it is for me or you, but in context of a product and application. That has a lot of parameters, so it's "for the purpose that it serves"
- An element of correctness but also over time: is it easy to evolve? There is an element of familiarity as well. Frameworks, paradigms of coding. Are we able to evolve it well in small increments and sustain that pace.
- Just to clarify, from working with the Amazon Bedrock team: a demo that will serve a purpose
- Would we measure cyclomatic complexity?
- Ephemeral purpose, not going to live on.
- Certain standards of testing and automation.
- If that were to live longer that wouldn't be enough.
- Software is always an evolving thing. You have to always be determining quality.
Q) What are the needs for new skill sets?
What types of skill sets are necessary for engineers to develop, individually and as a team?
A) We need to get better at defining what we want.
- We need to be more methodical, not just experiment.
- We must define what we want to do well, not leave incomplete threads
- Must be precise and methodical, to instruct the machine.
A) A lot of what we do hinges on prompt engineering.
- Must tell the machine exactly what we want.
- Will that evolve over time and become more basic? Probably.
- Even driving skills may be something you don't necessarily need.
- The question is at what point this arrives
- We are not yet at the age where this is all solved
- The GenAI revolution has pretty much just started.
- ChatGPT captured the imagination of the consumer market, so now we're all in this together.
- We are in a transition state.
- Earlier story is indicative of our current stage.
- We are all trying to make this accelerate and achieve something we're happy with.
- The advancements have been incredible
A) We're in challenging times
- We have a technology that's promising, but it's clear it won't replace developers.
- For the foreseeable future we'll have a hybrid model.
- We need developers to understand code written by a machine.
- We need to emphasise very traditional software practices, with safety caution and guardrails
- Test automation, code reviews
- Never use AI for something I don't understand.
- It's so easy to get the perception that we're learning something.
- We might end up with a whole generation of developers who don't really know how to code.
- Interesting and dangerous time
A) Have processes to take the opportunity, institutionally
- In a management position, looking from a distance, I'm unsure AI is that different to the job as a whole
- Role of developers has always been to adapt to new ways of working.
- From an individual perspective, not changed that much.
- Less control: like with the move to the cloud, we moved to a place we know less about our infra
- From a team perspective there's a lot we can do
- Main thing we can do is have a framework to embrace that
Q) Will this cut beginner/junior programmers out of the learning process?
A) Programming language building blocks have evolved throughout history
- We were always working with coarser grained building blocks over time
- What is changing now is we're losing control of the building block
- The grouping of building blocks changed: 'generate this code', 'refactor it'
- Developer loses control over what goes on behind the scene
- These tools are doing things we don't understand
- A team can go from giving the impression of being extremely productive to being one of the worst
- The danger is things being generated we don't understand
A) Computer history has always been about abstractions
- Do you need to understand the full depth to use something?
- Maybe you can just take that abstraction and run with it
- We have to be careful not to be reactionary about 'the old days'
- Humans are all about culture and process, machines are not
Q) Do you think that there will be more use of a general purpose tool like GitHub Copilot or individual companies using personalised tools?
A) It depends
- Looking at TrainLine, we're already using AI from a lot of the tools we use
- We also have specific use cases not provided
- In our product itself we use AI for some parts
- We are doing it more as we wanted an AI capability and we built an AI lab
- It was a bit of a solution looking for a problem
- We built a product not close to the revenue/UX, so not a problem if it doesn't work
A) Definitely, and it already happens
- These tools already come with a customisation option
- Consumer market: ChatGPT, Copilot, Amazon, etc
- Whether it's RAG, whatever, customisation is out there
- General knowledge can be useful, much like in humans
- It goes back to precision
A) Precision is the problem
- Senior developers are benefitting more from AI than juniors as they understand what's being generated, and most importantly are able to verify
- In a craftsmanship world, Test Driven Development is dear to us
- We start from the definition of the problem
- With AI this becomes a bit harder
- Not only the techniques of writing the test, it's about being extremely precise about the behaviour of the code we wish to have
- Fact-checking is something very central
Q) Would it be possible to do that fact-checking for new code like in TDD?
A) It's not deterministic
- I've had to read more AI-generated refactorings while developing than I ever wished to
- Superficially looks good but fails the test
- The AI is creative in that sense: there's no end to how subtly it can break your code
- It can do subtle things like negating a condition
- AI is really good at messing up 'this' in JavaScript
Q) Speaking of the different abstractions, will programming change visually, the way we write?
A) We wouldn't need a programming language at all
- If we get to the point where a machine can generate the code, there's no need for a programming language
A) The IDE is also involved
- Just today I watched the ChatGPT-4o video, where they speak to it
- Maybe it's going to go in a conversational direction
- IDE is still a text editor
- Maybe we could work more modularly
- We still talk about AI at line/class level
- A lot of the applications we build are really mappings of the domain: separation of concerns, bounded contexts
- That human aspect of mapping code to the domain of the business
- We are still far far away from resolving that, still in small details
A) You can't know the future
- Quantum baby! New highly pure silicon just announced
- New super-pure silicon chip opens path to powerful quantum computers
- AI may become super basic once quantum computing comes in
Q) In regulated environments, what would need to happen in terms of security reliability for their adoption?
A) Up until now we've been discussing in context of software development
- We have a lot of clients in regulated industries like healthcare
- Patients have allergies, family history, etc.
- AI being able to prescribe treatments and drugs is scary
- Not only finance etc., how about using AI to do those things?
- The precision and testability is not only the testability of the code, but of the outcomes
- How do you validate prescriptions?
A) Babylon Health was exactly that system
- The system we built was more accurate than when you go see a doctor
- "70% of good diagnosis vs. 98%"...
- There's a psychological element here of being OK with a person's fallibility rather than a machine
Q) What about safety critical applications: flight software, nuclear controls?
- There is software provability
- Writing the code is such a minor aspect of the work
- ...that a single point of failure can never crash two trains.
- We use LLMs to integrate specs, a quick pass to
Q) Medicine already has a strong ethical framework
- No one is going to take the absurdly reductive view of Geoffrey Hinton on radiologists
- The failure modes are the problem (e.g. brain glioma scan mistaken for prostate cancer being OK'd)
- In the developing world, where the dilemma is using AI vs. not seeing anyone at all, it's a different question
A) Babylon Health again, AI was playing a small part
- Rwanda lacked regulation, allowed to innovate a lot more
- Medicine is such an interesting space because the cost of humans is huge (e.g. doctors)
- Especially in sparsely populated areas like Rwanda, having a primary care system is impossible
- It's way beyond the means; having an AI system that people can call has massive value
- It's a lot easier to drive adoption in those countries
Q) Safety critical: automating processes of architecting the software, coming up with right requirements
- Then you are creating data that can help generate code automatically
- You are creating a good context
- Have you thought about providing extra context to improve the performance of the tool?
A) More context adds latency
- It's also about making decisions based on code, we are adapting to it not in opposition to it
- Pair programming etc., these questions have always existed
Q) Are we paralysing ourselves as implementers?
- No, we can build in ethical safeguards etc.
- You can always use it as assistant or delegator
- Trust but verify
A) The operating system didn't replace the operating system writers
- We didn't get the internet right first time, so why would we think we'll get AI right?
Q) Paint a picture of what code quality with AI will look like in 5 years time?
A) Building blocks and text editors haven't evolved
- I like the creative process of defining code
- I would expect coarser building blocks, working with different types
- Also a way to validate all of that
- Better technical practice
A) History is important but only if you're going to learn from it
- In 5 years time AI will be a commodity, normality
- It's a new world with AI in it
A) AI will be better, but not by an order of magnitude
- We will learn how to use it
- Every technical revolution has brought need for more programmers
- The only thing we do is take on larger problems
A) What I'm most excited about is how it happens: the psychology of how we adopt it
- Technology will have evolved, take that for granted
- The change is not just engineering, but us as a society, what do we need to change?