Getting Better, Not Better Than
Jan 19, 2026
This article looks at why debates about experience versus qualifications are so common — and why the real risk sits in how competence is developed and supported, not in the route someone took into surveying.
What happened
I recently read a post asking what makes a better surveyor.
It’s a fair question, and one that comes up often in a profession that cares about standards and competence. But the discussion quickly fell into familiar territory: experience versus qualifications, practical routes versus academic ones, with people defending the path they took into surveying.
Both matter. What concerned me wasn’t the difference of opinion, but how easily the conversation shifted away from how competence is built, supported, and tested once someone is in practice.
Why it matters
To do the job well, a surveyor needs more than a route in. They need technical skill to understand what they are seeing; professional autonomy to apply judgment rather than follow scripts; and the confidence to act in the moment — often when information is incomplete, and the consequences matter.
Competence in practice is rarely a single thing. It rests on the combination of experience, knowledge, skills, and behaviours — and on how those elements are developed, tested, and supported over time.
In principle, most of us would agree with that. In practice, this is where risk starts to surface.
Technical skill can exist without autonomy. Autonomy can exist without confidence. Experience can accumulate without reflection. Qualifications can be held without readiness.
When debates focus on who is “better”, attention shifts away from how competence is actually built and sustained in real work. And when that happens, the risk doesn’t sit with the debate. It sits with individual practitioners — and with the people relying on their judgment.
One place this difference shows up clearly is in how surveyors are mentored and in the feedback they receive on report writing. Within the networks I’m part of, trainees describe very different experiences. Some feel they are flying. Others feel held back. Some mentors focus feedback on compliance. Headings are corrected, standard paragraphs added, and wording tightened to better match templates. Reports improve visually and become more consistent.
Others focus on reasoning. Why a particular level of concern was chosen. What uncertainty existed on site. What the client actually needs to understand — and what can safely be left unsaid.
Both approaches can be useful. But they develop different things.
One develops conformity. The other develops judgment.
Over time, that difference matters — especially when evidence is incomplete, templates don’t quite fit, or risk isn’t obvious. That’s where competence is really tested: not in how tidy a report looks, but in how well its conclusions can be explained, defended, and stood by.
What I’d ask instead
The question that stays with me is this:
How do we create and seek out opportunities for people to develop the skills, judgment, and confidence this work demands — rather than arguing about whose route was better?
That question doesn’t lend itself to quick answers.
It asks us to look beyond principle and into practice. Beyond entry routes and towards supervision, exposure, reflection, and professional support. It asks where responsibility sits when things go wrong — and whether that’s properly understood.
Surveying, actually, doesn’t get stronger through comparison.
It gets stronger when we invest in development and allow people to grow into the responsibilities the role entails.
Marion
If you found this useful, you’re welcome to share it with others who may benefit.
Surveying, Actually articles are shared by email when new posts are published. You can subscribe to Surveying, Actually here.