codeX is an agile-first vocational coding programme in South Africa, with a mission to increase the talent pool that the software industry can draw from.
We focus on reaching groups who are under-represented in tech, provide them with a solid agile web developer skill set, then connect them with jobs where they can flourish. Diversity is what we live and breathe.
A Social Enterprise now in our fourth year of operations, we have shaped our programme year on year, based on the needs and insights of all our customers (applicants, mentors, coaches and employers).
This paper introduces our learnings about the fundamental indicators of developer talent, hiring and diversity, and how we are working with organizations to create environments that support team members from different socio-economic backgrounds.
Beyond gender and ethnicity filters, one of the greatest barriers to diversity lies in the infrastructure and environment differences of unequal societies, and particularly in the hidden biases these create in recruitment processes and performance measurement – biases which ultimately hold those inequalities in place.
This set of stories draws from codeX’s experience in creating demographic change in the software industry, by changing the way we approach education.
In this paper I focus on the new indicators we developed for finding high-potential candidates, which go well beyond academic filtering, and on how we are helping companies review the profiling-style filters their HR departments are lumbered with, so they can find homes for developers who bring both agile thinking and diversity of perspective to their teams.
codeX tackles an interconnected set of social challenges affecting the South African software industry. Easily identifiable are the skills shortage and the absence of diversity in the industry – a global problem that remains largely unsolved. Less visible, but just as powerful, is the role that poverty, poor infrastructure, lasting structures of segregation, and the challenges produced by failing education facilities play in maintaining that absence of diversity. In these circumstances, access to skills training and experience appropriate to software teams is all but unattainable.
codeX set out to make a dent in this complex set of problems, by making full stack development training accessible to people who are new to tech.
We developed an experiential learning programme that is consciously Agile First, with an appreciation that ‘wicked’ problems are subject to the laws of complex adaptive systems, so change will be slow and successful practices emergent.
codeX started as a three-month pilot bootcamp in 2014, subsequently fleshed out into a one-year programme better suited to developing computational thinking in those new to tech, with a focus on inclusive education that caters for candidates living in conditions of economic scarcity.
Being agilists ourselves, we believe in safe-to-fail, accelerated learning. Starting small, our pilot had 8 participants, and for the next year we took on new participants each term, the year programme running on a rolling start and holding between 16 and 32 people. In 2016 we consolidated to a single-intake one-year programme, with 40 candidates in 2017 and 2018.
We have mentors rather than lecturers, and the curriculum is self-paced and rooted in agile practices – personal kanban, daily standups, weekly sprint reviews and retrospectives.
Our graduates are placed with software teams in a range of different organization sizes and industries, leveraging a network of people who share our vision to transform education & diversity in South Africa.
Currently run by a team of 5 based in Cape Town, we are carefully developing the packaging that will allow us to scale the programme to the rest of South Africa and beyond.
Figure 1 codeX in Action: Reviews, Retros & Standups
I am an agile coach and CEO at codeX. I have a deep interest in the practices, principles and underlying theories that facilitate excellent team practices and products.
I joined codeX as COO and Agile Coach in July 2014 with a love of Deliberate Discovery (a mindset championed by Dan North that emphasizes the continuous learning aspect of software development); the knowledge that technical agile practices are as key to the success of an agile adoption as the collaboration practices; and the idea that training developers with all of these from the start of their learning journey would remove much of the ‘unlearning & relearning’ pain of taking on new hires. This was the starting point for developing the curriculum.
I joined the software industry after studying fine art and metalwork. I’ve long felt the software world of creative craftsmanship – where experiential learning & continuous improvement practices allow the self-taught to thrive alongside the academically educated – could offer a route out of poverty. If only it could be accessible.
I discovered James Zull’s writing on the neuroscience of learning while researching the neuroscience of facilitation. The more I learnt, the more agile learning practices seemed to offer the structure needed to create a new approach to education, and in doing so, overcome the barriers to diversity that the industry struggles to address.
Along my codeX journey I have had deep insight into the challenges that candidates from underserved communities live with every day – challenges that are simply unrelatable to people coming from ‘ordinary’ suburban areas, a comparatively very privileged background.
The journey of finding, training and placing candidates from this demographic has highlighted the many invisible barriers they face – and led to a focus on how to design for conditions of scarcity.
3. Our Story
There are many stories to tell about codeX, from many perspectives.
This paper focuses on how we dealt with two metrics-based filters that effectively block different talent from entering the industry: first, the filter for who finds their way into software training, and second, the filter for who is able to make the transition into a long-term career.
3.1 CHALLENGE 1: FINDING HIGH POTENTIAL CANDIDATES
The “particular mix” that makes a good software developer seems to be a bit of a mystery, a kind of ‘gift’.
There is of course a strong correlation with good school mathematics and science results – and unfortunately, this is often where education institutions stop looking. This has skewed the industry demographics to be dominated by those who have been encouraged to study these subjects and have had access to good teaching. This has produced an indirect filter for affluence.
3.1.1 The Problem: Correlation Is Not Causation
The primary filter for getting into tertiary software education is the math and science grades from the Senior Certificate (the exam after twelve years of schooling). Many applicants from poorer backgrounds have simply not had qualified teaching in math and science – through circumstances they have no control over. Their critical thinking may be less exercised than that of affluent school graduates, yet their capability is firmly there – and going to waste. So their futures are effectively decided by a Senior Certificate result which is treated as an indicator of personal ability, rather than a reflection on the education system that produced it.
Our research showed that learning to code actually improves math results, so while computational thinking is related to math ability, it is not predicated on it. Multilingual ability is also a positive indicator. We were sure there were more.
Focusing on ability over exam results, we decided to build a programme that would accommodate both under-served and affluent groups, to learn side by side. Instead of specifying academic results for our applicants, we accepted a broad range of applicants and allowed our metrics to emerge.
3.1.2 What we did next: An Agile Curriculum As A Bridge
Our approach is inspired by the work of Carol Dweck on the Growth Mindset – influential research showing our brains and talents can be developed through well designed learning strategies, rather than being fixed traits.
With the Growth Mindset at its heart, we designed the programme using techniques already in use by agilists, shaped to a large extent by Deliberate Discovery principles.
We draw distinctions between different types of not-knowing based on the Five Orders of Ignorance (Phillip Armour’s model, referenced in Deliberate Discovery), and develop strategies for overcoming each level.
The curriculum and learning journey draw elements from Sharon Bowman’s Training from the Back of the Room (fun and interactive training strategies based on neuroscience that aid engagement and retention of knowledge), scrum, kanban, facilitation, design thinking, complexity thinking, systems thinking, and the neuroscience of learning.
We developed a custom full stack web development curriculum with the learning journey embedded in iterative projects based on narratives rooted in the candidates’ daily lives.
Each project builds on the previous learning, starting with data concepts such as train number classifications (first two numbers indicate route, second two numbers indicate inbound / outbound direction).
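As a minimal sketch of the kind of data-concept exercise described above (the digit scheme, even/odd direction convention and function name here are illustrative assumptions, not the actual codeX curriculum), the train-number idea might look like this in JavaScript:

```javascript
// Illustrative sketch of the train-number exercise: in a four-digit
// train number, the first two digits indicate the route and the second
// two indicate direction. The even = inbound / odd = outbound convention
// below is an assumption for illustration only.
function parseTrainNumber(trainNumber) {
  const route = trainNumber.slice(0, 2);
  const directionCode = Number(trainNumber.slice(2, 4));
  const direction = directionCode % 2 === 0 ? "inbound" : "outbound";
  return { route, direction };
}

console.log(parseTrainNumber("0104")); // { route: '01', direction: 'inbound' }
```

Exercises like this let coders practise decomposing familiar, everyday data into structured relationships before the same thinking is transferred to unfamiliar business domains.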
Over time, concepts from community problem solving to business-focused web components form a “bridge” for computational thinking to develop, starting with clear relationships for known concepts, and transferring to new business domains as the coders’ proficiency develops.
We do this with high levels of feedback, and no grades.
Emerging Metrics: Measuring Learning Over Knowledge
When we started, we knew we couldn’t rely on exam results, and we didn’t want to create just another numerical rating.
We began simply by looking for problem solving strategies, and then realized more was needed – specifically the ability to explore relationship thinking, and the ability to learn syntax.
During our second year of operations, we developed full-time bootcamps, running initially for four weeks and eventually two, as we put the tooling in place to evaluate candidates’ progress quickly.
These soon became the de facto entrance criteria to be accepted into the one year programme.
Rather than a numeric rating, our feedback started out as a mentor discussion of each candidate, with written observations emailed to each candidate, and a Red / Amber / Green (RAG) rating on their progress.
We started reviewing how our applicants handle uncertainty, whether they can see relationships between concepts, what depth of problem solving strategies they apply, as well as how they connect existing skills to new challenges, whether their pace is appropriate, how well they collaborate, and how they respond to feedback.
3.1.3 Results: An Image Of Excellence
The written feedback was initially very time consuming, with 8 reviews taking up to a full day.
Sending out weekly feedback forced us to find a faster method to provide the feedback, yet for our own success we could not sacrifice quality.
Over time, we started to identify patterns of behaviour, which slowly evolved into a set of indicators distinguishing desired/sufficient effort; insufficient effort; and forced approaches that hamper progress, typically driven by frustration or fear.
We combined these into a grid format, in order of Insufficient | Desired/Sufficient | Excessive, maintaining the RAG status:
Green (reflecting sufficient / desired indicators) in the centre column; and Orange (areas of concern) or Red (where significant improvement is required) in either of the outer columns, depending on the level of concern.
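A minimal sketch of how one row of such a grid might be represented (the indicator, wording and RAG assignments below are hypothetical examples, not codeX’s actual feedback data):

```javascript
// Hypothetical example of one feedback-grid row, ordered
// Insufficient | Desired/Sufficient | Excessive. Green sits in the centre
// column; the outer columns carry Orange or Red depending on the level
// of concern.
const askingForHelp = {
  insufficient: { observation: "Struggles alone for days without asking", rag: "Red" },
  sufficient: { observation: "Asks after a timeboxed attempt of their own", rag: "Green" },
  excessive: { observation: "Asks before attempting the problem", rag: "Orange" },
};

// Map a mentor's observed column to the RAG status for this indicator.
function ragFor(indicator, column) {
  return indicator[column].rag;
}

console.log(ragFor(askingForHelp, "sufficient")); // Green
```

The point of the structure is that desirable behaviour sits visibly between two failure modes, so coders can see not just that something is wrong, but in which direction to adjust.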
3.1.4 What We Learned
This format of providing feedback on applicants’ learning approach, rather than their skills knowledge at any one time, has an immediate impact: the coders are able to course-correct effectively, and engage with mentors on their individual strengths.
A second, deeper relevance emerged over time: by providing a metric showing desirable and undesirable indicators, ranged as “too little | just right | too much” we had created a clear image of strong problem-solving abilities to work towards.
With each candidate able to orient themselves in their behaviours, we immediately saw far less over-correcting, and had incidentally started to build a positive image of a fully rounded developer.
Figure 2 Feedback building a positive image of a fully rounded developer
3.2 CHALLENGE 2: PLACING GRADUATES FROM OUR PROGRAMME
Our first major placement successes and challenges began at the end of 2015, as we saw the affluent software development world come face to face with a different kind of talent, and the challenges our graduates face on a day to day basis.
It was also our first encounter with metric-based exclusion in the form of supposedly “universal” logic assessments and personality profiling tools.
3.2.1 The Problem: Placement Filters
When we first started placing graduates, our curriculum was in its infancy. Drawing from our own agile experience of “customer collaboration over contract negotiation” we worked with trusted employers as partners who could provide objective feedback and appreciate and nurture potential.
We quickly incorporated learning from both the positive and the negative experiences, focused on finding ways to build better bridges. Their input has been invaluable.
Over time we have learnt that our most successful placements have been with organizations that evaluate our grads’ technical skills and place them in supportive agile teams where they are able to contribute early.
However, many companies still insist on online profiling tools to help them “filter” the applications for “business fit” – and this is now the biggest area in which we are working to make change.
When employers invite our coders to complete their assessments, the feedback we typically receive is that candidates perform well in ‘attention to detail’ categories but poorly in numerical ability and personality profiling.
Beyond applying supposedly universal filters to individual traits and abilities, these tools interpret results relative to scores for an “industry norm” to identify cultural fit – a further factor institutionalizing stereotypes and the absence of diversity.
For one company the stats were:
- 28 candidates completed the online assessments
- 24 candidates did not meet the minimum requirements
- 4 candidates were interviewed
- 2 candidates were offered a position
- and only one accepted it.
That Must Be Right Then?
These filters are in place to protect employers – a poor performer can pose many challenges to their teams. Yet candidates who didn’t meet the minimum requirements above are employed elsewhere to high praise.
One of these grads built a Scala API as their first project. Other codeX grads are working on Oracle JET business components, financial systems and fin-tech platforms; e-learning, and in a variety of digital agencies.
We received the following feedback specifically referring to three of the 24 apparent “under-performers”:
The codeX graduates are a delight. Two things in particular have impressed me; their confidence and their initiative. Their confidence has allowed them to very quickly feel a part of the team and to be able to contribute right away. As for their initiative I am impressed how they are able to use it particularly for self-directed learning which has allowed them to tackle unfamiliar problems systematically and with very little input from their mentors. Carlo Kruger – Wealth Migrate
Something else must be going on.
3.2.2 What We Did: Getting Insight Into The Data
Another company’s assessments consistently filtered out our strongest candidates. “Circle Company” interviewed a few coders early in their learning journey, with the (by now) standard results, and decided not to hire. Fortunately we had a passionate supporter elsewhere in the organization, with a deep need for talent, who worked with us to find alternative ways to hire our graduates. Using low-risk internal projects as a bridge to learn their tech stack and business domain, they took on grads as interns.
The new supporters re-interviewed some of the candidates who had been dismissed in the earlier round, this time based on their GitHub profile and just the psychometric profile – which Circle Company believed should be universal.
It’s worthwhile to note that most of the profiling software in use was developed in Australia, a country with much lower ethnic diversity and poverty rates than South Africa.
Before the interviews, our supporters had already formed expectations based on the assessment results, which indicated: risk-averse, poor communicators, a preference for non-social interaction, and weak logical thinking.
Having met and worked with them, these same people could see that the graduates are: success-oriented, with a positive approach to learning; confident, open collaborators; and strong technical thinkers with far more developed skills than anticipated.
Mad, Sad, Glad: Those Profiling Assessments
I was both mad and sad when I received the assessment feedback.
At what Risk?
The indication of ‘risk-averse’ candidates comes from a rating that shows a preference for predictable environments and contained structures – which is deemed to be a weakness. Figure 3 shows a publicly available example similar to the actual report.
This fundamentally indicates how poorly the assessments cater for the context of non-affluent dwellers.
It’s not uncommon for our candidates to be let down by transport, electricity and water supply, and even to witness stabbings and shootings on their daily commute. To persist despite personal safety concerns, and without the reliable basic infrastructure that affluent dwellers take for granted, cannot be called an inability to deal with uncertainty.
Unfortunately, a stated preference for a much less risk-filled life feeds directly into employers’ concerns about poor communicators who wait to be told what to do.
The communication and conscientiousness indicators suffer from similar biases.
Figure 3 Psychometric Profiling Sample: a low sensation-seeking score interpreted as an inability to handle uncertainty
I do not think it means what you think it means: Conscientiousness in circumstances of Scarcity
Most enlightening has been insight into some of the questions themselves, which are steeped in ‘invisible’ cultural norms from an affluent context.
One example is a comfort rating for being late to an important social engagement – lateness being taken as an indicator of poor timekeeping and even anti-social tendencies.
However, situations of scarcity require a far greater amount of trade-off thinking than situations of affluence. In this case, the trade-offs may include any of: caretaking and other responsibilities for family and friends; the high cost of transport; long journeys (a 15-minute drive can make for a 2-hour sojourn via public transport) and unsafe passage, especially over weekends and after dark – all of which can be further complicated by unexpected outages.
Another bias is the issue of self-reported vs observed skills.
Questions that are designed to surface skills and talents which employers may be looking for, can only provide the desired results if the candidates already recognize they have those skills – which means having previously received recognition for them, to the point that they see them as traits they own.
One candidate’s approach was consistently experimental, quickly finding solutions to curriculum challenges that were often well beyond our expectations – yet they scored low on creativity and initiative.
Aside from cultural bias in the questions relating to artistic endeavours, the candidate simply sees themself as capable and wanting to produce excellent work. They had never framed this approach as remarkable, and hadn’t been exposed to situations that explicitly rewarded them for it. The organization would have missed a great thinker through this bias.
One of the barriers that keep psychometric profiling firmly in place is a moratorium on disclosing the profiling results – in some cases even the applicants are required to waive their rights to the results.
Where they are shared, highly constructed feedback sessions can only be delivered by trained personnel. Our experience of the language around feedback has been corrective rather than discursive, with a somewhat laboured focus on “areas needing development” and how these will hinder candidates’ careers.
Where results seem incongruent to the applicant, the responding narrative is either that the results are being misinterpreted (only trained personnel can truly understand them), or that the applicant is not open to feedback and self-development.
Where we were fortunate enough to be invited to discuss the feedback (with permission where required), the sessions were designed to enlighten us on the candidate profiles, and, where hired, how they would be tracked.
Over time we have been able to point them to the safety and scarcity factors that hadn’t been considered, and the mismatches quickly became clear.
While there remains some defense of the metrics, within weeks of Circle Company’s internship starting, it became obvious to all that the personalities of the candidates were a far cry from the assessment results.
3.2.3 The Results
These conversations hit me hard – they are truly maddening and saddening. Yet I’m glad too, to find myself in a position to be able to bring these hidden biases to light, be able to challenge them, and start creating healthy interviewing and onboarding processes.
When we started this journey with Circle Company they wanted to monitor the change in results over an 18 month period to measure their interns’ “development”. After a couple of months, they were willing to question the accuracy of the data, too. After four months, the interns were effectively working as an independent team, having established strong communication with two senior managers as well as peers. They are also evaluated in a similar fashion to established and performing teams within the organization – a direct result of building a nurturing relationship and a clear learning path designed to match their skills to the organization.
3.2.4 What We Learned
There is plenty of contention around the validity of psychometric profiling – an indication of how little quantitative value it offers. The rationale for continued use is that organizations need some measure to filter applicants for a good fit.
In this case, it seems clear that profiling is the wrong yardstick to use. And while recruiters on all sides acknowledge the inefficiencies, it has been institutionalized, and not scrutinized. Consistent deviations in results are more often than not dismissed as ‘cultural difference’ rather than indicators of different day-to-day circumstances, and alternate factors in the trade-off.
Unfortunately to date, HR staff have had no other easy numerical source they believe they can trust. Where we haven’t built relationships with interviewers, those doors remain closed to our coders.
But through conversation and direct engagement with hiring staff, they come to recognize the impact the filters are having, and are increasingly open to alternative approaches.
82% of our graduates find work, and of these 70 – 75% stay in the industry.
Feedback from our graduates:
“I am a junior software developer at [a financial company] and I found this job through codeX immediately after my graduation. All the skills I have learned at codeX have opened up many doors for me, especially in a growing industry, and I can take on any challenge thrown at me!” – Nurha, Junior Developer
“codeX gives you a hook and teaches you how to fish” – Rendani, Junior Developer
With a stable and predictable curriculum, and a growing reputation for excellence, we are extending our placement network in the next steps to scale operations. The more corporate employers we encounter, the more we uncover hidden bias in the assumptions and filters built into profiling tools.
Having built trusted relationships, we are increasingly able to navigate these biases openly, building healthier and more appropriate meeting points for employers and graduates, and innovative ways to shape the early careers of “diverse” graduates.
4. What We Learned About Metrics and Talent
The metrics used to measure talent are not only questionable – they are harmful. Both for the careers of high potential youth, and for any hope for true diversity and inclusion in the industry.
We have not found easy, machine-based indicators for either identifying or placing talent – in both cases human interaction and observation are required, along with assessments that embrace that people change over time and behave differently in new situations.
For identifying talent, we have developed a measurement of learning rather than knowledge, that we have found to be far more meaningful than number ratings.
For placing talent, we do not believe that anonymous, “standardized” testing will ever be the right way to assess individual skill and organizational fit. At a very minimum, in order for profiling tools to have any hope of being inclusive, it’s essential that these are extended to incorporate behavioural science studies of scarcity, and subjected to rigorous, industry-wide scrutiny for bias.
But ultimately only collaboration on projects will ever indicate how well a team member truly performs, and shifting onboarding to a more hands-on approach, while creating time for low-hanging-fruit internal projects has proved successful for this.
The specific practices we have evolved are shaped to meet the constraints of the South African industry. And we also believe the behaviours that shaped them and the principles they are built on are present throughout the software industry.
High among my most valued discoveries has been observing how people who are treated with respect (often for the first time) and supported rather than judged as they go through their learning journey, respond to the learning environment.
It is more powerful than I had ever expected. Key learning, as evidenced by Carlo Kruger’s quote about codeX graduates, is that where comparative metrics serve to alienate and isolate, openness and generosity of spirit can create excellence and give it space to flourish.
The more we share these stories, the faster we can build an inclusive community that truly does bridge the digital divide.
My deep thanks go to André Vermeulen, the codeX CTO and my partner in every step of building codeX; Directors Michael Jordaan & Dave Weber whose dedication and business wisdom make this journey possible; and our network of organizations, startups & managers that make up codeX’s unique social development network, each adding their own distinctive value.
And a special thanks to Sue Burk, whose true engagement and ability to identify nuggets of value within seemingly ordinary data, has made writing this paper a pleasure – shining a new light into hidden corners that makes this story both clearer and much richer for it.
Carol Dweck – Growth Mindset https://hbr.org/2016/01/what-having-a-growth-mindset-actually-means
Sharon Bowman – Training from the Back of the Room http://bowperson.com/training-from-the-back-of-the-room/
Sendhil Mullainathan & Eldar Shafir – Scarcity: https://scholar.harvard.edu/sendhil/scarcity https://www.goodreads.com/book/show/17286670-scarcity
James Zull – From Brain to Mind: Using Neuroscience to Guide Change in Education: https://muse.jhu.edu/article/502343/summary https://sharpbrains.com/blog/2006/10/12/an-ape-can-do-this-can-we-not/
Dan North – Deliberate Discovery: https://dannorth.net/2010/08/30/introducing-deliberate-discovery/
Education Theories – Cooperative, Constructivist & Connectivist: http://thinkspace.csu.edu.au/gdyer/2014/06/01/the-big-three-constructivism-constructionism-and-connectivism/
Psychometric Profiling – https://www.recruitingblogs.com/profiles/blogs/what-no-one-tells-you-about-psychometric-testing