Leadership vs Management


Or is it Leadership *and* Management?

 

[Image: Tom Geraghty speaking at a CIO event in London, 2019]

I created this graphic in 2019 as part of a presentation on High Performing Teams for the IT Leaders Conference.

[Image: the management and leadership graphic]

Inspired by Grace Hopper’s “You manage things, you lead people” quote, I wanted to make the point that great leadership also requires great management skills. You can be a great manager of things without leadership skills, but you can’t be a great leader without good management skills. Without those management skills, you may be able to lead people, but your lack of direction, effectiveness, and capability could lead to failure.

[Image: "You manage things, you lead people" quote by Grace Hopper]

Sometimes management and leadership are presented as a binary, or worse, "management" is framed as bad and "leadership" as good. Neither is true: we should resist "leaderism" and instead concentrate on the actual capabilities and skills required to manage things and lead people. Both can be learned, taught, and always improved. We dive into this much deeper over at psychsafety.com, where we examine the capabilities and skills required for both excellent management and leadership.


(Since 2019, this graphic has gone a bit viral on LinkedIn, Chegg, Twitter and elsewhere!)

The fabulous Elita Silva translated the management and leadership graphic into Portuguese!

[Image: the management and leadership graphic in Portuguese]

 

And the fabulous Ana Aneiros Vivas has translated it into Spanish!

[Image: the management and leadership graphic in Spanish]

Filippo Poletti translated it into Italian!

[Image: the management and leadership graphic in Italian]

 

And the folk at Solutions and Performances – Executive Search have translated it into French!

Critique of Personality Profiling (Myers-Briggs, DISC, Predictive Index, Tilt, etc)

I find that some of my ideas take a few weeks, months or even years to form. This one took almost exactly a year before coalescing (coagulating?) in my mind. I’ve been thinking about personality tests in the context of efficacy, equity and neurodiversity recently, and it troubles me.

I’ve always found personality testing problematic – indeed, I find any pseudo-Jungian approach to sorting people into type categories highly distasteful and potentially harmful.

Critical literacy is sorely lacking in the business and management world, possibly because it’s not rewarded: we reward confidence, sticking by decisions, bullishness, and simple answers to complex problems.

With respect to diversity, inclusion, and equity, I just can’t square the desire to categorise people and their personalities with the very real need for inclusion and diversity of ways of thinking. It seems simply antithetical.

To summarise the flaws in personality testing:

  • There is very little evidential basis behind personality profiling, and significant evidence against it.
  • The models are usually based on false dichotomies, such as “big picture vs detail-oriented”, for which there is no evidence.
  • The models are also based on WEIRD (Western, Educated, Industrialised, Rich, and Democratic) societies, and fail to recognise collectivist, holistic strengths.
  • They rarely address context and inter-relational behaviours, but instead make assumptions about behaviour from individualistic measures.
  • They tend to assume that our personalities are largely fixed and unchangeable.
  • These tools can lead to false and potentially harmful assumptions made about other people and the way they behave.
  • The tools may be used for unethical (and illegal) practices such as recruitment, selection for promotion, or other decisions made about someone without their consent.
  • In my experience, they are one of the most highly weaponised management tools ever created.
  • Because they lead people to believe that they can understand someone based upon a profile, they can prevent further discussion, examination, and effort to understand people and their ever-evolving uniqueness.
  • The algorithms used are rarely open. Algorithms inherit the biases of those people that created them, and if we are making ourselves subject to analysis by algorithm, I want to know what it’s doing and who designed it.
  • Many tests are biased (see above) – for example, the Big Five was shown to be biased against women, categorising them as more aggressive than men who answered identically, because the original data model was flawed.
  • To avoid a critique of poor reliability, we’re often told to avoid doing the tests more than once.
  • When assigned a profile, we are generally not allowed to dispute it. Even though we have each spent decades in our own minds, a five-minute test is assumed to know more about us than we do.

Even scientists who are most concerned with assessing individual differences in personality would concede that our ability to predict how particular people will respond in particular situations is very limited.

Personality, strength, or psychometric models such as Myers-Briggs, DISC, Belbin, Predictive Index, Tilt, and the myriad others available attempt to codify people and their preferences, personalities, behaviours, and values into archetypes, using fixed (usually proprietary and opaque) algorithms. There is usually a commercial reason that these tests are closed-source – companies don’t want someone copying and redistributing the code – but it also prevents detailed analysis and evaluation of the algorithm.

 

Repeatability and validity

These archetypes (such as “Maverick” or “Inventor”) are then categorised and collated into larger group types, and in many organisations used to inform everything from role selection and management approach to hiring decisions (which in many cases is illegal).

In 20 years of management, I have never seen a psychometric analysis tool generate a constructive outcome, particularly from a diversity, equity, and inclusion (DEI) perspective. I also find it interesting that this kind of commercial personality testing thrives in the business world while carrying little weight in academic psychology. Do business managers actually think they know something psychologists don’t?

In my opinion (somewhat backed up by many years of experience and study), categorising people and attempting to simplify the complexities of our nature, in an attempt to make other people and ourselves more predictable, is certainly a seductive proposition. But it is error-prone, and dangerous. Adam Grant, organisational psychologist at Wharton, agrees.

 

Psychometric analyses don’t work. Indeed, they are often damaging.

The reason they will never work is because they try to map a complicated framework onto a complex problem. You may be familiar with Carl Jung, and his “12 Archetypes” of “Ruler, Sage, Explorer, etc”, which are frequently criticised as mystical or metaphysical essentialism. Since archetypes are defined so vaguely and since archetypal images have been observed by many Jungians in a wide and essentially infinite variety of everyday phenomena, they are neither generalisable nor specific in a way that may be researched or demarcated with any kind of rigour. Hence they elude systematic study, which is true of many other domains of knowledge that seek to reduce complex problems and systems to simple, archetypal models and solutions.

As Cynefin shows us, complicated systems can be really big, and appear complex, but the laws of cause and effect don’t change. When you press the A/C button in your modern car (which is “complicated”), the A/C comes on, and the same thing happens every subsequent time you do it. This is rather obviously not the case with people.

In a complex system such as a human, asking a teammate to help you out with a task one day results in them helping you, but on another day they might tell you to stick it; maybe they’re hungover, stressed, and busy, maybe they’re tired, or maybe they just don’t feel like helping. Cause and effect change in complex systems, and humans are complex. Really complex. Which is why “the soft stuff is the hard stuff“.

Complicated systems can seem messy, but an action results in the same result each time. People are not like that. They are complex, and groups of people even more so. Cause and effect changes constantly – pressing the equivalent of that A/C button on a complex human has one effect today and a different effect tomorrow.
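The distinction can be sketched in code. This is a deliberately silly toy (the function names and "context" flags are invented for illustration, not a model of real people): a complicated system is a pure function of its inputs, while a complex system's response depends on hidden, shifting context.

```python
def complicated_system(button_pressed: bool) -> str:
    """A complicated system: the same input produces the same output, every time."""
    return "A/C on" if button_pressed else "A/C off"

def complex_system(request: str, context: dict) -> str:
    """A 'complex' system: the response depends on hidden, changing state.

    The same request can get a different answer tomorrow.
    """
    if context.get("stressed") or context.get("hungover"):
        return "stick it"
    return "happy to help"

# The complicated system is perfectly repeatable:
assert complicated_system(True) == complicated_system(True)

# The complex system gives different answers depending on invisible context:
print(complex_system("help me out?", {"stressed": False}))  # happy to help
print(complex_system("help me out?", {"stressed": True}))   # stick it
```

The point of the sketch: a profiling tool assumes people behave like `complicated_system`, when everything we observe about teams says they behave like `complex_system`, with most of the context invisible to us.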

And that is why personality, psychometric, “strength” tests etc will never work in the way people desire them to. People don’t fit into boxes, and neither should we try to.

 

All models are wrong. Some are useful.

The problem comes when you apply a model to a complex problem on the assumption that it’s right.

“It ain’t what you don’t know that hurts you, it’s what you know for sure that just ain’t so.”

And the people selling these systems either know this, in which case they’re selling snake oil, or they’re simply being optimistically gullible, looking for simple answers to complex problems. To be fair, we humans are almost infinitely susceptible to the seductive simplicity of personality archetypes, even more so when they’re about us. This is known as the Barnum effect: it’s possible to give everyone the same description, and people will nevertheless rate it as very accurate.

 

[Image: The Barnum Effect, by Sketchplanations – https://sketchplanations.com/the-barnum-effect]

Flawed evidence of personality test reliability

MBTI fails on both validity and reliability tests, as do most other personality and psychometric tools. Proponents (usually people selling them) are keen to point out reliability measures showing, with a degree of error, that the same person taking the same test at a different time often obtains a similar result. This only serves to highlight the problem, however. Just as I would tell you my favourite colour is yellow if you asked me today, and would usually give the same answer a month later, it doesn’t follow that my favourite colour has anything to do with my personality, nor that my personality is stable over time. Equally, I may be lying. My favourite colour is actually blue.
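The reliability-vs-validity point can be shown with a minimal simulation (a toy with made-up numbers, not real test data): if answers are driven by any stable but irrelevant attribute, the test will look highly “reliable” across sittings while predicting nothing we actually care about.

```python
import random
import statistics as st

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = st.mean(xs), st.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

random.seed(1)
n = 1000
# Each person has some stable attribute (like a favourite colour) that is
# consistent over time but irrelevant to anything we care about.
stable_trait = [random.gauss(0, 1) for _ in range(n)]
test_1 = [t + random.gauss(0, 0.3) for t in stable_trait]  # first sitting
test_2 = [t + random.gauss(0, 0.3) for t in stable_trait]  # a month later
# The outcome we actually care about is independent of that attribute.
outcome = [random.gauss(0, 1) for _ in range(n)]

print(f"test-retest reliability: {pearson(test_1, test_2):.2f}")   # high
print(f"validity vs outcome:     {pearson(test_1, outcome):.2f}")  # near zero
```

Reliability here is excellent, and entirely beside the point: repeatability is a property of the answers, not evidence that the test measures anything meaningful.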

Most of these systems apply an assumption of dichotomies, or even force them: you are either X or Y, you cannot be both, and you cannot change from one to the other. This has been disproven too.

When I did a “Predictive Index” test, I was told that I was far from empathetic, because I was evidence and data driven. According to PI, someone cannot be both evidence-oriented and empathetic. Not only is this offensive, it’s completely unfounded. In fact, research shows that people with more rigorous and evidence-driven thinking skills are also better at understanding and managing emotions. These are simply not valid tests.

We should all be suspicious of algorithms that describe us or make decisions about us that are closed source, and psychometric tests are no different. Predictive Index have repeatedly declined to open source their algorithm, ostensibly to protect their intellectual property.

The key to the Big Five model is its simplicity. It doesn’t sort anybody into a “type”; it just tells them where they fall on a continuum of personality traits. There are no tricks, no surprises to be revealed, and it’s not a black box. However, even though it’s the most trusted psychological profiling test in academia, the Big Five has been found to be systematically sexist: women are told they’re significantly more disagreeable than men who answer questions identically.

Criticisms of MBTI and others extend even further, often due to a highly westernised, English-language, neurotypical approach.

 

Dangerous tools?

Evidence shows that, far from being a “short-cut” to more insightful leadership, tools such as these can be harmful – they may convince managers that they’re doing “good management”, and discourage further effort to improve management and leadership behaviours. At worst, they’re actively discriminatory and detrimental to individual and team performance, reducing the quality of human interactions and decreasing levels of psychological safety.

Conversely, I’ve actually found value in doing “Which Hogwarts house are you in?” or “Which Sex and the City character are you?” quizzes with teams. They’re obviously nonsense, but they facilitated a good discussion with team members about preferences and styles – and it was much more fun than MBTI!

(In fact, those quizzes have an advantage over some of the “official” tests because they make no pretence of scientific accuracy.)

Finally, I’ve never come across a strongly competent leader who used personality testing and categorisation. It seems to me (and I’m conscious of my own biases here) that these tests can sometimes risk replacing empathy: a way to feel like you’re understanding people and “doing the work” without actually putting in the effort to do so.

Personally, given all the flaws and limitations of personality profiling, I hope organisations stop using them, and businesses stop trying to make money out of them.  We try not to use flawed tools to do finance, accounting, software development, design, or data analysis. Why is it acceptable to use flawed tools to understand and manage the most important thing in organisations – people? And why, when we realise that they’re flawed, are they so “sticky”? Why can’t we seem to get rid of them?

What do you think? Are they a useful tool, or a potentially dangerous over-simplification of human nature?

 

Read more: https://adamgrant.substack.com/p/mbti-if-you-want-me-back-you-need and https://www.psychologytoday.com/gb/blog/give-and-take/201309/goodbye-to-mbti-the-fad-that-wont-die

Resilience Engineering, DevOps, and Psychological Safety – resources

With thanks to Liam Gulliver and the folks at DevOps Notts, I gave a talk recently on Resilience Engineering, DevOps, and Psychological Safety.

It’s pretty content-rich, and here are all the resources I referenced in the talk, along with the talk itself, and the slide deck. Please get in touch if you would like to discuss anything mentioned, or you have a meetup or conference that you’d like me to contribute to!

Here’s a psychological safety practice playbook for teams and people.

Open Practice Library

https://openpracticelibrary.com/

Resilience Engineering and DevOps slide deck  

https://docs.google.com/presentation/d/1VrGl8WkmLn_gZzHGKowQRonT_V2nqTsAZbVbBP_5bmU/edit?usp=sharing

Resilience engineering – Where do I start?


Turn the Ship Around! by David Marquet

Lorin Hochstein and Resilience Engineering fundamentals 

https://github.com/lorin/resilience-engineering/blob/master/intro.md

 

Scott Sagan, The Limits of Safety:
“The Limits of Safety: Organizations, Accidents, and Nuclear Weapons”, Scott D. Sagan, Princeton University Press, 1993.

 

Sidney Dekker: “The Field Guide to Understanding Human Error”, 2014

 

John Allspaw: “Resilience Engineering: The What and How”, DevOpsDays 2019.

https://devopsdays.org/events/2019-washington-dc/program/john-allspaw/

 

Erik Hollnagel: Resilience Engineering 

https://erikhollnagel.com/ideas/resilience-engineering.html

 

Cynefin


 

Jabe Bloom, The Three Economies

The Three Economies an Introduction

 

Resilience vs Efficiency

Efficiency vs. Resiliency: Who Won The Bout?

 

Tarcisio Abreu Saurin – Resilience requires Slack

Slack: a key enabler of resilient performance

 

Resilience engineering and DevOps – a deeper dive


 

Symposium with John Willis, Gene Kim, Dr Sidney Dekker, Dr Steven Spear, and Dr Richard Cook: Safety Culture, Lean, and DevOps

 

Approaches for resilience and antifragility in collaborative business ecosystems: Javaneh Ramezani and Luis M. Camarinha-Matos:

https://www.sciencedirect.com/science/article/pii/S0040162519304494

 

Learning organisations:
Garvin, D.A., Edmondson, A.C. and Gino, F., 2008. Is yours a learning organization? Harvard Business Review, 86(3), p.109.
https://teamtopologies.com/book
https://www.psychsafety.co.uk/cognitive-load-and-psychological-safety/

 

Psychological safety: Edmondson, A., 1999. Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), pp.350-383.

The Four Stages of Psychological Safety, Timothy R. Clark (2020)

Measuring psychological safety:

 

And of course the YouTube video of the talk:

Please get in touch if you’d like to find out more.

Anonymous feedback can destroy psychological safety

Feedback sucks. Advice is better.

In most cases, feedback sucks. It really does.

Unless the person delivering the feedback is highly empathetic, has lots of free time, is highly skilled and is in the proper position to provide it, and the person receiving it is in the right frame of mind, open to feedback, confident, mature and in a safe place, it’s probably going to be uncomfortable at best or at worst, devastating.

Delivering feedback is hard. In my experience managing teams over a couple of decades, I’ve seen it done so badly that it verges on abuse (in fact, on occasion it certainly was abuse), and despite my best efforts, I’ve delivered feedback so badly that the relationship took months to recover. I’ve learned from those experiences, and now I’m better, but certainly not perfect.

But ultimately it is important to give and receive feedback if we want to get better at the things we care about. Given how incredibly hard it is to deliver feedback in person, why would we facilitate anonymous feedback?

A misguided solution.

Anonymous feedback is often presented as a solution to problems including unequal power dynamics, bias, fear, or a lack of candour. In reality, anonymous feedback masks or even exacerbates those problems. Great leadership and management solves, or should solve, those problems.

Anonymity reinforces the idea that it’s not safe to speak up. It’s mistaken for objectivity. It presumes that the people who receive it will interpret it exactly as it was intended.

Feedback must be contextual. It must also be actionable, otherwise why provide it?

Conversations matter.

The reason we deliver feedback in person is because it demands a discussion. For example, imagine someone wishes to give you feedback on the way you behaved in a meeting, because you came across as aggressive and intolerant. You’d certainly want to know, but you would also want them to know that an hour before that meeting, you’d received some upsetting family news and were struggling to deal with it. That conversational feedback then provides a channel for an open and frank discussion, and an opportunity to support each other.

If that same feedback was delivered anonymously, not only is your theoretical self having a tough time with family problems, but now (in your head, for that is where we all reside) you’re overly aggressive, intolerant, and failing in your role.

Feedback must be actionable.

Anonymous feedback is incredibly difficult to act upon, and can breed a sense of frustration, fear, and resentment, particularly in small teams and organisations.

All feedback must be a conversation. And in order to have a conversation, you must be able to converse with the other party.

You may work in a high-trust, low-politics environment. Or you may believe that you do, since rarely is this truly the case. If you believe that you do, check your privilege. Are you experienced, senior, well paid, white, cis, male, able-bodied or neurotypical? Chances are, for those that are not in those categories, the degree of trust and safety they feel may be somewhat lower, and the impact of feedback considerably greater.

Unconscious bias

There are numerous biases in effect when it comes to feedback and indeed all interpersonal relationships, particularly in the workplace. For example, women are often perceived as more aggressive than men when demonstrating the same behaviour, due to an unconscious bias that women should be more feminine.

Anonymous feedback, rather than removing that bias, enables and feeds it, because a woman receiving anonymous feedback that she should “be less aggressive” is forced to accept it as objective, when in reality it may simply reflect the giver’s bias.

Bias affects everyone. A man may receive feedback suggesting he should be less softly spoken in meetings, an introvert may be told they should speak up more, or (and this happens a lot) a young woman may be told to smile more.

Motivations

Consider the motivations for someone providing anonymous feedback. One reason might be that they genuinely want you to be better, and they already think you’re great, so they’re giving you a chance to excel even more. That’s the only good reason for feedback. All others, including power-plays, envy, bias, inexperience, or simple misunderstanding of the situation, are terrible reasons, and will only have a negative impact on the team.

The point is that when providing feedback, even if your intentions are pure, you will not be aware of your unconscious biases, and working through those biases is something that only a conversation can facilitate.

Dialogue.

In every single 1-1 you have with a team member, ask what you can do better, what more or less you could be doing, or what, if anything, you could change in your interactions with team members. This regular, light-touch, conversational cadence provides a safe space for feedback. And even if in 99% of the sessions there is no feedback to give, it ensures that when some feedback is required, it comes easily and isn’t a difficult process.

Anonymity encourages poor leadership.

Anonymous feedback processes also provide a get-out, an excuse, for poor leadership and avoiding conversations where feedback is requested or proffered. The thinking may be “I no longer need to ask what more I can do or how I can be better, since we have regular anonymous feedback instead.” This is dangerous, and leads to a general degradation of good leadership practices.

For these reasons, I never provide or accept anonymous feedback. I will always, instead, have a conversation.

Culture.

If you’re tempted to use anonymous surveys and feedback, ask yourself why you feel that anonymity is required, and address the underlying issues. A truly great culture doesn’t require anonymity, and an organisation without a great culture is not maximising the potential of the people within it.

Q. How do you know that you could improve as a leader?


A. You’re still breathing.

Check out Jenifer Richmond and find out more about her excellent executive coaching services. I’ve been working with Jenifer for some time now, and she has helped me hugely in identifying my career goals and, through questioning and challenging, helped me to make difficult decisions and changes of direction where necessary. I really can’t recommend her enough.