From my amateur alchemical/medical historian point of view, there are some obvious human ‘knowledge disruptors’ that can lead to problems in medicine, and which may need to be rendered explicit. These are:
1) Tradition and authority
2) Reaction
3) Misguided logic
4) Anecdote and self-confirmation
5) Self-interest
6) Ethical staunchness
There may well be further obvious knowledge disruptors; this is not an attempt to limit them.
Nearly all medical problems are intensified through the interaction of these human knowledge disruptors with biological complexity. However, these disruptors are not just present in medicine; they are likely to generate problems in people’s attempts to deal with complex systems of all types.
Furthermore, these knowledge disruptors all tend to be boosted when there are social groups, or social conflicts, involved.
If other people agree with you, praise the genius of those who agree with you, praise the ethical rightness of agreement, or condemn those who disagree with you as stupid or immoral, then that reinforces the knowledge disruption. As I have said many times before: for most of us, in most situations, knowledge is socially verified. Thinking we are independent probably means we think similarly to those we classify as fellow independents.
1) Tradition and Authority.
This disruptor usually takes forms such as: “We have always treated the disease this way, and this way alone,” or “Galen, or Paracelsus, or Steiner, or ‘some other important figure’ says we should treat this disease or this problem this way, and this way alone,” or “We have always lived this way and it was really successful, so we should continue to live this way.” “Those other people who disagree with tradition and authority are traitors, and are at best misguided.” “Altering our treatments and behaviour would be immoral.”
There are several problems with these claims and procedures.
The first set of problems is that the tradition or authority may:
- a) never have worked in the first place,
- b) never have worked without problems,
- c) have been derived by the makers of tradition applying some of the other knowledge disruptors, or
- d) have been enforced by violence, not through effectiveness.
For example, the makers of tradition may have argued “Galen used treatment X on a person and they recovered” (Anecdote). Or people may have applied the logic that people with damp conditions should be treated by warmth (possibly Misguided Logic), or asserted that “Paracelsus used the elixir of gold, which I can sell to you, to cure this disease” (Self-interest). Or they might argue: “Treatment X is traditional and anti-socialist, therefore it will work better than something that looks like socialist medicine” (Reaction and Ethical Staunchness). Ethical Staunchness is also likely to lead to enforcement by violence, in the same kind of way the religion of love led to the Inquisition: it’s how you save people.
The second set of problems centres on the fact that we have only a finite number of descriptive terms which can be applied to any disease. The description may ignore other important factors which, given that bodies have a huge range of possible responses, can render the normal treatment valueless in this case, or in this series of cases.
For example, a disease may generate the sense of heat and damp, with a rash. It may be important whether the rash is red, pink, brown, mottled, etc. Is the patient thirsty or dry? Given the limited vocabulary, diseases can resemble each other in the ways we describe them and yet be completely different in cause, prognosis and required treatment. Diseases are changing all the time, and new diseases appear. So a treatment which traditionally works for this apparent disease (as we describe it) may not work on the disease actually being faced.
Even if we diagnose a person by the presence of a ‘virus’ or bacteria within them, then, as we can see from Covid-19, it may have radically different effects depending on its interaction with the system: random variation, the patient’s constitution, age, other diseases or poisonings present, and so on, and thus require different treatments.
Even if the ‘same’ disease could be treated by herb X or antibiotic Y one hundred years ago with huge success, the disease may now have evolved into a form in which those treatments no longer work. Or the medicines may interact, combine, or compound with new background chemicals in different ways and no longer help; the medicines may even harm people nowadays in ways they did not originally. The herbs themselves may have changed; or it may have been a variety of herb grown in a particular field, with a particular chemical composition, that was actually effective, and the practitioner may no longer be getting the herbs from that source, as the variation was unknown.
It is also possible that part of the traditional treatment has been lost, because it was verbal, or imitative, and all we have of the tradition is the bit that seems (logically) plausible.
In summary: Tradition and Authority can be wrong; diseases and situations may change, or may look the same but be different; and the items used in the treatment can change over time as well. Fear of violence, or of being morally wrong (and/or being punished for this), can lead to a lack of attention to the actual problems.
When tradition and authority succeed, it is because the traditions have been useful in the past, and the past is similar enough to the present for them to remain effective. The question is always whether the situation is still the same as it was in the past, or whether the traditional ways of behaving have now created a problem which further application of those ways of behaving cannot solve.
2) Reaction
Reaction occurs when a group of people don’t like one or another tradition, for whatever reason, so they avoid its treatments, even when the treatments seem to work, or when the practitioners take their criticism on board and improve.
Usually, if people are in reaction, they campaign forcibly to destroy the tradition or people’s use of that tradition; they do not believe it can work or be improved. Potentially useful knowledge is lost: the classic baby-thrown-out-with-the-bathwater situation.
Reaction can be useful if the previous, or other, system has failed. But the attacked system may have advantages which are in danger of being ignored. It is not uncommon for a system to modify itself in reaction to the challenge from another system, then defeat the other system and, once that system is gone, enforce the old destructive ways more thoroughly.
3) Misguided Logic
This is probably one of the most common ways of getting things wrong. ‘Logic’ is only as good as its assumptions and procedures, and few sets of assumptions and procedures are going to be able to completely deal with, and predict, a complex universe. The logic and procedures may be faulty as well, but they back up important assumptions made by the group.
We can see this when people argue that fatty arteries are found in people with heart problems, therefore no fats must be eaten. However, some fats need to be eaten, as they are essential for human biological functioning, so a procedure based on this faulty logic may have bad health effects. Other people might argue that, as some fats are useful, humans should eat almost nothing but fat. But what if some ‘types’ of people should eat more fat and others less, or different people should eat different types of fat? The issue needs ongoing investigation, not to be settled by tradition, logic, anecdote or self-confirmation.
When Donald Trump advised his medical teams to study the effects of injecting disinfectants and using light to fight Covid, he was engaging in apparently misguided logic of the form: “Disinfectants and light may kill the virus, therefore they might kill the virus inside the body.” The problem was that taking disinfectants internally might also be injurious, or even lethal, and many people expected the President to be aware of this, and not make the suggestion in public where it might lead some people to try it out without medical supervision (because of the authority of the President, who is a self-confessed super-genius).
Group logic tends to ignore the variety and complexity of life, the things we don’t know, or don’t value, and the side-effects of treatments. Moreover, because the logic is persuasive, it may not be tested. If the patient dies from applying the logic, the problem is said to arise from the patient (Self-confirmation). Perhaps the patient did not follow the instructions properly? Perhaps the logic was applied too late? Perhaps it is just one of those things, as the procedure normally works? Maybe there was a mistake in this situation, but it is generally effective? Much back surgery seems a great example of “follow the logic” going wrong, where an apparently large lack of success has been ignored.
Another logic error takes the form of “if small amounts of something are good, then large amounts of it are even better”. People might argue that small amounts of substance B have beneficial, even necessary, properties, so we should take large amounts of substance B, when it could actually be poisonous over a certain level. We can see this most obviously in climate denial, where people argue that larger amounts of CO2 will simply propel plant growth and not cause any problems at all. The logic does not recognise the change of state that can be induced by too much of something which is generally necessary.
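The “more is better” fallacy can be made concrete with a toy dose-response curve. This is a hypothetical illustration, not a model of any real substance: the function, its shape and its numbers are invented for the sketch. Benefit saturates while harm accelerates, so the net effect rises at low doses and reverses at high ones.

```python
# Toy, hypothetical dose-response curve. Benefit saturates with dose,
# while toxicity grows faster and faster, so "more" eventually harms.
# All numbers are illustrative only.

def net_effect(dose):
    """Net effect of a dose: saturating benefit minus accelerating harm."""
    benefit = 10 * dose / (1 + dose)   # levels off as dose grows
    toxicity = 0.5 * dose ** 2         # accelerates as dose grows
    return benefit - toxicity

for dose in [0, 1, 2, 4, 8]:
    print(f"dose={dose}: net effect={net_effect(dose):+.1f}")
```

Under these made-up numbers, small doses help and large doses harm; the point is only that a quantity can be necessary at one level and poisonous at another, so the inference from “some is good” to “more is better” does not follow.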
In summary, the effectiveness of logic and theory is always limited in a complex universe. A deduction from the theory may be wrong in a specific situation, no matter how persuasive it is. Theory and logic have to be tested repeatedly, and data gathered which shows how effective the deductions are (and whether things have changed). That means someone needs to actively try to disprove the logic, as humans will tend towards self-confirmation, no matter how badly the deductions deliver.
4) Anecdote and self-confirmation.
The George Carlin video, in the version discussed earlier, is a great example of this. He says he swam in raw sewage as a kid (or had exposure to ‘germs’ and pollution) and has always been healthy, and that no one in his locale had polio. We may know he did not get polio, but we have only his own word, the word of someone who was not studying polio in his area, for the lack of polio there, and we have no study of the connection between the exposure to germs and pollution in the Hudson sewage and the lack of polio that he claims was general. It is also not impossible that there was a substance in the river which killed polio, while not affecting other diseases, so the success had nothing to do with the factors claimed.
We also don’t know whether people died of other things that we could attribute to such exposure, but which were so normal that they were ignored. We don’t know whether all his friends had life-long health from the same source, or whether some of them were sickly, or died in their thirties as a consequence.
Carlin has not looked for evidence that is not confirming, probably because he is in self-confirmation mode – and possibly because he made money telling his audiences what they want to hear. (“Disease is not threatening, you can get over it by being tough. Pandemics are never a problem for tough people as its only weak people who die. You do not have any responsibility to others, as that inhibits your ‘tough liberty.'”)
He might just be a naturally healthy and robust person. This fact is, in itself, interesting, but it may mean that his discussion of what keeps a person healthy is completely without generalisable value. Perhaps a person who is born robust enough can do things that would normally hurt other people, without any ill effect? We probably all know people who live in ways which would harm us, but which do not affect them that badly.
Self-confirmation usually leads people to ignore evidence which goes against their anecdotes or logic. If you have a group of people with the same biases, then self-confirmation is reinforced by the confirmation of trustworthy others in your group, who are your compatriots and friends. And if people outside your group say you are wrong, they are ‘obviously’ untrustworthy and likely to be trying to deceive you. You keep your belief to avoid losing status in your group, or being exiled for heresy.
Anecdote can open up interesting discussions, and it may be the only way to proceed at the beginning of a study; it may even be correct. But it is not compelling evidence, because it usually focuses on a limited number of cases in a complex world of difference.
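The weakness of small-sample anecdote can be illustrated with a simple simulation. The harm rate here is entirely hypothetical; the sketch only shows that even if an exposure harmed 30% of people, a small circle of acquaintances could quite plausibly all appear fine, while a larger, properly studied sample almost never would.

```python
import random

random.seed(1)

HARM_RATE = 0.3  # hypothetical: the exposure harms 30% of people

def all_friends_fine(n_friends):
    """Estimate the probability that n sampled people all escaped harm."""
    trials = 100_000
    lucky = sum(
        all(random.random() > HARM_RATE for _ in range(n_friends))
        for _ in range(trials)
    )
    return lucky / trials

for n in [3, 10, 50]:
    print(f"{n} people all fine: ~{all_friends_fine(n):.3f}")
```

With three acquaintances, “everyone I knew was fine” happens roughly a third of the time by sheer luck; with fifty, it essentially never does. The anecdote and the study describe the same world, but only one of them can distinguish a harmless exposure from a lucky sample.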
5) Self-interest
This may occur when the practitioner makes a living out of selling treatment. If you have a system, and someone comes to you, then you are likely to use it, rather than wonder if another system might be better in this case. If a practitioner sells medicines, surgery, treatment, health planning etc., then they will try to sell these to their patients, to keep their livelihood. They may be tempted to sell the most expensive and glamorous treatment – because glamour confirms anecdote and gives authority, and because the practitioner might make more money out of it. They may over-prescribe. They may perform recondite surgery because they can and they can charge for it, and so on. Again, if others you admire do similar things, then it reinforces the practice.
If a practitioner depends on selling treatment for their livelihood, they have even less incentive to test the treatments in the short term, and more likelihood of self-confirmation, following the authority which pronounces this a good treatment, using misguided logics to justify the treatment, and ignoring counter evidence. This does not mean all practitioners are corrupt by any means, but that many practitioners have an incentive to give unnecessary treatment – which may prove harmful.
Likewise, if a researcher receives funding from a body which has a commercial interest in a product or treatment, then they are more likely to keep their funding by praising the product or by muddying objections to it. The purchaser of research may also suppress negative results and keep the positive ones, because the negative results must be wrong, and it’s easier to see why they could be wrong. It does seem that pharmaceutical company research needs to be independently checked, rather than simply accepted.
6) Ethical Staunchness
Ethical staunchness comes about when a theory becomes identified with an ethical position which is taken to be fundamental. Change in the situation is irrelevant. Modification of the condemned, or the condemned procedure, is irrelevant. Failure of the moral position to generate what it considers to be success is irrelevant – the position is correct irrespective of the results. Ethical Staunchness basically implies that taking in evidence, aiming to find out what is wrong with an approach, or looking at the situation in detail is forbidden. If you criticise the position you are immoral, and not only face expulsion, but you cannot be listened to. People can be sacrificed to morals. Morality overwhelms observation.
Ethical Staunchness seems to be a refusal of complexity or negotiation, which is not the same as saying that ethics are unnecessary or always harmful. And sometimes ethical rigour may be required, so as not to compromise with something the person considers deeply immoral, as when people were staunchly anti-Nazi, and refused to support the persecution of those the Nazis had declared immoral. It may be that recognition of the problem does not lead to easy answers.
With complexity it is tempting to try and limit the variations and hesitations that are a normal part of the knowledge and living process, and to foreclose to certainty. This simplification may help action, and to some extent may be useful for a while, but have long term consequences which are disruptive of our ways of living and knowing.
In this blog post I have tried to suggest how socially standard ways of knowing and responding to complexity may disrupt our knowledge of the world, and our reactions to it.