Why the way we think about artificial intelligence is a big part of what could make it harmful

  • Post category:Uncategorized
  • Post last modified:October 8, 2025
  • the polarised, oversimplified way of thinking about complex and sensitive subjects that the media promotes, a habit deeply embedded in the collective mind, is extremely harmful, and the discussion surrounding AI is no exception
    • this is because we shape the discussion around whether AI ‘is making us dumber’, ‘will take our jobs’, or will ‘extinguish humanity’, rather than around the question: ‘how do we use AI to leverage its benefits while mitigating its risks as much as possible?’
      • Societal discussion loves simplified, polarised arguments: it’s either black or white; us or them; AI is good or bad. This way of thinking makes the debate seem simple, understandable, and easily actionable: just pick a side based on the ‘arguments’ available and demonise the other side while doing so. This happens in particular because of our primal tendency to choose the path of least resistance, not least because of our increasingly overloaded and compressed everyday lives, but also due to…
        • The media’s tendency to capitalise on these primal urges to drive engagement and profit: the debate is headline-making. No one cares about learning how to use AI properly when reading ‘news’; that’s too boring and understimulating (where’s the outrage? Where’s the feeling that I’m right and the other side is wrong? The feeling that it’s all so simple really, if only those idiots would grasp such simple reasoning…), and most importantly, it makes us think too much. We don’t like uncertainty (it makes us anxious and makes our brains hurt), and we don’t like the insecurity of feeling that there are so many things we have no clear, concrete answer to. This is what the media capitalises on. But the truth is that what the media does is exactly what we have blamed AI for ever since its incipient stages: making us dumber, making us not think for ourselves, destroying critical thinking skills. By oversimplifying complex issues and presenting itself as possessing ultimate truth, it actively strips us of our core ability to think in nuanced ways.
          • denial of accountability for critical thinking: when we accuse anything, a product, the media, AI, of MAKING society a certain way, this type of rhetoric infantilises us. It assumes that we have no agency and no critical thinking in the first place; that we are unable to develop or use frameworks to mitigate risks; that we must therefore be babysat, watched over, and shielded from everything that poses a danger to us. It completely disregards whether that danger is inherent or only arises in combination with our own actions or inaction. This sort of approach to anything is incredibly debilitating to our free will: what makes us dumber is also allowing others to make decisions we don’t understand, for reasons we don’t understand. And we don’t CARE to understand them either, because no one will make the effort to TEACH us any of this. We need to reshape education from its foundation if we are to survive the inevitable rise of AI; that means completely upgrading the eyes and mind with which we perceive and understand these blazing-fast changes. Otherwise our brains will be the equivalent of bringing caveman wooden clubs to fight off an alien invasion…
            • I am not deconstructing the media’s bias just to promote another bias of my own (an anti-media bias), and this relates exactly to my central point: the issue lies in both, in a lot of things actually, and not just one. And, you guessed it, it’s complicated. We must understand that, given the absolute scale of its influence in shaping people’s minds, mainstream media is both an enabler and a symptom of our primal cognitive biases. There must be a wide social understanding that we are, at our core, deeply flawed in our thinking; understanding why, how, and how to overcome this is what critical thinking actually is. For long enough we have shaped the education system merely to bring most members of society in line with the demands of the lower and middle tiers of the modern capitalist workforce; we can no longer afford such folly.
          • critical thinking has always been ‘endangered’: if we take these statements about AI at face value, we realise that AI is only one element in a long string of inventions accused of ‘making us dumber’ since the dawn of man, the printing press among them (which raised concerns that people would no longer be able to write or remember things)
          • we need a shift towards a new way of thinking about complex problems that concern us all
          • We need to understand the nuance around the use of AI and develop frameworks for understanding how AI works, how to leverage it for our work, and how to recognise and mitigate its risks. This is what being educated in the upcoming century means: being equipped with the mental software to handle AI. If we fail to do this as a society, AI has indeed already made us extinct, not as an inherent causal agent, but as a mere catalyst to the self-undermining work of our own doing.
          • Concrete example to demonstrate nuance in the use of AI:
          • AI is an extraordinary tool for enabling critical thinking, creative undertakings, and general brainstorming, problem solving, and information synthesis at an incredible pace, but it must be paired with an understanding of what AI is and isn’t: of how to produce quality work given its weaknesses and biases, and of how to stress-test or deconstruct its help and responses so we don’t blindly rely on it to just ‘do our work for us’
            • there is a huge difference between someone asking AI to write an essay for them and handing in its generated response without a second thought, and someone who methodically structures their ideas, research, and essay, stress-tests ideas and arguments, and polishes their writing skills whilst challenging the AI to provide detailed reasoning for its answers or assessments (or giving it specific, carefully crafted sets of instructions so that it serves the required purpose). The fact that we are unable to see this nuance is in and of itself proof that we are already ‘dumb’ in the sense we fear AI will make us, and we have long been this way. However, it is true that if we continue to use AI the same way we think about it, we will most certainly bring great harm upon ourselves. There is still time to mitigate that before it’s too late.
              • practical insights on how to use it right: learning how to craft effective prompts; understanding its inherent wiring for appeasing and validating its users and its tendency to get factual information wrong (to hallucinate); learning how to use it to build frameworks for carrying out different types of work (writing, thinking, automation, reasoning and debating); and, most importantly, learning how to challenge and question its responses to ensure suitability for the task (do not simply ask it to do something and use the result: ask it how its response aligns with what you need and the requirements of the task, then adjust as needed).
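
To make the last point concrete, here is a minimal sketch in Python of the “craft carefully, then challenge” workflow described above. The helper names (`build_prompt`, `build_challenge`) are illustrative placeholders, not any real AI library’s API; the idea is simply that a prompt is a structured artifact you design, and that a follow-up question stress-tests the response instead of accepting it blindly.

```python
def build_prompt(task: str, constraints: list[str], context: str = "") -> str:
    """Assemble a structured prompt: the task, any context, explicit
    constraints, and a standing request for step-by-step reasoning."""
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append("Constraints:")
        parts.extend(f"- {c}" for c in constraints)
    # Asking for visible reasoning makes the response easier to deconstruct.
    parts.append("Explain the reasoning behind your answer step by step.")
    return "\n".join(parts)


def build_challenge(requirement: str) -> str:
    """A follow-up that questions the response rather than accepting it."""
    return (
        f"Before I use this: how does your answer satisfy the requirement "
        f"'{requirement}'? List anything you are uncertain about or may "
        f"have gotten factually wrong, so I can verify it myself."
    )


# Example usage: a carefully specified request, then a stress-test follow-up.
prompt = build_prompt(
    task="Outline an essay on media polarisation and AI",
    constraints=[
        "Cite no statistics you cannot source",
        "Flag speculation explicitly",
    ],
)
print(prompt)
print(build_challenge("Flag speculation explicitly"))
```

The point of the sketch is the habit it encodes, not the code itself: state the task and constraints explicitly instead of asking vaguely, demand visible reasoning, and follow up with a challenge that surfaces uncertainty you can then verify yourself.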