Why we all need to start thinking about Artificial Intelligence now

Posted: 19th July 2023 Michael Appleby, Managing Director, Altair Consulting

Artificial Intelligence (AI) will be a bigger disruptor than any other recent macro event (think Brexit, Covid and the War in Ukraine). The big difference, though, is that whilst those were all top-down disruptors which changed our lives (almost) overnight, AI will disrupt from the bottom up. There will be no single defining moment where we all instantly shift systems, go into a lockdown, or watch mass panic on the financial markets. Instead, the change will be gradual but rapid, unfolding over a short space of time.

With the proliferation and capabilities of AI tools increasing exponentially, we are all already playing catch-up. It is well documented that ChatGPT (the best-known form of Generative AI) reached 100 million users worldwide within two months of its launch in November 2022, despite the technology having largely been off the public's radar until then. By comparison, Instagram took nearly 2.5 years to build the same user base. In terms of capability, and although not a perfect assessment, some tests have suggested ChatGPT-4 (the latest version) has an IQ of 155 (Einstein's was estimated at 160), with forecasts that ChatGPT-5 is likely to be 10x more capable. It is also worth noting that this latest generation of AI tools is significantly more complex and capable than what we were used to even just six months ago.

I’d cautiously estimate that right now every organisation in the country has at least a small group of employees who are actively using some form of AI tool to support their work – mostly unbeknown to their managers.

I mean why wouldn’t you? You suddenly have a ‘virtual assistant’ which has access to all the knowledge of humanity which in a fraction of a second can help with any daily task (produce letters / complete secondary research / write a song / provide advice / support problem solving / draft social media posts / respond to emails / translate text etc.). The potential for efficiency gains and other improvements is genuinely astounding and there are already organisations in other sectors which are embracing this opportunity and shifting their business and operating models.

What are the limits?

But AI (like humans) isn't perfect, and it has specific limitations which users should be aware of. For example, some tools only have access to data up to a certain cut-off date, all have inbuilt biases, all have a tendency to 'hallucinate' (i.e. generate responses which are not factually correct), and outputs tend to be generic.

These limitations will reduce over time as the tools improve. And the outputs of the currently available AI tools can be improved significantly with (human) user training. But even now, in a lot of cases, the outputs can take you 60% or 70% of the way towards a final desired version.

As well as the opportunities, there are big risks to consider, particularly at present while we are still in a 'blind growth' phase of use. For example, uncontrolled use of AI tools could result in data breaches; voice-cloning AI creates significant potential for fraud (a particular risk for housing associations with vulnerable customers); and overuse or misuse may result in incorrect decisions being made, or inaccurate information being provided as part of upward reporting.

These and other risks will have significant implications for housing associations to deal with. From a governance perspective, for example, how do Boards gain assurance (and how does that flow up to the Regulator) on work being completed or information provided if AI is being used? And what additional measures need to be put in place to protect against new forms of fraud?

And this is the challenge for leaders in the sector. How do you respond to something which is still out of direct sight, but developing at pace and already impacting on the organisation in unknown ways?

Some thoughts are:

  • The grief curve applies – we will all start in the ‘denial’ phase (“AI? It’ll never happen, or at least not for a long time!”), but we can’t stay there for long. Pretending this won’t happen, or thinking it will take a decade to have an impact, is in my view wrong. AI is already being used and having an impact, and the disruption (and the associated risks and opportunities) will only increase exponentially.
  • Invest in understanding – the bottom-up disruption is critical here; it is likely that some staff at lower levels of organisations already have a strong understanding of AI. The rate of development in AI is so quick that it is critical leaders (Boards and executive teams) gain a strong understanding now of what forms of AI are available, their potential applications and their limitations. A strategy and policy position for AI use should be put in place sooner rather than later.
  • Do not view this as a ‘technology issue’ – yes, AI is a technology, but thinking about it should not be outsourced to, or seen as the sole responsibility of, the IT team. AI will impact all aspects of a housing organisation, including interactions with customers, the types of risk which need to be managed, the skills required in the organisation, and how assurance is provided to the Board. AI needs to be on the agenda for every Board, Chief Executive and Executive Director.
  • Focus on culture – I have heard of organisations in other sectors which have taken a draconian “You will not use AI to do your work” approach. That has simply had the effect of driving the use of AI underground, out of sight and out of any control. In my view a more mature approach is needed, based on openness, transparency and trust. Employees will use AI, even if told not to. Organisations need to foster an open culture, support staff in understanding the limitations and ethical considerations of AI so they can make better decisions on a day-to-day basis, and ensure appropriate quality assurance processes are in place.

With all of this in mind, AI should already be on the risk and opportunity registers of all housing organisations. Examples of AI misuse are already occurring in other sectors (e.g. lawyers citing cases generated by ChatGPT which turned out not to exist). Within the next few months, I expect we will see an example in the housing sector where the misuse of AI has led to some form of detrimental impact (e.g. fraud, a data breach, discrimination or ill-informed decision making). It won't be at an organisation which has provided strong leadership and developed a mature, open approach to AI; it will more likely be the result of activity taking place under the radar.

For what it’s worth, I have a very positive view of the potential benefits and opportunities that AI can bring. Over the medium term there are massive opportunities for positive impact across a broad range of activities, at both a societal and a housing-sector level. But work needs to start now on understanding how tools are currently being used and what their future impacts could be, and on developing a culture which will enable employees to use them in the most productive and safe way.
