An unhealthy fascination (with AI)

I have been interested in AI for some time; it’s an area of tech that I have followed, developed and utilised for a couple of decades now.

Going back further, and outside a professional context, I recall being intrigued and surprised by early computer game AI (a genuine application of the field) and, further back still, by the standard depictions of AI in books and movies.

Here is how it all unfolded…

First Encounters: AI in gaming

In gaming, good early examples appeared in first-person shooters like Doom, where NPCs (non-player characters) began acting and interacting in more realistic, autonomous and lifelike ways. In particular I enjoyed triggering aggressive NPCs into fighting each other, thereby avoiding the need to take the damage ‘myself’.
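
As a toy illustration of that mechanic (nothing like Doom’s actual code, and the monster names are purely for colour), the infighting rule boils down to redirecting aggression at whoever hurt you last:

  class Monster:
      """A minimal NPC with a Doom-style 'infighting' rule."""

      def __init__(self, name, hp=100):
          self.name = name
          self.hp = hp
          self.target = None  # current attack target (player or another monster)

      def take_damage(self, amount, attacker):
          self.hp -= amount
          # The infighting rule: anger is redirected at the attacker,
          # even when the attacker is another monster rather than the player.
          if attacker is not self:
              self.target = attacker

  # Provoke two monsters into fighting each other instead of the player.
  imp, baron = Monster("imp"), Monster("baron")
  imp.take_damage(10, attacker=baron)  # the baron's stray fireball hits the imp
  print(imp.target.name)               # -> "baron": the imp now ignores us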

The trait that drew my attention in these games was their lifelikeness, a nascent kind of autonomy. Yes, it was still part of a pre-coded program, and relatively limited, but it offered at least a glimmer of the potential for computers with a form of agency.

Of academic interest

Later, in an academic context, my fascination led me to code a Genetic Algorithm (or GA; a Cross-Pollinating Parallel Island GA, to be precise) as the cornerstone of an MSc in IT systems in business.

GAs (long since surpassed by Deep Learning and Large Language Models) mimic evolution by encoding candidate solutions to a given problem as strings of code, then creating generations of possible solutions by combining those strings many, many times, with a little mutation thrown in. Each solution is scored for fitness by applying it to the problem; the fittest ‘survive’ and make it through to the next round (generation) of the endless competition.
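
As a minimal sketch of that loop, here is a plain GA (not the cross-pollinating island variant I built) on a toy problem where fitness simply counts the 1-bits in a candidate string:

  import random

  STRING_LEN = 20       # length of each candidate solution string
  POP_SIZE = 30         # candidates per generation
  MUTATION_RATE = 0.01  # per-bit chance of 'a little mutation thrown in'

  def fitness(candidate):
      # Toy scoring: apply the candidate to the 'problem' of being all 1s.
      return sum(candidate)

  def crossover(a, b):
      # Combine two parent strings at a random point to create a child.
      point = random.randrange(1, STRING_LEN)
      return a[:point] + b[point:]

  def mutate(candidate):
      # Occasionally flip a bit to keep the gene pool from stagnating.
      return [bit ^ 1 if random.random() < MUTATION_RATE else bit
              for bit in candidate]

  population = [[random.randint(0, 1) for _ in range(STRING_LEN)]
                for _ in range(POP_SIZE)]

  for generation in range(100):
      # The fittest 'survive' into the breeding pool for the next round.
      population.sort(key=fitness, reverse=True)
      survivors = population[:POP_SIZE // 2]
      children = [mutate(crossover(random.choice(survivors),
                                   random.choice(survivors)))
                  for _ in range(POP_SIZE - len(survivors))]
      population = survivors + children

  print(fitness(max(population, key=fitness)), "of", STRING_LEN)

Notice that nothing in this loop understands what a ‘good’ solution means; it just breeds, mutates and scores.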

My GA itself worked reasonably well but, foreshadowing much future AI, it was highly niche (addressing only one esoteric problem type) and struggled to integrate with the real world in any useful way. Fortunately this lack of application did not overly worry the examiners, and I got a solid mark for the work.

Diodes on, but nobody home

The trait that caught my attention during this exercise was the AI’s ability to find solutions through manipulation of data alone, with zero consciousness or genuine understanding of the problem. The AI had no clue what it was doing, or why it was doing it, but nonetheless found solutions that we humans did not.

This aspect of AI, its pseudo-intelligence without conscience or consciousness, has fuelled the world of sci-fi and still bothers high-profile tech entrepreneurs, including Musk and Gates, today. As the field grows and develops, accelerating the power and potential of AI, this challenge grows alongside it. Spider-Man understands that with great power comes great responsibility, but AIs do not, at least not yet.

The ‘alignment problem’, as it is sometimes called, has become a considerably more pressing concern as the zeitgeist once more wakes up to the power of AI through the popularity of LLMs like ChatGPT.

AI at work – sort of

Later, in a professional context, I worked in data analytics and visualisation, in the field that was to become known as ‘Big Data’. Some understandably conflate this with AI, or assume that the two practices are somehow intrinsically linked. However, whilst we did indeed seek to mine useful information from raw data, we rarely did so through genuine AI. More often our work was algorithmic (establishing patterns in a predefined way) or simply exposed what was already on the surface through powerful visualisations.

The important difference, and one that exposes a key misunderstanding about AI and its application, is that we were not asking AI to ‘discover’ and ‘inform’ us. Rather we were:

  1. Using machines as tools to enable us to do the discovering and,
  2. Subsequently encoding what we had learned in models, which were then applied to other and wider data sets (sketched in code below).
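
In code terms, that older pattern looked roughly like this; the churn rule and thresholds here are hypothetical, invented purely for illustration:

  # Step 1 happened offline: a human analyst eyeballed the charts and noticed
  # that customers inactive for 60+ days with falling spend tended to leave.
  # Step 2 encoded that human discovery as a fixed rule, applied to wider data.
  # The rule and thresholds are invented for illustration, not from real work.

  def likely_to_churn(days_inactive, spend_trend):
      return days_inactive > 60 and spend_trend < 0

  customers = [
      {"id": 1, "days_inactive": 75, "spend_trend": -0.3},
      {"id": 2, "days_inactive": 10, "spend_trend": 0.1},
  ]
  flagged = [c["id"] for c in customers
             if likely_to_churn(c["days_inactive"], c["spend_trend"])]
  print(flagged)  # -> [1]; the machine applies our insight, it discovered nothing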

Today, a much improved approach would look something like:

  • Find something interesting and tell us about it
  • Tell us something about the future (a change to make or a prediction to act on) based on this discovery
  • Tell us how confident you are, and give us the rationale behind your answer
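
A rough sketch of what that could look like with today’s off-the-shelf tools, using scikit-learn’s IsolationForest to ‘find something interesting’ without being told what to look for (the sales figures are made up, and the anomaly score is only a crude stand-in for confidence):

  import numpy as np
  from sklearn.ensemble import IsolationForest

  # Made-up daily sales figures with one odd spike the model must find itself.
  sales = np.array([100, 102, 98, 101, 99, 250, 103, 97]).reshape(-1, 1)

  model = IsolationForest(contamination=0.1, random_state=0).fit(sales)
  labels = model.predict(sales)        # -1 flags "something interesting here"
  scores = model.score_samples(sales)  # lower = more anomalous

  for day, (value, label, score) in enumerate(zip(sales.ravel(), labels, scores)):
      if label == -1:
          print(f"day {day}: sales of {value} look anomalous (score {score:.2f})")

The ‘explain your rationale’ step is the part that classic tooling never really delivered, and is exactly where LLM-based agents now promise to slot in.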

Parallel Running

In the 2000s I founded my first tech/web firm, in which we rode other tech waves, namely:

  • The rise of SaaS with its powerful, useful and often inexpensive tools
  • Open Source, which in combination with ‘broadband’ enabled cost-effective business systems
  • Overseas outsourcing marketplaces, which created access to a wide range of technical skills at prices affordable to SMEs

Our company (LCubed) leveraged all of these phenomena and combined them with a specialisation in the communication and use of complex information.

Mostly this entailed simplifying and applying (or helping others apply) science and research findings in practical, usable ways, either through clear and accessible information resources or by creating interactive tools, models and decision engines that people could use directly.

In one typical example, we analysed climate change projection data and quizzed subject-matter experts as the foundation for web-based decision tools that people and organisations on the ground could use to plan mitigation strategies.
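
As a flavour of what such a decision engine boils down to, here is a hypothetical sketch; the projection figures, thresholds and advice are invented for illustration and are not taken from the real tools:

  # Hypothetical decision engine: map a site's projected climate exposure to
  # mitigation advice. All numbers and rules here are illustrative only; the
  # real tools encoded thresholds reviewed by subject-matter experts.

  PROJECTIONS = {
      "coastal_site": {"sea_level_rise_m": 0.4, "days_over_35c": 25},
  }

  def mitigation_advice(site):
      p = PROJECTIONS[site]
      advice = []
      if p["sea_level_rise_m"] > 0.3:
          advice.append("review flood defences and drainage capacity")
      if p["days_over_35c"] > 20:
          advice.append("plan shading, cooling and heat-safety policies")
      return advice or ["no priority actions under current projections"]

  print(mitigation_advice("coastal_site"))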

So, although much of the data and insight we helped communicate was itself created utilising AI, for us the direct use of AI took more of a back seat to less interesting Web 2.0 approaches. On occasion we sought to use or apply it, perhaps more often than was strictly necessary, particularly when bots and natural language processing became widely available and easier to use, but pure AI was not a go-to tool of choice.

The field of AI was far from idle, of course; in fact it used its time wisely, progressing in leaps and bounds while riding Moore’s tide of compute power. Notably, Deep Blue and AlphaGo became unbeatable in their narrow fields, and AlphaZero self-trained its way to the front of the pack in a single day.

So whilst the early potential of AI proved not to be as paradigm-changing as hyped, it slowly found niches, permeated applications, and delivered on specific use cases, keeping the use of AI in its various guises widespread, but no longer in the spotlight.

Simultaneously, the world has become prodigiously digitised; data on everything and anything is now everywhere. In my view these pillars (AI sophistication, compute power and data availability) combine to create a perfect storm for the use of AI today and into the future.

The LLM explosion: Heaven or HAL?

So for several years at LCubed, whilst we focused on the realities of building a thriving business and were engrossed by other emerging technologies, smarter minds were making progress in AI. (Confession: I recall I even once secretly applied for a digital strategy and marketing role at OpenAI in its first months.)

Teams like those at OpenAI progressed with NLP and deep learning, laying the foundations for large language models like the Generative Pre-trained Transformers (GPTs).

The boom in both the awareness and popularity of these tools has brought AI back onto the agenda with a bang. It seems that Pandora’s box has been opened much sooner, and much more suddenly, than most people expected. Legend has it that Pandora’s curiosity compelled her to open the box gifted to her by Zeus, “and out swarmed all the troubles of the world”.

In this case my take on the contents of the box is more concerned than that of some tech optimists, but not entirely negative by any means. One thing, however, is for sure: significant change is coming, and it will shake up much of the world, including (and perhaps particularly) our professional classes.

The impact of this new phenomenon is difficult to predict, but my intuition leads me to think some of the following will be significant:

  • Is it alive? Interesting certainly, but perhaps less impactful than…
  • Is it aligned? If it can outsmart us without trying, why would it prioritise, and act on, our best interests?
  • What will bad actors be enabled to do? Even if the AI is never autonomous, it can surely be recruited to the aid of those with destructive goals.
  • What will be left for us to do? If ‘not a lot’, then…
  • How will we rejig the rules around value distribution?

In the short term, and without doubt, productivity will boom; but equally, over time, the value of specialist knowledge (as measured in a traditional capitalist market sense) will plummet as everyone gains access to the same stock of knowledge via personal ‘intelligent’ agents.

Historically, these macro shifts (Agriculture → Industrialisation → Information) have driven increased productivity and enabled individuals to rise up the value chain, creating wealth and improving lifestyles. This time, my view is that the displacement of those approaching the top of the ladder will have a different effect.

What will these talented knowledge workers do that cannot be done by an AI of some form, and that is still compelling enough to persuade others to pay them for it? If the value associated with programming/surveying/analysing/writing/animating/negotiating/planning (select or replace as you see fit) is driven towards zero, how will we agree on who has ‘earned’ their income, and how?

In the world of business there are two standard paths to efficiency: automation and outsourcing. Globalisation brought a wave of outsourcing that changed even our small business. We became more flexible and capable, at the same time as reducing costs. The trade-off was not felt by owners and managers, but by our juniors, as the local teams decreased in size to the point where only team leaders and management remained in Australia.

Automation is more challenging, as building bespoke tech is expensive, and it’s just plain hard to automate complex activities. Here again I believe that is likely to change, because automated development is not far away, meaning that a trial-and-error approach to tool building becomes feasible (Agile on steroids). If an attempt at automation doesn’t work, just throw it out and try again.

Throughout my time building systems and tools, the word ‘just’ has been a red flag for me when heard in any discussion of needs. Almost without fail its use says more about the speaker than the task, but now I find I am using it myself – a leading indicator of the coming change… perhaps.