Improving our everyday life and our impact

Creating, preserving and avoiding the destruction of value

Artificial intelligence (AI) exploded into the global public consciousness in 2023, when generative AI, or GenAI, became part of the everyday lexicon. In the business world, AI also took centre stage, with R&D spending funnelled into AI projects and investments flowing into AI start-ups as the scale and breadth of the opportunity became apparent.

In tandem with the initial awe and excitement, however, there is ongoing scrutiny of the ethical implications and of the need for safeguards against misuse, at both domestic and international level. Governments of most major economies are giving laser focus to AI regulation.

As a tech-focused group, we keenly understand that AI is turbocharging the digitisation of economies and sparking opportunities that will shape future generations of business. AI has been core to our business and strategy for over five years. In the same period, our talent pool of data scientists, machine learning engineers and data engineers has grown over eightfold to around 550.

GenAI is creating another wave of opportunities, but also risks of disruption. For our group, the priorities are to protect existing investments and operations from this disruption, while significantly accelerating innovation and designing new products/businesses with GenAI.

Our edtech companies are the most exposed to the risks and opportunities of GenAI by virtue of their business models centred on content.

Stack Overflow, for example, has faced this duality earlier than other companies. While models like ChatGPT can divert traffic away from Stack Overflow, its data and community are also unique and essential for training new code-assistance models, such as those of OpenAI and Google, as well as proprietary models built by others. In response, Stack Overflow has introduced a set of tools called OverflowAI, which includes GenAI assistance for both the public site and the Stack Overflow for Teams products.

Every tech wave has its downside. With AI, the different types and levels of risk all require focus: the long-term existential risks as well as those already present. Disinformation, supercharged by deepfakes, data privacy issues and biased decision-making continue to erode trust.

In line with our purpose as a tech-centred group, using AI responsibly is non-negotiable. Our models must be robust, so that they operate predictably within known boundaries of reliability. They must be unbiased and non-discriminatory, and they must be transparent, so that their outputs can be clearly explained and understood.