Adam provides an example of how Strategic Initiatives drive transformation in the Consumer Goods, Retail, and Manufacturing sectors
Many organizations force GenAI into strategic initiatives without the data foundations needed to sustain them. Jumping straight to sophisticated LLMs before establishing a single source of truth, clear data ownership, and fit-for-purpose techniques typically results in the following:
Research suggests that this is a prevailing problem:
Adam Rogalewicz, Manager, SWAT Team, PwC Poland
Adam Rogalewicz shares insights drawn directly from experience gathered across data transformation projects that enable strategic initiatives. Adam’s work focuses on building the data foundations, including effective governance, that enable the deployment of GenAI and other forms of Deep Learning beyond proof of concept into production to attain measurable business outcomes.
Strategic initiatives drag on not because we do not know how to deploy the technology, but because solid data foundations are lacking. For example, when master data is not standardized, harmonized, and enriched, we lack a consistent ontology that defines key business domains, such as Customer, Product, Services, and Vendors, to name a few, and the relationships between them. Without consistent master records, AI outputs become inconsistent, with hallucinations and biases that increase validation costs and ultimately stall the entire initiative.
To mitigate this, we reconfigure the Enterprise Data Platform to provide access to cleansed datasets. Once we have cleansed data, we can combine deterministic techniques with LLMs to increase the accuracy and performance of simple tasks like matching & merging master records.
To put this into practice and make it less abstract, let’s consider a simple example of a consumer goods company bottling and selling sparkling water through a network of distributors and retailers. The company allocates EUR 100 million of promotional spend to boost volume, but its master records defining Products down to Stock Keeping Unit (SKU) level are duplicative and inconsistent, e.g. ‘Sparkling Water 500ml’ is the same product as ‘Sparkling 0.5L’ and ‘Sparkle Water 500’, and so on. Now imagine the impact when there are over 10,000 SKUs across thousands of product lines and hundreds of thousands of different types of materials and packaging used to manufacture these product variants. The EUR 100 million promotional spend would typically be misallocated, simply because Commercial teams like Marketing have a different definition of ‘Product’ from that of the Finance, Supply Chain and Procurement departments.
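As a purely illustrative sketch of the deterministic side of this problem, the Python rules below are hypothetical (they are not PwC's actual tooling, and the patterns and unit conversions are far from exhaustive), but they show how rule-based normalization can collapse obvious unit and spelling variants of the SKU descriptions above into a canonical form.

```python
import re

# Hypothetical normalization rules for free-text SKU descriptions.
# The patterns, unit conversions and spelling fixes are illustrative only.
UNIT_PATTERNS = [
    (re.compile(r"\b0?\.5\s*l\b"), "500ml"),   # '0.5L' -> '500ml'
    (re.compile(r"\b500\s*ml\b"), "500ml"),    # '500 ml' -> '500ml'
    (re.compile(r"\b500\b"), "500ml"),         # bare '500' assumed to mean millilitres
]
SPELLING_FIXES = {"sparkle": "sparkling"}      # assumed brand vocabulary

def normalize_sku(description: str) -> str:
    """Return a canonical form of a free-text SKU description."""
    text = description.lower().strip()
    for pattern, canonical in UNIT_PATTERNS:
        text = pattern.sub(canonical, text)
    words = [SPELLING_FIXES.get(word, word) for word in text.split()]
    return " ".join(words)

for raw in ["Sparkling Water 500ml", "Sparkling 0.5L", "Sparkle Water 500"]:
    print(raw, "->", normalize_sku(raw))
```

Note that two of the three variants collapse to the same canonical string, while ‘Sparkling 0.5L’ still differs by a missing word; that residual difference is exactly the kind of case that fuzzy matching and escalation, discussed next, are meant to handle.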
This simple yet very prevalent inconsistency leads to reduced net revenue and, worse, cannibalization of margin-accretive products that should instead be promoted but are not. To solve this we can try using LLMs to detect duplicates and clean the product catalogue. Initially the results may look promising, but once we account for the number of permutations across product categories, brands, SKUs and departments, token costs become too high for this to be commercially viable on a daily basis. So what is a more viable solution? A combination of fuzzy matching and rule-based normalization, with LLMs and clerical review reserved for the more complex cases. By adopting such a ‘hybrid’ approach to matching & merging, in which LLMs were just one of many methods applied, the organization reduced the misallocation of promotional spend and the cannibalization of profitable products, and ultimately improved net revenue at a fraction of the original cost.
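A minimal sketch of what such a hybrid triage step might look like, using only the Python standard library: the thresholds, record names and the triage helper below are hypothetical and would need tuning against a labelled sample, and in practice the escalated pairs would be routed to an LLM prompt or a clerical review queue rather than printed.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Illustrative thresholds; real values would be tuned against labelled data.
AUTO_MERGE = 0.92   # confident enough to treat as the same product record
ESCALATE = 0.75     # ambiguous: route to an LLM prompt or clerical review

def similarity(a: str, b: str) -> float:
    """Cheap deterministic similarity on already-normalized descriptions."""
    return SequenceMatcher(None, a, b).ratio()

def triage(records: list[str]):
    """Split candidate pairs into auto-merge and escalate buckets."""
    merged, escalated = [], []
    for a, b in combinations(records, 2):
        score = similarity(a, b)
        if score >= AUTO_MERGE:
            merged.append((a, b, score))       # resolved by rules alone, no tokens spent
        elif score >= ESCALATE:
            escalated.append((a, b, score))    # only these reach the LLM / clerical review
    return merged, escalated

catalogue = ["sparkling water 500ml", "sparkling water 500 ml", "sparkling 500ml"]
auto, review = triage(catalogue)
print("auto-merge:", auto)
print("needs review:", review)
```

The design point is simply that cheap deterministic scoring disposes of the bulk of candidate pairs, so tokens are spent only on the genuinely ambiguous minority.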
Strategic initiatives like Net Revenue Optimization, Zero-Based Budgeting, and migrations to new instances of ERPs and CRMs, including SAP S/4HANA and Salesforce, to name just a few, will always depend on reliable data drawn from multiple sources both inside and outside the organization. There is, of course, now the opportunity to improve and accelerate strategic initiatives with GenAI and a multitude of new and sophisticated LLMs. However, in doing so we should not forget to also utilize much simpler techniques, including rules-based Machine Learning and Natural Language Processing, in combination with human supervision in the form of clerical reviews. Adopting such a multifaceted approach may require additional upfront configuration effort, but it pays off through lower operating costs should LLM token costs remain comparatively high. So, instead of throwing expensive LLMs at every initiative, explore different combinations that may yield a greater return on investment in the long term.
For complementary viewpoints on turning AI into business outcomes: