Have legacy systems failed us?


I have been working on-and-off with “legacy” systems for decades. The exact definition of such a thing may come across as vague and ill-defined, but that’s OK. The next generations of software developers, data engineers, data scientists and, in fact, anyone working in tech will present you with this idea, and then you’ll have to work out how real their perspective is.

For any twenty- or thirty-something in tech these days, anything created before they were born or before they started their career is likely labeled legacy. It’s a fair perspective. Any system has successors. Yet if something is ‘old’ and is still clicking and whirring in the background as a key piece of technology holding business data together, it might reasonably be considered part of some legacy.

The term is loaded though.

For those who haven’t quite retired yet (myself included), legacy connotes some sort of inflexible, unbendable technology that cannot be modernized or made more contemporary. In some industries or contexts, though, legacy implies heritage, endurance and resilience. So which is it? Both views have their merits, but they carry different tonalities: one very negative, one almost revered.

In my early working years, I had the pleasure of seeing the rise of the PC, at a time when upstart technologies were trying to bring personal computing into the home and the workplace. The idea of computing at home was for hobbyists and dismissed by the likes of IBM. Computing at work often involved being bound to a desk with a 30kg beige-and-brown-housed CRT “green screen” dumb terminal, with a keyboard that often weighed as much as, or more than, the heaviest of modern-day laptops.

One of the main characteristics of these systems was that they were pretty consistent in the way that they operated. Yes, they were limited, especially in terms of overall functionality, but for the most part the journeys were constrained and the results and behaviours were consistent. Changes to these systems seemed to move glacially. The whole idea of even quarterly software updates, for example, would have been somewhat of a novelty. Organizations that had in-house software development teams laughably took the gestation period of a human baby to get pretty much ‘anything’ done. Even bugs, once detected and root-cause analysed, would often take months to be remediated, not because they were complex to solve, but because of the approaches to software change and update.

I suppose, in some industries, the technology was a bit more dynamic, but the friends and colleagues I worked with in other industry sectors certainly didn’t report a high velocity of change in these systems. Many of them were mainframe and mini-mainframe based, often serviced by one or more of the big tech brands that dominated in those days.

I would suppose, then, that a characteristic of modern systems and modern practices is encapsulated in the idea of handling greater complexity: dealing with higher volumes of data and the need for greater agility. The need for integrated solutions, for example, has never pressed harder than it does today. We need, and in fact demand, interconnectedness, and we need to be able to trace the numerous golden threads of system interoperability and application interdependence at the data level, at unprecedented scale.

In the past we could get away with manual curation of all kinds of things, including describing what we had and where it was, but the volumes, complexities and dependencies of systems today make the whole idea of doing these things manually seem futile and fraught with the risk of being incomplete and quickly out of date. Automation is now more than a buzzword, it’s table stakes, and many will rightly assume that automation has already been considered in the design.

Legacy Systems and Their Limitations

As anyone who regularly uses office applications will attest, just a cursory look at your presentations, documents, spreadsheets, folders and shared content will demonstrate how quickly things can get out of hand.

Unless you are particularly OCD, perhaps, you likely have just one steaming heap of documents that you’re hoping your operating system or cloud provider is able to adequately index for a random search.

If not, you’re bound to your naming conventions (if you have any), the recency timestamps or some other criteria. In some respects, even these aspects seem to make all this smell suspiciously like a “legacy problem”.

The growth of interest and focus in modern data management practices means that we need to consider how business and operational demands are reshaping the future of data governance.

I still don’t have a good definition of what a “legacy system” really is, despite all this. The general perspective is that it is something that predates what you work with on a daily basis, and that seems as good a definition as any. We have to acknowledge, though, that legacy systems remain entrenched as the backbone of a great many organizations’ data management strategies. The technology may have advanced and data volumes may have surged, but many legacy systems endure, despite their inadequacies for contemporary business needs.

Inability to Handle Modern Data Complexity

One of the most significant challenges posed by legacy data systems is often their inability to cope with data volumes and the inherent complexities of contemporary data. Pick your favourite system and consider how well it handles those documents I described earlier, either as documents or as links to those documents in some cloud repository.

Many of the solutions that people think of as legacy were designed more than a generation ago, when technology choices were more limited, there was less globalization, and we were still weaning ourselves off paper-based and manually routed content and data. Often the data itself was conveniently structured with a prescribed meta model and stored in relational databases. These days, businesses face a deluge of new data types—structured, semi-structured, and unstructured—emanating from an ever-growing number of sources, including social media, IoT, and applications.

Legacy transaction and master data systems now have to manage tens or hundreds of millions of records spread across function- and form-specific siloed systems. This fragmentation, in turn, leads to inconsistencies in the data’s content, quality and reliability. All this makes it difficult for organizations to know what to keep and what to discard, what to pay attention to and what to ignore, what to use for action and what to treat as merely supplementary.

If there is enough metadata to describe all these systems, we may be lucky enough to index it and make it findable, assuming we know what to look for. The full or even partial adoption of the hybrid cloud has simply perpetuated the distributed-silos problem. Now, instead of discrete applications or departments acting as data fiefdoms, we have the triple threat of unindexed data in legacy systems, data in local system stores and data in cloud systems. Any technical or non-technical user finds it understandably challenging to find what they want and what they should care about, because there are very few fully integrated, seamless platforms that describe everything in a logical and accessible way.

Rigidity and Lack of Agility

Legacy and traditional systems are also characterized by some inherent rigidity. Implementing or running them often involves elongated processes that can take months or even years, and daily operations require regimented discipline. New initiatives hooked to legacy applications are typically expensive and prone to high failure rates, due to their inherent complexity and the need for extensive customization to integrate with more contemporary technologies.

For example, prominent ERP software company SAP announced in February 2020 that it would provide mainstream maintenance for core applications of SAP Business Suite 7 (ECC) software until the end of 2027.

But according to The Register, as recently as June 2024 representatives of DACH customers suggested that they don’t believe they will even meet the 2030 cut-off when extended support ends.

Research by DSAG, which represents SAP customers in the DACH region, found that 68% still use the “legacy” platform, with 22% suggesting that SAP ECC/Business Suite influenced their SAP investment strategy for 2024. Many are reluctant to upgrade because they have invested so heavily in customizations. All this makes for some tough calls.

The rigidity of the legacy system, compounded by the reluctance of customers to upgrade, does present a challenge in terms of understanding just how responsive any business can be to changing business needs. SAP wants you to use shinier and glossier versions of their technology in order to maintain a good relationship with you and to ensure that they can continue adequately supporting your business into the future, but if you won’t upgrade, what are they to do?

Modern digital economies expect businesses to be able to pivot quickly in response to market trends or customer demands, and being stuck on legacy solutions may be holding them back. Companies running on legacy may need significant time and resources to adapt further or to scale to meet new expectations. Apparent system inflexibility will likely hinder innovation and limit one’s ability to compete effectively.

Unification is a possible answer

If you recognise and acknowledge these limitations, then you’re likely already shifting away from the traditional siloed approaches to data management towards more unified platforms.

Integrated solutions like SAP provide a holistic view of organizational data, and they have been paramount for years. But even here, not all the data is held in these gigantic systems. SAP would segment the platforms by business process: Order to Cash, Procure to Pay, Hire to Retire and so on. But businesses are multidimensional. Business processes aren’t necessarily the way the business thinks about its data.

A multinational running on SAP may think about its data and systems in a very regional fashion, or by a specific industry segment like B2C or B2B; it may even fragment further depending on how it is set up. A channel-focused business, for example, is not unusual: eCommerce vs retail stores; D2C… The number of combinations and permutations is seemingly limitless. Yet each of these areas is likely just another data silo.

A break with data silos fosters cross-divisional collaboration, allowing the business to enhance decision-making processes and improve overall operational efficiency. ERP doesn’t necessarily promote this kind of thinking. Such a shift is not just a reaction to the shortcomings of legacy systems; it is also driven by a broader trend towards digital transformation.

In commercial banking, for example, thinking through the needs and wants of the different regional representations, the in-market segments and then the portfolio partitions means that some data is common and some data is not, but most importantly, all of the data likely needs to be in one unifying repository and definitely needs to be handled in a consistent, aligned, compliant and unified way. Through the lens of risk and compliance, everyone’s behaviours and data are viewed in the same way, irrespective of where their data is held and who or what it relates to.

Incorporating modern capabilities like artificial intelligence (AI), machine learning (ML), and big data analytics requires solutions that can support these initiatives effectively, and it is a popular topic of discussion. You can pooh-pooh AI and ML as fads with relatively limited real applicability and value right now, but like yesteryear’s personal computers, mobile phone technology and the like, these things have an insidious way of permeating our daily lives in ways we may never have considered, and before we know it, we have become hooked on them as essential capabilities for getting through our daily lives.

Lessons in retail

In modern retail in the developed world, for example, every product has a barcode and every barcode is attached to a master data record entry that is tied to a cost and pricing profile.

When you check out at the grocery store, the barcode is a key to the record in the point-of-sale system and pricing engines, and that’s the price that you see on the checkout receipt. Just 25 years ago, stores were still using pricing “guns” to put stickers on merchandise, something that still exists in many developing countries to this day. You might laugh, but in times of high inflation it was not uncommon for consumers to scratch about on the supermarket shelves looking for older stock of merchandise with the old price.
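To make the mechanics concrete, here is a minimal, hypothetical sketch of that barcode-as-key idea: the scanned code is simply a lookup into a master data record that carries the pricing profile. The barcodes, descriptions and prices below are invented for illustration.

```python
# Minimal sketch: a barcode acts as the key into a master data record
# that carries the pricing profile. All values here are invented.

PRODUCT_MASTER = {
    "5012345678900": {"description": "Whole milk 1L", "unit_price": 1.29},
    "5098765432109": {"description": "Sourdough loaf", "unit_price": 2.85},
}

def price_at_checkout(barcode: str) -> float:
    """Return the current price for a scanned barcode, or raise if the
    master data entry is missing (the familiar 'unknown item' moment)."""
    try:
        record = PRODUCT_MASTER[barcode]
    except KeyError:
        raise LookupError(f"No master data record for barcode {barcode}")
    return record["unit_price"]

if __name__ == "__main__":
    print(price_at_checkout("5012345678900"))  # 1.29
```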

Sticker-based pricing may still prevail in places, but the checkout process is now often cashless and auto-reconciling for checkout, inventory and, especially, pricing, all with the beep of that barcode being read by a scanner.

As these technologies become even more affordable and accessible to businesses of all sizes, even the most cost-conscious, and as their use in all aspects of buying, handling, merchandising and selling grows, the idea of individually priced merchandise will probably disappear altogether. We will still be frustrated by the missing barcode entry in the database at checkout, or by the grocery item that is sold by weight and needs to be given its own personal pricing barcode because the checkout doesn’t have a scale. This then becomes a legacy problem in itself, where we straddle the old way of doing things and a new way.

In much the same way, transitioning from legacy to something more contemporary doesn’t mean that an organization has to completely abandon heritage systems, but it does mean that continuing to retain, maintain and extend existing systems should be continuously evaluated. The point here is that once these systems move beyond their “best-by” date, an organization encumbered by them should already have a migration, transition or displacement solution in mind or underway.

This would typically be covered by some sort of digital transformation initiative.

Modern Solutions and Approaches

In stark contrast to legacy systems, modern solutions are typically designed with flexibility and scalability in mind.

One could argue that perhaps there’s too much flexibility and scale sometimes, but they do take advantage of contemporary advanced technologies, which means that they potentially secure a bit more of a resiliency lifeline.

A lifeline in the sense that you will continue to have software developers available to work on it, users who actively use it because of its more contemporary look and feel, and a few more serviceable versions before it is surpassed by something newer and shinier, at which point it too becomes classified as “legacy”.

Cloud-Native Solutions

One of the most significant advancements in data systems these days, is the prevalence of cloud-native solutions. Not solutions ported to the cloud but rather solutions built from the ground up using the cloud-first design paradigm. I make this distinction because so many cloud offerings are nothing more than ‘moved’ technologies.

Cloud-native systems may use microservices architecture — a design approach allowing individual components to be developed, deployed, and scaled independently. They may also make use of on-demand “serverless” technologies. By taking advantage of the modularity afforded by microservices, organizations can adapt their data management capabilities relatively quickly in response to changing business requirements, whether through technology switch-outs or incremental additions. The serverless element means that they use compute on demand, which in theory means lower operational cost and reduced wastage from overprovisioned, idle infrastructure.
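As a rough illustration of what that modularity looks like in practice, here is a minimal sketch of a single-purpose, independently deployable component: one function with one responsibility, modelled loosely on a serverless-style handler. The data store, field names and payload shape are my own assumptions for illustration, not any particular platform’s API.

```python
# Sketch of a single-purpose, serverless-style component: one function,
# one responsibility (look up a customer record), invoked on demand.
# The in-memory store and field names are invented for illustration.
import json

FAKE_CUSTOMER_STORE = {
    "C-1001": {"name": "Acme Ltd", "segment": "B2B"},
}

def handler(event, context=None):
    """Entry point invoked per request; nothing to provision or keep warm."""
    customer_id = event.get("customer_id")
    record = FAKE_CUSTOMER_STORE.get(customer_id)
    if record is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(record)}

if __name__ == "__main__":
    print(handler({"customer_id": "C-1001"}))
```

Because the component owns a single narrow capability, it can be replaced, scaled or billed independently of everything around it, which is the adaptability described above.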

Many cloud-native data management solutions also have the ability to more easily harness artificial intelligence and machine learning technologies to enhance data processing and analysis capabilities. Such tool use facilitates real-time data integration from diverse sources, allowing businesses to more easily maintain accurate and up-to-date data records with less effort.

Instead of being bound to geographies and constraining hardware profiles, users only need an internet connection and suitable software infrastructure to securely authenticate. The technology that supports the compute can be switched out in a seemingly limitless number of combinations, according to the capabilities and inventory of the hosting providers’ offerings.

Scalability is one of the most pressing concerns associated with legacy systems, and one that these contemporary technologies seem to have largely overcome. Cloud-native solutions purport to be able to handle growing data volumes with almost no limits.

A growing data footprint also puts pressure on organizations that continue to generate vast amounts of data daily. The modern data solution promises to scale horizontally—adding more resources as needed—without impairment and with minimal disruption.

The concept of data mesh is also growing in popularity, gaining traction as an alternative to traditional centralized data management frameworks. At face value, this seems not dissimilar to the debate surrounding all-in-one versus best-of-breed solutions in the world of data applications. Both debates revolve around fundamental questions about how organizations should structure their data management practices to best meet their needs.

Data Mesh promotes a decentralized approach to data management by treating individual business domains as autonomous entities responsible for managing their own data as products. This domain-oriented strategy empowers teams within an organization to take ownership of their respective datasets while ensuring that they adhere to standardized governance practices. By decentralizing data ownership, organizations achieve greater agility and responsiveness in managing their information assets.

The concept also emphasizes collaboration between teams through shared standards and protocols for data interoperability. This collaborative approach fosters a culture of accountability while enabling faster decision-making driven by real-time insights. Set the policies, frameworks and approaches centrally, but delegate the execution to the peripheral domains to self-manage.
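One way to picture the “data as a product” idea is a simple, domain-owned product descriptor: the central function defines which fields every domain must declare, and the domain team fills them in and self-manages. This is purely an illustrative sketch; the field names, SLAs and checks are invented, not a standard.

```python
# Illustrative sketch of a domain-owned "data product" descriptor.
# The required fields are set centrally; the owning domain populates
# and maintains them. Names, SLAs and checks are invented examples.
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    owning_domain: str           # the accountable business domain, not central IT
    schema: dict                 # column name -> declared type
    freshness_sla_hours: int     # how stale the published data may be
    quality_checks: list = field(default_factory=list)

orders_product = DataProduct(
    name="orders_daily",
    owning_domain="ecommerce",
    schema={"order_id": "string", "order_date": "date", "net_value": "decimal"},
    freshness_sla_hours=24,
    quality_checks=["order_id is unique", "net_value >= 0"],
)
print(orders_product.owning_domain)  # the domain, not a central team, answers for this product
```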

The Evolutionary Path Forward

Evolving from legacy to modern data management practices then starts to reflect broader transformations which occur through the embrace of things digital. Such a shift is not merely about adopting new tooling; it represents a fundamental change in how businesses view and manage their data assets. Centralized, constrained control gets displaced by distributed accountability.

Along the way, there will be challenges to consider. Among these is the cost of all these threads of divergence and innovation. Not all business areas will necessarily run at the same pace. Some will be a little more lethargic than others, and their appetite for change or alternative ways of working may be very constrained and limited.

Another issue will be cost. With IT budgets remaining heavily constrained at most businesses, the idea of investing in technology-bound initiatives is nowadays wrapped up in elaborate return-on-investment calculations and expectations.

The burden of supportive evidence for investment now falls to the promoters and promulgators of new ways of working and new tech: to provide proof points, timelines and a willingness to qualify the effort and the justification before the investment flows. With all the volatility that might exist in the business, these calculations, forecasts and predictions may sometimes be very hard to make.

Buying into new platforms and technologies also requires a candid assessment of the viability or likelihood that any particular innovation will actually yield a tangible or meaningful business benefit. While ROI is one thing, the ability to convince stakeholders that the prize is a worthwhile one is another. Artificial intelligence, machine learning and big data analytics present as a trio of capabilities that hold promise, even as some continue to doubt their utility.

History is littered with market misreads: RIM’s BlackBerry underestimating the iPhone, and Kodak failing to grasp the significance of digital photography. Big Tech’s Alphabet (Google), Amazon, Apple, Meta and Microsoft may get a bunch wrong, but the more vulnerable businesses that depend on these tech giants cannot really afford to make too many mistakes.

Organizations need to invest as much in critically evaluating next-generation data management technologies as in their own ongoing market research, in order to understand evolving preferences and advancements. This includes observing the competition and shifts in demand.

Those that foster a culture of innovation, encourage experimentation and embrace new technologies need to be prepared to reallocate resources, or they risk having whatever position of strength they hold displaced, especially by newer, more agile entrants to their markets. Agility means being able to adapt quickly, a crucial characteristic for responding effectively to market disruptions. Being trapped with a legacy mindset and legacy infrastructure retards an organization’s ability to adapt.

Driving toward a modern Data-Driven Culture

To maximize the benefits of modern data management practices, organizations must foster a culture that prioritizes data-driven decision-making at all levels. In a modern data-driven culture an organization’s data management environment is key. Decisions, strategies and operations at all levels need to be bound to data.

For this to work, data needs to be accessible; the evaluators, users and consumers of the data need to be data literate; and they need to have the requisite access and an implicit dependence on data as part of their dailies. For effective data management there needs to be a philosophy of continuous improvement tied to performance metrics and KPIs, such as data quality measures, accompanied by true accountability.
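As a rough illustration of the kind of KPI this implies, here is a minimal sketch of a completeness score over required fields, the sort of measure that might feed a data quality scorecard. The sample records and fields are invented.

```python
# Minimal sketch: a completeness KPI over required fields, the sort of
# measure a data quality scorecard might track. Records are invented.
def completeness(records, required_fields):
    """Share of required field values that are actually populated."""
    total = len(records) * len(required_fields)
    if total == 0:
        return 1.0
    populated = sum(
        1 for r in records for f in required_fields if r.get(f) not in (None, "")
    )
    return populated / total

customers = [
    {"id": "1", "email": "a@example.com", "country": "DE"},
    {"id": "2", "email": "", "country": "FR"},
]
print(f"Completeness: {completeness(customers, ['id', 'email', 'country']):.0%}")  # 83%
```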

Building blocks for this data driven culture hinge not only on the composition of the people and their work practices but also on the infrastructure which needs to be scalable and reliable, secure and of high performance.

The data contained therein needs to be comprehensive, rich and accessible in efficient and cost-effective ways. The quality of the data needs to be able to stand up to all kinds of scrutiny, from a regulatory and ethical standpoint through auditability and functional suitability. Any effort to make the whole approach more inclusive, embracing a whole-of-organization mindset, should also be promoted. The ability to allow individual business units to manage their own data and yet contribute to the data more holistically will ultimately make the data more valuable.

If legacy has not failed us already, it will. Failure may not be obvious. It could be a slow, degraded experience that hampers business innovation and progress. Organizations that do not have renewal and reevaluation as an integral part of their operating model are the ones most likely to feel it.

To effectively transition from legacy systems to modern data management practices, organizations must recognize the critical limitations posed by outdated technologies and embrace the opportunities presented by contemporary solutions.

Legacy systems, while at some point foundational to business operations, often struggle to manage the complexities and volumes of data generated in today’s digital landscape. Their rigidity and inability to adapt hinder innovation and responsiveness, making it imperative for organizations to evaluate their reliance on such systems.

The shift towards modern solutions—characterized by flexibility, scalability, and integration—presents a pathway for organizations to enhance their operational efficiency and decision-making capabilities. Cloud-native solutions and decentralized data management frameworks like Data Mesh empower businesses to harness real-time insights and foster collaboration across departments. By moving away from siloed approaches, organizations can create a holistic view of their data, enabling them to respond swiftly to market changes and customer demands.

As I look ahead, I see it as essential that organizations cultivate their own distinctive data-driven culture.

A culture that prioritizes accessibility, literacy, and continuous improvement in data management practices. Such a shift would not only enhance decision-making but also drive innovation, positioning any organization more competitively in an increasingly complex environment.

All organizations must take proactive steps to assess their current data management strategy and identify areas for modernization.

They should begin by evaluating the effectiveness of existing legacy systems and exploring integrated solutions that align with their business goals.

They should invest in training programs that foster data literacy among employees at all levels, ensuring that the workforce is equipped to leverage data effectively.

They should commit to a culture of continuous improvement, where data quality and governance are prioritized. By embracing these changes, organizations can unlock the full potential of their data assets and secure a competitive advantage for the future.


Author: Clinton Jones

To the cloud no more? That is the question.


Cloud computing has undergone a remarkable transformation over the past decade.

What was once hailed as a panacea for companies struggling with the high costs and unsustainability of on-premise IT infrastructure has now become a more nuanced and complex landscape. As businesses continue to grapple with the decision to migrate to the cloud or maintain a hybrid approach, understanding the complexity, costs and risks is essential to grasping the evolving dynamics and the potential pitfalls that lie ahead.

The initial appeal of cloud solutions was undeniable.

By offloading the burden of hardware maintenance, software updates, and data storage to cloud providers, companies could focus on their core business activities and enjoy the benefits of scalability, flexibility, and cost optimization. The cloud promised to revolutionize the way organizations managed their IT resources, allowing them to adapt quickly to changing market demands and technological advancements.

However, not all businesses have fully embraced the cloud, especially when it comes to their mission-critical systems. Companies that handle sensitive or proprietary data have often been more cautious in their approach, opting to maintain a significant portion of their operations on-premise. These organizations may have felt a sense of vindication as they watched some of their cloud-first counterparts grapple with the complexities and potential risks associated with entrusting such critical systems to third-party providers.

The recent news from Basecamp, for example, was driven by spiraling costs, irrespective of the cloud provider (they tried AWS and GCP). Thus, Basecamp decided to leave the cloud computing model and move back to on-premise infrastructure to contain costs, reduce complexity, avoid hidden costs, and retain margin. This way they felt that they had more control of the delivery and sustainment outcomes.

The Ongoing Costs of Cloud-First Strategies

Cloud bills, for example, can comprise hundreds of millions or even billions of rows of data, making them difficult to analyze in traditional tools like Excel. At the same time, cloud computing reduces upfront startup costs, including setup and maintenance costs, with 94% of IT professionals reporting this benefit. Accenture, for example, found that cloud migration leads to 30-40% Total Cost of Ownership (TCO) savings.
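To illustrate why billing data at that scale pushes you beyond spreadsheets, here is a hedged sketch of summarizing a hypothetical cost-and-usage export with DuckDB, which can scan files far larger than a spreadsheet can open. The file name and column names are assumptions, not any provider’s actual billing schema, and the example requires the duckdb Python package.

```python
# Sketch: summarizing a (hypothetical) cloud cost-and-usage export that
# is far too large for a spreadsheet. Column and file names are
# assumptions, not a specific provider's billing schema.
import duckdb

query = """
    SELECT service, usage_account, SUM(unblended_cost) AS total_cost
    FROM read_csv_auto('billing_export.csv')
    GROUP BY service, usage_account
    ORDER BY total_cost DESC
    LIMIT 20
"""
top_spend = duckdb.sql(query).df()  # aggregate first, then eyeball the top offenders
print(top_spend)
```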

As many as 60% of C-suite executives also cite security as the top benefit of cloud computing, ahead of cost savings, scalability, ease of maintenance, and speed.

The private cloud services market for example, is projected to experience significant growth in the coming years. According to Technavio, the global private cloud services market size is expected to grow by $276.36 billion from 2022 to 2027, at a CAGR of 26.71%. 

The cloud, of course, supports automation, reducing the risk of the human errors that cause security breaches, and accordingly the platforms help capture the cost of tagged, untagged, and untaggable cloud resources, as well as allocate 100% of shared costs. For those organizations that have wholeheartedly adopted a cloud-first strategy, the operational budgets for cloud technologies have often continued to climb year-over-year.

Instead of fully capitalizing on the advances in cloud technology, these companies may find themselves having to maintain or even grow their cost base to take advantage of the latest offerings. The promise of cost savings and operational efficiency that initially drew them to the cloud may not have materialized as expected.

As this cloud landscape continues to evolve, a critical question arises: is there a breaking point where cloud solutions may become unviable for all but the smallest or most virtualized cloud-interwoven businesses?

This concern is particularly relevant in the context of customer data management, where the increasing number of bad actors and risk vectors, coupled with the growing web of regulations and restrictions at local, regional, and international levels, can contribute to a sense of unease about entrusting sensitive customer data to cloud environments.

The Evolving Regulatory Landscape & Cyber threats

The proliferation of data privacy regulations, such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States, has added a new layer of complexity to the cloud adoption equation.

These regulations, along with a growing number of industry-specific compliance requirements, have placed significant demands on organizations to ensure the security and privacy of the data they handle, regardless of where it is stored or processed. For businesses operating in multiple jurisdictions, navigating the web of regulations can be a daunting task, as the requirements and restrictions can vary widely across different regions.

Failure to comply with these regulations can result in hefty fines, reputational damage, and even legal consequences, making the decision to entrust sensitive data to cloud providers a high-stakes proposition.

Alongside the evolving regulatory landscape, the threat of cyber attacks has also intensified, with bad actors constantly seeking new vulnerabilities to exploit.

Cloud environments, while offering robust security measures, are not immune to these threats, and the potential for data breaches or system compromises can have devastating consequences for businesses and their customers.

The growing sophistication of cyber attacks, coupled with the increasing value of customer data, has heightened the need for robust security measures and comprehensive risk management strategies. Companies must carefully evaluate the security protocols and safeguards offered by cloud providers, as well as their own internal security practices, to ensure the protection of their most valuable assets.

Balancing Innovation and Risk Management

In light of these challenges, many businesses are exploring hybrid approaches that combine on-premise and cloud-based solutions.

This strategy allows organizations to maintain control over their mission-critical systems and sensitive data, while still leveraging the benefits of cloud computing for less sensitive or more scalable workloads.

Some companies are also taking a more selective approach to cloud adoption, carefully evaluating which workloads and data sets are suitable for cloud migration.

By adopting a risk-based approach, they can balance the potential benefits of cloud solutions with the need to maintain a high level of control and security over their most critical assets.

As the cloud landscape continues to evolve, it is essential for businesses to carefully evaluate their cloud strategies and adapt them to the changing circumstances.

This may involve regularly reviewing their cloud usage, cost optimization strategies, and the evolving regulatory and security landscape to ensure that their cloud solutions remain aligned with their business objectives and risk tolerance. Regular monitoring and assessment of cloud performance, cost-effectiveness, and security posture can help organizations identify areas for improvement and make informed decisions about their cloud investments.

Collaboration with cloud providers and industry experts can also provide valuable insights and best practices to navigate the complexities of the cloud ecosystem.

As the cloud landscape continues to evolve, it is clear that the path forward will not be a one-size-fits-all solution.

Businesses must be careful in weighing the potential benefits of cloud adoption against the risks and challenges that come with entrusting their critical data and systems to third-party providers. The future of cloud solutions will likely involve a more nuanced and balanced approach, where organizations leverage the power of cloud computing selectively and strategically, while maintaining a strong focus on data security, regulatory compliance, and risk management.

Collaboration between businesses, cloud providers, and regulatory bodies will likely be crucial in shaping the next chapter of the cloud revolution, ensuring that the benefits of cloud technology are realized in a secure and sustainable manner.


Author: Uli Lokshin

Data Governance playbooks for 2024


Back in 2020, I offered up some thoughts for consideration around generic or homogenous data governance playbooks. Revisit it if you care to.

This was in part fueled by frustrations with the various maturity models and potential frameworks available but also by the push, particularly from some software vendors, to suggest that a data governance program could be relatively easily captured and implemented generically using boiler-plated scenarios by any organization without necessarily going through the painful process of analysis, assessment and design.

Of course, there is the adage, “Anything worth doing, is worth doing well“, and that remains a truism as applicable to a data governance program as anything else in the data management space.

You can’t scrimp on the planning and evaluation phase if you want to get your data governance program to be widely adopted and effective irrespective of how many bucks you drop and irrespective of the mandates and prescripts yelled from the boardroom.

Like any change program, a DG initiative needs advocacy and design appropriate to the context and no vendor is going to do that perfectly well for you without you making a significant investment of time, people and effort to get the program running. If you’re evaluating a software vendor to do this for you, in particular, you need to be sure to check out their implementation chops and assess their domain knowledge, particularly relevant to your industry sector, market and organizational culture. This is a consulting focus area that “The Big Four” have started to look more closely at and are competing with boutique consultancies on. So if you have a passion for consulting and you feel all the big ERP and CRM projects have been done and you want to break into this space, then here is an area to consider.

What is it exactly?

The term “playbook” in a business context is borrowed from American football. In sports, a playbook is often a collection of a team’s plays and strategies, all compiled and organized into one book or binder. Players are expected to learn “the plays” and ahead of the game the coach and team work out the play that they are likely to run at the opposing team or the approach that they will use if the opposing team is observed to run a particular play of their own. Some plays may be offensive, some defensive and then there may be other plays for specialised tactical runs at a given goal or target.

A “business playbook” contains all your company’s processes, policies, and standard operating procedures (SOPs). Also termed a “company playbook”, it is effectively a manual outlining how your business does what it does, down to each business operations role, responsibility, business strategy, and differentiator. This should be differentiated from a RunBook where the latter is your “go-to” if a team needs step-by-step instructions for certain tasks. Playbooks have a broader focus and are great for teams that need to document more complex processes. It is a subtlety that is appreciated more when you are in the weeds of the work than when you are talking or thinking conceptually about new ways of optimizing organizational effectiveness and efficiency.

A data governance playbook is then effectively a library of documented processes and procedures that describe each activity in terms of the inputs and capture or adoption criteria, the processes to be completed, who would be accountable for which tasks, and the interactions required. It also often outlines the deliverables, quality expectations, data controls, and the like.
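To make that tangible, here is one illustrative way a single playbook entry might be captured in a structured form, so that inputs, adoption criteria, process steps, accountability, deliverables and controls are explicit. The field names and the example activity are my own invention, not a prescribed standard.

```python
# Illustrative structure for one data governance playbook entry; the
# fields mirror the elements described above (inputs, adoption criteria,
# process, accountability, deliverables, controls). Values are invented.
from dataclasses import dataclass, field

@dataclass
class PlaybookEntry:
    activity: str
    inputs: list
    adoption_criteria: str
    process_steps: list
    accountable_role: str
    deliverables: list
    data_controls: list = field(default_factory=list)

supplier_onboarding = PlaybookEntry(
    activity="Create supplier master record",
    inputs=["signed contract", "tax registration", "bank details"],
    adoption_criteria="Request approved by procurement lead",
    process_steps=["check for duplicates", "enrich with registry data", "approve"],
    accountable_role="Supplier Data Steward",
    deliverables=["active supplier record", "audit log entry"],
    data_controls=["four-eyes approval", "bank detail verification"],
)
```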

Under the US President’s management agenda, the federal Data Strategy offers up a Data Governance Playbook that is worth taking a look at as an example. Similarly, the Health IT Playbook is a tool for administrators, physician practice owners, clinicians and practitioners, practice staff, and anyone else who wants to leverage health IT. The focus is on the protection and security of patient information and ensuring patient safety.

So, in 2024, if you’re just picking up the concept of a playbook, and a data governance playbook in particular, it is likely that you’ll look at what the software vendors have in mind; you’ll evaluate a couple of implementation proposals from consultants and you’ll consider repurposing something from adjacent industry, a past project or a comparable organization.

Taking a “roll-your-own” approach

There’s plenty of reading content out there from books written by industry practitioners, and analysts, to technology vendors, as mentioned. Some are as dry as ditchwater and very few get beyond a first edition, although some authors have been moderately successful at pushing out subsequent volumes with different titles. A lot of the content though, will demonstrate itself to be thought exercises with examples, things/factors to consider, experiences and industry or context-specific understandings or challenges. Some will focus on particular functionality or expectations around the complementary implementation or adoption of particular technologies.

With the latest LLM and AI/ML innovations, you’ll also discover a great deal of content. Many of these publications, articles and posts found across the internet have already been parsed and assimilated into the LLM engines so, a good starting point is for you to ask your favourite chatbot what it thinks.

Using a large language model (LLM) like ChatGPT to facilitate the building of data playbooks might be feasible to a certain extent but there will be challenges.

On the plus side, an LLM could generate content and provide templates for various sections of a data playbook, such as data classification, access controls, data lifecycle management, and compliance. It can also assist in drafting policy statements, guidelines, and procedures.

It could help in explaining complex data governance concepts, definitions, and best practices in a more accessible language for use in say a business glossary or thesaurus. This could be beneficial for individuals who might not have a deep understanding of data governance – think about your data literacy campaigning in this context.

Users can also directly interact with an LLM in a question-answer format to seek clarity on specific aspects of data governance and help build an understanding of key data governance concepts and data management requirements.
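As a rough sketch of that kind of interaction, the snippet below asks a general-purpose LLM to draft a playbook section for later expert review, using the OpenAI Python client as one possible interface. The model name and prompt are placeholders and an API key is assumed; as argued below, the output still needs human validation before it goes anywhere near your playbook.

```python
# Sketch: asking a general-purpose LLM to draft a playbook section for
# expert review. Requires the 'openai' package and an OPENAI_API_KEY in
# the environment; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You draft data governance playbook sections for expert review."},
        {"role": "user",
         "content": "Draft an outline of a data retention policy for customer master data."},
    ],
)
# Hand the draft to a domain expert for validation; do not publish it as-is.
print(draft.choices[0].message.content)
```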

Just as for generic playbooks, there are going to be problems with this approach. LLMs operate based on patterns learned from a diverse range of data, but they often lack domain specificity. A data management platform or data catalog might itself have an LLM attached to it, but has it been trained with data governance practice content?

Data governance often requires an understanding of industry-specific regulations, data types, and organizational contexts that might not be captured adequately by a generic model.

We’ve also heard about AI hallucinations, and some of us may have even experienced a chatbot hallucination. Without the particular character of data governance practice and domain knowledge, there’s a risk that the AI might generate content that is wholly or partially inaccurate, incomplete, or not aligned with the actual organizational need. This then, would have you second-guessing the results and having to dig into the details to ensure that the suggested content is appropriate. You’ll need to have a domain expert on hand to validate the machine-generated output.

Data governance practices and regulations are also ever-evolving. What the LLM might not be aware of, is new regulations, new compliance expectations or new industry standards. So leaning purely on machine-generated content may be deficient in revealing emerging best practices unless it gets to be trained with updates.

Each organization has its unique culture, structure, and processes. Data governance is intertwined with the various organizational processes, and understanding these interconnections is vital; that’s best achieved with careful analysis, process design and domain knowledge. The tool you use to help elaborate your playbook might simply provide information in isolation, without any grasp of the broader organizational context. Without appropriate training and prompting, the specific nuances of the organization will make it almost impossible to tailor the generated content to align with organizational goals and practices.

I guess my whole point is that you will not escape the human factor. If you insist on going it alone and relying on machine-generated content in particular then that same content should undergo thorough validation by domain experts and organizational stakeholders to ensure that the results are accurate and aligned with organizational and industry requirements.

The use of modern-day tooling to assist human experts in drafting and refining data playbooks is a valuable acceleration approach that has merit but just as for generic playbooks and templates, you need to leverage the strengths of canned, automated generation and human expertise to arrive at a good result.

I’d love to hear what, if anything, you’ve done with chatbots, AI, ML and LLMs to generate content. If you are implementing any data management or data governance initiatives, I would love to know how successful you have been and any tips or tricks you acquired along the way.


Author: Clinton Jones

A Strategic Approach to Data Management


There is a delicate balance between the needs of data scientists and the requirements of data security and privacy.

Data scientists often need large volumes of data to build robust models and derive valuable insights. However, the accumulation of data increases the risk of data breaches, which is a concern for security teams.

This hunger for data and the need for suitable control over sensitive data creates a tension between the data scientists seeking more data and the security teams implementing measures to protect data from inappropriate use and abuse.

A strategic approach to data management is needed, one that satisfies the need for data-driven insights while also mitigating security risks.

There needs to be an emphasis on understanding the depth of the data, rather than just hoarding it indiscriminately.

In a Towards Data Science article, author Stephanie Kirmer reflects on her experience as a senior machine learning engineer and discusses the challenges organizations face as they transition from data scarcity to data abundance.

Kirmer highlights the importance of making decisions about data retention and striking a balance between accumulating enough data for effective machine learning and avoiding the pitfalls of data hoarding.

Kirmer also touches on the impact of data security regulations, which add a layer of complexity to the issue. Despite the challenges, Kirmer advocates for a nuanced approach that balances the interests of consumers, security professionals, and data scientists.

Kirmer also stresses the importance of establishing principles for data retention and usage to guide organizations through the decisions surrounding data storage.

Paul Gillin, technology journalist at Computerworld, raised this topic back in 2021 in his piece Data hoarding: The consequences go far beyond compliance risk. Gillin discusses the implications of data hoarding, which extend beyond just compliance risks, and highlights how the decline in storage costs has led to a tendency to retain information rather than discard it.

Pijus Jauniškis, a writer in internet security at Surfshark, describes how the practice can lead to significant risks, especially with regulations like the General Data Protection Regulation in Europe and similar legislation in other parts of the world.

In a landscape where data is both a valuable asset and a potential liability, however, a balanced and strategic approach to data management is crucial to ensure that the needs of both groups are met.

The data community has a significant responsibility in recognizing both.

Data management responsibilities extend beyond the individual who created or collected the data. Various parties are involved in the research process and play a role in ensuring quality data stewardship.

To generate valuable data insights, people need to become fluent in data. Data communities can help individuals immerse themselves in the language of data, encouraging data literacy.

Organizationally, a governing body is often responsible for the strategic guidance of a data governance program, prioritization of data governance projects and initiatives, and approval of organization-wide data policies and standards; if there isn’t one, one should be established.

Accountability includes the responsible handling of classified and controlled information, upholding data use agreements made with data providers, minimizing data collection, and informing individuals and organizations of the potential uses of their data.

In the world of data management, there is a collective duty to prioritize and respond to the ethical, legal, social, and privacy-related challenges that come from using data in new and different ways in advocacy and social change.

A balanced and strategic approach to data management is crucial to ensure that the needs of all stakeholders are met. We collectively need to find the right balance between leveraging data for insights and innovation, while also respecting privacy, security, and ethical considerations.


Author: Uli Lokshin

Unlocking Value through Data and Analytics


Organizations are constantly seeking ways to unlock the full potential of their data, analytics, and artificial intelligence (AI) portfolios.

Gartner, Inc., a global research and advisory firm, identified the top 10 trends shaping the Data and Analytics landscape in 2023 earlier this year.

These trends not only provide a roadmap for organizations to create new sources of value but also emphasize the imperative for D&A leaders to articulate and optimize the value they deliver in business terms.

Bridging the Communication Gap

The first and foremost trend highlighted by Gartner is “Value Optimization.”

Many D&A leaders struggle to articulate the tangible value their initiatives bring to the organization in terms that resonate with business objectives.

Gareth Herschel, VP Analyst at Gartner, emphasizes the importance of building “value stories” that establish clear links between D&A initiatives and an organization’s mission-critical priorities.

Achieving value optimization requires a multifaceted approach, integrating competencies such as value storytelling, value stream analysis, investment prioritization, and the measurement of business outcomes.

Managing AI Risk: Beyond Compliance

As organizations increasingly embrace AI, they face new risks, including ethical concerns, data poisoning, and fraud detection circumvention.

“Managing AI Risk” is the second trend outlined by Gartner, highlighting the need for effective governance and responsible AI practices.

This goes beyond regulatory compliance, focusing on building trust among stakeholders and fostering the adoption of AI across the organization.

Observability: Unveiling System Behaviour

Another trend, “Observability,” emphasizes the importance of understanding and answering questions about the behaviour of D&A systems.

This characteristic allows organizations to reduce the time it takes to identify performance-impacting issues and make timely, informed decisions.

Data and analytics leaders are encouraged to evaluate observability tools that align with the needs of primary users and fit into the overall enterprise ecosystem.

Creating a Data-Driven Ecosystem

Gartner’s fourth trend, “Data Sharing Is Essential,” underscores the significance of sharing data both internally and externally.

Organizations are encouraged to treat data as a product, preparing D&A assets as deliverables for internal and external use.

Collaborations in data sharing enhance value by incorporating reusable data assets, and the adoption of a data fabric design is recommended for creating a unified architecture for data sharing across diverse sources.

Nurturing Responsible Practices

“D&A Sustainability” extends the responsibility of D&A leaders beyond providing insights for environmental, social, and governance (ESG) projects.

It urges leaders to optimize their own processes for sustainability, addressing concerns about the energy footprint of D&A and AI practices. This involves practices such as using renewable energy, energy-efficient hardware, and adopting small data and machine learning techniques.

Enhancing Data Management

“Practical Data Fabric” introduces a data management design pattern that leverages metadata to observe, analyse, and recommend data management solutions.

By enriching the semantics of underlying data and applying continuous analytics over metadata, data fabric generates actionable insights for both human and automated decision-making. It empowers business users to confidently consume data and enables less-skilled developers in the integration and modelling process.

Emergent AI

“Emergent AI” heralds the transformative potential of AI technologies like ChatGPT and generative AI. Not everyone is convinced: “AI ‘Emergent Abilities’ Are a Mirage,” according to a paper presented in May at the Stanford Data Science 2023 Conference on claims of emergent abilities in large language models (LLMs), as cited by Andréa Morris, a contributor on science, robots and the arts at Forbes.

This emerging trend, however seemingly trivial, is expected to redefine how companies operate, offering scalability, versatility, and adaptability. As AI becomes more pervasive, it is poised to enable organizations to apply AI in novel situations, expanding its value across diverse business domains.

Gartner highlights another trend, “Converged and Composable Ecosystems,” an old topic from the start of the 2020s. It is focused on designing and deploying data and analytics platforms that operate cohesively through seamless integrations, governance, and technical interoperability.

The trend advocates for modular, adaptable architectures that can dynamically scale to meet evolving business needs.

The ninth trend, “Consumers as Creators,” is nothing particularly new; it envisions a shift from predefined dashboards to conversational, dynamic, and embedded user experiences.

Werner Geyser described 20 Creator Economy Statistics That Will Blow You Away in 2023 in his Influencer Marketing Hub piece.

A large percentage of consumers identify as creators: over 200 million people globally consider themselves “creators”.

Content creators can earn over $50k a year, and the global influencer market has now grown to a potential $21 billion in 2023.

Organizations are encouraged to empower content consumers by providing easy-to-use automated and embedded insights, fostering a culture where users can become content creators.

Humans remain the key decision makers and not every decision can or should be automated. Decision support and the human role in automated and augmented decision-making remain as critical considerations.

Organizations need to combine data and analytics with human decision-making in their data literacy programs. While indicators from market analysts like Gartner may serve as a compass, guiding leaders toward creating value, managing risks, and embracing innovation, the imperative to deliver provable value at scale underscores the strategic role of data and analytics leaders in shaping the future of their organizations.

As the data and analytics landscape continues to evolve, organizations that leverage the trends strategically will be well-positioned to turn extreme uncertainty into new business opportunities.


Author: Jewel Tan

Balancing Cloud Transformation in Turbulent Times


The spectre of an impending economic downturn looms large, prompting business leaders to re-evaluate their strategic decisions, particularly regarding cloud transformation.

Simon Jelley, General Manager for SaaS Protection, Endpoint and Backup Exec at Veritas Technologies notes that despite the economic uncertainty, cloud migration remains a prevalent trend, with 60% of enterprise data already residing in the cloud.

However, the challenge lies in maintaining the cost benefits associated with the cloud, as evidenced by the fact that 94% of enterprises fail to stay within their cloud budgets.

To address this, businesses are encouraged to adopt a hybrid multicloud environment, necessitating careful data management strategies. Here are key steps organizations should take:

  • Establish Data Visibility: Gain a comprehensive understanding of where your data resides, whether on-premises or in the public cloud.
  • Enable Workload Migration/Portability: Facilitate seamless movement of workloads between on-premises infrastructure and various cloud service providers.
  • Leverage Software-Defined Storage: Embrace agile and scalable storage solutions to accommodate the dynamic nature of multicloud environments.
  • Prioritize Data Regulatory and Compliance Issues: Ensure compliance with data regulations across different cloud environments.
  • Eliminate Data Protection Silos: Streamline data protection processes to avoid fragmentation and enhance overall security.

By implementing these measures, organizations can fortify their data management capabilities, ensure resilience and meet compliance objectives amid economic uncertainties.

Cybercrime: A Persistent Threat Demands Proactive Measures

As cybercrime continues to evolve, organizations must adapt their data management strategies to withstand increasingly sophisticated attacks. Ransomware, in particular, remains a potent weapon for cybercriminals seeking to exploit the value of organizational data.

While addressing cyber resilience is crucial, Jelley also advocates for a proactive approach to reduce the risk of attacks. The focus is on increasing data visibility, and the suggested steps include:

  • Create a Data Taxonomy or Classification System: Classify data based on sensitivity and importance to establish a clear understanding of information assets (see the sketch after this list).
  • Establish a Single Source of Truth (SSOT) Location: Designate centralized locations for each category of data to streamline management and control.
  • Define and Implement Policies: Develop and enforce policies tailored to the specific requirements of identified data types.
  • Continually Update and Maintain Data Taxonomy, SSOT, and Policies: Keep data management strategies agile and responsive to evolving cyber threats.

By adhering to these proactive measures, organizations can limit exposure and enhance their ability to recover in the event of a cyber attack, ultimately safeguarding their critical data.
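
To make the first three of these measures concrete, here is a minimal sketch of a data taxonomy that maps each category to a sensitivity tier, a designated single source of truth (SSOT) and a simple retention policy. The category names, system names and retention periods are assumptions for illustration only.

    from enum import Enum

    class Sensitivity(Enum):
        PUBLIC = 1
        INTERNAL = 2
        CONFIDENTIAL = 3
        RESTRICTED = 4

    # Hypothetical taxonomy: each data category maps to a sensitivity tier,
    # a designated single source of truth (SSOT) and a simple retention policy.
    taxonomy = {
        "marketing_content": {
            "sensitivity": Sensitivity.PUBLIC,
            "ssot": "cms",
            "retention_days": 365,
        },
        "customer_pii": {
            "sensitivity": Sensitivity.RESTRICTED,
            "ssot": "customer_mdm",
            "retention_days": 1825,
        },
    }

    def policy_for(category: str) -> dict:
        """Look up where a category of data should live and how it is governed."""
        return taxonomy[category]

    print(policy_for("customer_pii"))

The point of the sketch is the shape of the mapping, not the values: the fourth measure, continual maintenance, amounts to keeping this mapping current as data types, locations and threats change.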

Digitization 3.0: Unleashing the Power of Usable Data

Digitization has undergone significant phases, with the current era—Digitization 3.0—focusing on extracting maximum value from data while ensuring security, resiliency, and privacy. Jelley emphasizes the importance of contextualizing data to enhance its usability, paving the way for user experience-driven workflows. Building upon the foundation of the preceding trends, organizations can achieve this by:

  • Consolidate Data Control: Utilize platforms capable of managing data across diverse environments, including on-premises, virtual, and multicloud.
  • Map Uses and Users: Conduct a thorough analysis of existing tools and users to seamlessly transition to a consolidated platform.
  • Implement Adequate Training: Ensure that teams are well-versed in utilizing the new consolidated platform to maximize its functionalities.

Digitization 3.0 represents a paradigm shift in data utilization, emphasizing the need for organizations to not only manage and protect their data but also harness its full potential to drive innovation and customer-centric experiences.

As businesses navigate the intricate landscape of data management in 2023, Simon Jelley’s insights shed light on the pivotal trends shaping the industry.

Economic uncertainty, cybercrime, and Digitization 3.0 collectively underscore the importance of proactive, adaptive data management strategies. By embracing data visibility, fortifying cybersecurity measures, and leveraging the power of contextualized data, organizations can not only weather the challenges of the present but also position themselves for success in the data-driven future.

Jelley reiterates the fundamental importance of caring about data—its management, protection, and the ability to address prevailing trends. In a world where information is a critical asset, businesses that prioritize effective data management will not only survive but thrive in the face of evolving challenges.

As we close out 2023, staying abreast of these trends and implementing strategic data management practices will be integral to achieving long-term success in a data-centric business landscape.


Author: Flaminio

Incentivizing Consumers to Self-Serve Zero-Party Data and Consent


Privacy remains a big deal and there are several reasons why consumers may be hesitant to allow organizations to master their personal data.

Organizations keep records on consumers for various reasons, among them personalization, service, marketing, compliance and fraud prevention.

They may use your data to personalize your experience with their products or services, using your browsing and purchase history to recommend products that you are more likely to be interested in.

Keeping records of your interactions with customer service teams enables them to provide better support in the future and ensure that your needs are met quickly and efficiently.

Marketing campaigns may be annoying, but when they are personalized, perceptions can change. By analysing behaviour and preferences, marketers can create more relevant and targeted advertising that is more likely to result in a conversion.

Especially in financial services, organizations need to keep records on consumers to comply with legal and regulatory requirements. For example, they may need to keep records of your transactions for tax or accounting purposes but also to minimize the likelihood of money laundering or illegal use of financial instruments and infrastructure.

In exchange for goods, services or funding, they may use consumer data to prevent fraudulent activity; by monitoring behaviour, usage profiles and transactions, they can identify suspicious activity and take action to prevent fraud.

On the flipside, consumers may feel that their personal data is sensitive and should be kept private.

They may worry that if an organization masters their personal data, it could be used for nefarious purposes or sold to third-party companies without their consent.

Consumers may also be concerned that if an organization masters their personal data, it could be at risk of being hacked or stolen by cybercriminals, resulting in potential identity theft, personal financial loss, and other undesirable consequences.

Another concern is loss of control: consumers worry that if an organization masters their personal data, it will be used in ways they do not approve of, or that they will not be able to access or delete their data as they see fit.

In particular, consumers worry that their personal data could be used to discriminate against them based on their race, gender, religion, or other personal characteristics. Using personal data to decide who to hire, who to offer loans to, or who to market products to is, for consumers at least, an undesirable use of that data.

Consumers have also long felt that if an organization masters their personal data, it could lead to unwanted intrusion into their personal lives: being constantly targeted with ads or other forms of marketing, and having their behaviour monitored and analysed in ways that feel uncomfortable and like an invasion of privacy.

Zero-party data

An opt-in approach with first-party data can help to address some of the concerns that consumers may have about their personal data being mastered.

First-party data refers to the information that consumers willingly provide through interactions with a website, a product, or a service. An opt-in approach means that organizations only collect and use the consumer’s data with the explicit consent of the consumer. This can give consumers greater control over their data, and can help to build trust between consumers and organizations.

Those privacy concerns can be addressed through opt-in, meaning consumers must explicitly agree to the collection and use of their data in specific ways. This can give consumers greater control over their personal information and can help to ensure that their data is being used only for legitimate purposes.

By limiting the data that is collected to only what is necessary for specific purposes, the opt-in approach with first-party data helps to reduce the exposure risk associated with prospective data breaches. Organizations that collect first-party data are often also more invested in protecting that data, as it is valuable for building and maintaining the customer relationship.

An opt-in approach also gives consumers more control over their personal information, allowing them to choose which data to continue to share and to opt out of specific data and its collection at any time.

To reduce the risk of discrimination, organizations are required to obtain explicit consent before collecting data on personal characteristics. Though such data is typically used for personalization and targeted advertising, the consumer can decide how it should be used, especially in relation to important decisions that affect them.

An opt-in approach with first-party data also helps to reduce the feeling of intrusiveness. When consumers have control over what data is collected and how it is used, personalization and customization can enhance the user experience rather than detract from it.
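
A minimal sketch of how such opt-in consent might be recorded and enforced follows. The purpose names and helper functions are hypothetical and do not describe any particular consent-management product.

    from datetime import datetime, timezone

    # Hypothetical consent record for one consumer: each purpose is either
    # explicitly granted (with a timestamp) or treated as not consented.
    consent = {
        "personalization": {"granted": True, "at": datetime(2023, 6, 1, tzinfo=timezone.utc)},
        "marketing_email": {"granted": False, "at": datetime(2023, 9, 15, tzinfo=timezone.utc)},
    }

    def is_allowed(purpose: str) -> bool:
        """Only use data for a purpose the consumer has explicitly opted in to."""
        entry = consent.get(purpose)
        return bool(entry and entry["granted"])

    def opt_out(purpose: str) -> None:
        """Consumers can withdraw consent for a specific purpose at any time."""
        consent[purpose] = {"granted": False, "at": datetime.now(timezone.utc)}

    print(is_allowed("personalization"))  # True
    opt_out("personalization")
    print(is_allowed("personalization"))  # False

The design choice worth noticing is that absence of a record means no consent; data use is gated on an explicit, revocable grant per purpose rather than on a blanket agreement.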

If an organization is considering implementing a customer master data management solution, it’s important to understand how this approach can address consumers’ concerns about their personal data.

A CMDM provides greater transparency into the data that an organization collects and how it is used; this in turn builds trust with consumers, as they can see exactly what information is being collected and why.

By centralizing customer data and implementing robust security measures, a customer master data management solution reduces the number of places where a breach can occur and the avenues through which one might happen. This can also reassure consumers who are concerned about the security of their personal information.

A CMDM also enables organizations to provide more personalized experiences, which in turn helps to build stronger customer relationships, increase loyalty, and ultimately drive revenue growth.

Because an opt-in approach gives customers more control over their data, the CMDM can demonstrate that the organization respects the privacy of its customers. This is often an important differentiator in a competitive marketplace, where consumers are increasingly concerned about their data privacy.

A CMDM also helps with compliance. Organizations need to comply with data privacy regulations such as GDPR and CCPA. CMDMs, like the one offered by Pretectum, can help to avoid the legal and reputational risks associated with non-compliance by reassuring customers and regulators that consumer data is being handled in a responsible and compliant manner.

Overall, a customer master data management solution can help to build trust with customers, enhance data security, deliver better customer experiences, and demonstrate respect for privacy and compliance with regulations.
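
As an illustration only, with a record layout and helpers that are hypothetical rather than Pretectum's schema or API, a centralized master record can tie identity, attributes and consented purposes together so that an access or erasure request can be honoured in one place:

    # Hypothetical centralized customer master: one golden record per customer,
    # with attributes and consented purposes tracked alongside the identity.
    master = {
        "cust-001": {
            "name": "A. Consumer",
            "email": "a.consumer@example.com",
            "consented_purposes": {"personalization", "service"},
        }
    }

    def handle_access_request(customer_id: str) -> dict:
        """Return everything held about the customer, for transparency."""
        return master.get(customer_id, {})

    def handle_erasure_request(customer_id: str) -> None:
        """Support a GDPR/CCPA-style right-to-erasure request from one place."""
        master.pop(customer_id, None)

    print(handle_access_request("cust-001"))
    handle_erasure_request("cust-001")
    print(handle_access_request("cust-001"))  # {} once erased

The value of centralization shows up here: when there is a single golden record rather than copies scattered across systems, transparency and deletion obligations can be met without hunting down every silo.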

Communicating with customers about how their personal data is being collected, used, and protected is increasingly important in good customer relationship management.

Consumers expect organizations to be transparent about the data they collect and how it is being used. They expect clear communication on the purpose of the data collection and what benefits customers can expect from it. They also expect organizations to provide easy-to-understand information about their data rights and their options for managing that data.

Organizations should reassure customers that their personal data is stored and protected securely, explaining the measures they have put in place to safeguard against data breaches, such as encryption, firewalls, and access controls.

Using an opt-in approach to data collection means that customers have control over the data that is collected and can choose to opt out at any time. The benefits of opting in are, of course, more personalized experiences or access to exclusive offers.

Emphasizing respect for customer privacy and a commitment to protecting personal data go hand in hand, and would also involve explaining compliance with relevant data privacy regulations. The responsible organization also highlights any certifications or standards it has achieved in relation to governance and regulatory compliance.

The benefits that customers expect from data collection might seem obvious, such as an enhanced overall experience, but providing examples of how the data is used to personalize products and services, improve customer service, and offer tailored promotions and discounts is important communication.

Overall, effective communication with customers about the implementation of a customer master data management solution is critical to building trust and addressing concerns.

Through transparency on intent and behaviours, an emphasis on data security and privacy, an opt-in approach, a focus on customer benefits, and compliance with relevant regulations, organizations can reassure consumers that their personal data is being handled responsibly and ethically.

In response, consumers should engage in self-service zero-party data and consent inquiries because doing so gives them greater control over their personal data and over the experiences they have with an organization.

By providing preferences and consent, consumers can receive more relevant and personalized experiences, products, and services.

Ecommerce sites could show recommendations based on customer-stated interests and preferences, while health apps could provide workout plans tailored to a user's fitness level and selected goals.

Reduced clutter in inboxes may make interactions with an organization more efficient and enjoyable. When this is accompanied by the ability to decide what information is shared with an organization and how it is used, a feeling of greater control over personal data, and confidence that it is being handled responsibly, may follow.
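
One way to picture a self-service zero-party data inquiry is as a simple structure the consumer fills in themselves. The fields below are assumptions about what an organization might ask for, not a prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class PreferenceProfile:
        """Zero-party data: information the consumer states about themselves."""
        customer_id: str
        stated_interests: list = field(default_factory=list)
        preferred_channel: str = "email"
        share_for_personalization: bool = False

    # The consumer self-serves their own preferences and consent choices.
    profile = PreferenceProfile(
        customer_id="cust-001",
        stated_interests=["running", "coffee"],
        preferred_channel="app_notification",
        share_for_personalization=True,
    )

    # Recommendations are driven only by what the consumer chose to share.
    if profile.share_for_personalization:
        print(f"Recommend based on: {', '.join(profile.stated_interests)}")

Because the consumer supplies and controls every field, the organization never has to infer interests from surveillance; it simply acts on what was volunteered, for as long as the consent flag remains set.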

Keeping the interest alive

If data is collected but not used, it should be securely stored and then deleted after a reasonable period of time to ensure compliance with relevant data privacy regulations. Businesses can incentivize consumers to provide their data in the context of self-service zero-party data and consent inquiry by offering exclusives, discounts, rewards and previews.

Offering exclusive content, such as whitepapers, eBooks, or reports only accessible to those who provide their data can be a powerful incentive, especially for customers who are interested in a particular topic.

Personalized discounts or coupons for customers who provide their data, especially in retail, could encompass discounts on their next purchases based on stated interests or style preferences.

A free cup of coffee, for example, is an obvious incentive at a coffee shop, but consider how Waitrose did the same for loyalty card holders and how other retailers do the same for their loyalty scheme members. The offer of a free drink after a certain number of visits, with additional rewards for sharing preferences and feedback, is an obvious option; the others are a little more subtle.

Giving customers early access to new products, services, or features if they provide their data, as AMEX does for customers in association with events or event tickets, is a great way to build excitement and loyalty. Capital One and other financial institutions incentivize in similar ways.

Game or challenge events can also encourage customers to provide their data: Pokémon Go, a 2016 augmented reality mobile game, offers participants rewards for completing certain challenges. Additional rewards for sharing preferences and data are common with many card loyalty schemes as well as social apps.

In the end, it’s important to ensure that any incentives offered are aligned with the interests and preferences of customers, and that they are relevant and valuable.

Organizations today should ensure that they are transparent about their data collection practices and are respecting the privacy of their customers at all times.

Give customers the opportunity to self-serve and drive first-party data into the DNA of your business.


Author: Uli Lokshin
