How Network Effects Make AI Smarter
https://s7clear.com/how-network-effects-make-ai-smarter/ (Thu, 16 Mar 2023)

Network effects have dictated the success of technologies from the telephone to shopping platforms like Etsy, and AI tools such as ChatGPT are no exception. What is different, however, is how those network effects work. Data network effects are a new form. Like the more familiar direct and indirect network effects, the value of the technology increases as it gains users. Here, however, the value comes not from the number of peers (like with the telephone) or the presence of many buyers and sellers (as on platforms like Etsy), but from feedback that helps it make better predictions. More users mean more responses, which improve prediction accuracy, creating a virtuous cycle. Companies need to consider three lessons: 1) feedback is crucial, 2) data gathering should be meticulous and routine, and 3) the data you share, intentionally or not, deserves scrutiny.

Late last year, when OpenAI introduced ChatGPT, industry observers responded with both praise and worry. We heard how the technology could replace computer programmers, teachers, financial traders and analysts, graphic designers, and artists. Fearing that AI will kill the college essay, universities rushed to revise curricula. Perhaps the most immediate impact, some said, was that ChatGPT could reinvent or even replace the traditional internet search engine. Search and the related ads bring in the vast majority of Google’s revenue. Will chatbots kill Google?

ChatGPT is a remarkable demonstration of machine learning technology, but it is barely viable as a standalone service. To capitalize on its technological prowess, OpenAI needed a partner. So we weren’t surprised when the company quickly announced a deal with Microsoft. The union of the AI startup and the legacy tech company may finally pose a credible threat to Google’s dominance, upping the stakes in the “AI arms race.” It also offers a lesson in the forces that will dictate which companies will thrive and which will falter in deploying this technology.

To understand what compelled OpenAI to ally itself with Bing (and why Google may still triumph), we consider how this technology differs from past developments, like the telephone or market platforms like Uber or Airbnb. In each of those examples, network effects — where the value of a product goes up as it gains users — played a major role in shaping how those products grew, and which companies succeeded. Generative AI services like ChatGPT are subject to a similar, but distinct, kind of network effect. To choose strategies that work with AI, managers and entrepreneurs must grasp how this new kind of AI network effect works.

Network Effects Work Differently for AI

AI’s value lies in accurate predictions and suggestions. But unlike traditional products and services, which rely on turning supplies (like electricity or human capital) into outputs (like light or tax advice), AI requires large data sets that must be kept fresh through back-and-forth customer interactions. To remain competitive, an AI operator must corral data, analyze it, offer predictions, and then seek feedback to sharpen the suggestions. The value of the system depends on — and increases with — data that arrives from users.

The technology’s performance — its ability to accurately predict and suggest — hinges on an economic principle called data network effects (some prefer data-driven learning). These are distinct from the familiar direct network effects, like those that make a telephone more valuable as subscribers grow, because there are more people you can call. They are also different from indirect or second-order network effects, which describe how a growing number of buyers invites more sellers to a platform and vice versa — shopping on Etsy or booking on Airbnb becomes more attractive when more sellers are present.

Data network effects are a new form: Like the more familiar effects, the more users, the more valuable the technology is. But here, the value comes not from the number of peers (like with the telephone) or the presence of many buyers and sellers (as on platforms like Etsy). Rather, the effects stem from the nature of the technology: AI improves through reinforcement learning, predictions followed by feedback. As its intelligence increases, the system makes better predictions, enhancing its usefulness, attracting new users and retaining existing ones. More users mean more responses, which improve prediction accuracy, creating a virtuous cycle.

Take, for example, Google Maps. It uses AI to recommend the fastest route to your destination. This ability hinges on anticipating the traffic patterns in alternative paths, which it does by drawing on data that arrives from many users. (Yes, data users are also the suppliers.) The more people use the app, the more historical and concurrent data it accumulates. With piles of data, Google can compare myriad predictions to actual outcomes: Did you arrive at the time predicted by the app? To perfect the predictions, the app also needs your impressions: How good were the instructions? As objective facts and subjective reviews accumulate, network effects kick in. These effects improve predictions and elevate the app’s value for users — and for Google.
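The feedback loop behind the Maps example can be sketched as a simple online-learning update: the system shows a predicted ETA, observes the actual trip time, and nudges its estimate toward the error. This is a minimal illustration of the mechanism, not Google's actual algorithm; the learning rate and trip times below are hypothetical.

```python
# Minimal sketch of a data network effect: each user trip is both a
# prediction (the ETA shown) and a training signal (the actual arrival).
class EtaPredictor:
    def __init__(self, base_eta_min, learning_rate=0.2):
        self.eta = base_eta_min   # current ETA estimate, in minutes
        self.lr = learning_rate   # how strongly each trip's feedback adjusts it

    def predict(self):
        return self.eta

    def feedback(self, actual_min):
        # Move the estimate toward the observed travel time.
        self.eta += self.lr * (actual_min - self.eta)

predictor = EtaPredictor(base_eta_min=30)
observed_trips = [38, 41, 39, 40, 42]  # hypothetical user-reported trip times
for actual in observed_trips:
    predictor.feedback(actual)

print(round(predictor.predict(), 1))
```

With only five trips the estimate already drifts from the stale prior (30 minutes) toward observed conditions (around 40); the more users feed the loop, the faster and tighter that convergence — which is the virtuous cycle in miniature.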

Once we understand how network effects drive AI, we can imagine the new strategies the technology requires.

OpenAI and Microsoft

Let’s start with the marriage of OpenAI and Microsoft. When we beta-tested ChatGPT, we were impressed with its creative, human-like responses, but recognized it was stuck: It relies on a fixed corpus of data last collected in 2021 (so don’t ask about recent events or even the weather). Even worse, it lacks a robust feedback loop: You can’t easily flag hallucinatory suggestions (beyond the “thumbs down” response the company does allow). Yet by linking to Microsoft, OpenAI found a way to test the predictions. What Bing users ask — and how they rate the answers — are crucial to updating and improving ChatGPT. The next step, we imagine, is Microsoft feeding the algorithm with the vast cloud of user data it maintains. As it digests untold numbers of Excel sheets, PowerPoint presentations, Word documents, and LinkedIn resumes, ChatGPT will get better at recreating them, to the joy (or horror) of office dwellers.

There are at least three broad lessons here.

First, feedback is crucial. The benefits of AI intensify with a constant stream of user reactions. To remain intelligent, an algorithm needs a data stream of current user choices and ratings of past suggestions. Without feedback, even the best-engineered algorithm won’t remain smart for long. As OpenAI realized, even the most sophisticated models need to be linked to ever-flowing data sources. AI entrepreneurs should remember this.

Second, executives should routinize meticulous gathering of information to maximize the benefits of these effects. They ought to look beyond the typical financial and operational records. Useful bits of data can be found everywhere, inside and outside the corporation. They may come from interactions with buyers, suppliers, and coworkers. A retailer, for example, could track what consumers looked at, what they placed in their cart, and what they ultimately paid for. Cumulatively, these minute details can vastly improve the predictions of an AI system. Even infrequent data bits, including those outside the company’s control, might be worth collecting. Weather data helps Google Maps predict traffic. Tracking the keywords recruiters use to search resumes can help LinkedIn offer winning tips for job seekers.
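The retailer example can be made concrete: even three funnel events per session — viewed, carted, purchased — yield the conversion signals a prediction system learns from. A minimal sketch, with hypothetical session logs and event names:

```python
from collections import Counter

# Hypothetical per-session events a retailer might log:
# each tuple is (session_id, event), event in {"viewed", "carted", "purchased"}.
events = [
    ("s1", "viewed"), ("s1", "carted"), ("s1", "purchased"),
    ("s2", "viewed"), ("s2", "carted"),
    ("s3", "viewed"),
    ("s4", "viewed"), ("s4", "carted"), ("s4", "purchased"),
]

counts = Counter(event for _, event in events)

# Funnel conversion rates — the "minute details" that feed prediction.
view_to_cart = counts["carted"] / counts["viewed"]
cart_to_buy = counts["purchased"] / counts["carted"]
print(f"view->cart: {view_to_cart:.0%}, cart->buy: {cart_to_buy:.0%}")
```

Aggregated across millions of sessions, these per-step rates become the raw material for recommendations, demand forecasts, and pricing models.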

Finally, everyone should consider the data they share, intentionally or not. Facts and feedback are essential for building better predictions. But the value of your data can be captured by someone else. Executives should consider whose AI stands to benefit from the data they share (or allow access to). Sometimes, they should limit sharing. For instance, when Uber drivers navigate with the app Waze, they help Google, the owner, to estimate the frequency and length of ridehailing trips. As Google considers operating autonomous taxis, such data could be invaluable. When a brand like Adidas sells on Amazon, it allows the retail behemoth to estimate demand across brands (comparing to Nike) and categories (shoes) plus the price sensitivity of buyers. The results could be fed to a competitor — or benefit Amazon’s private label offerings. To counter that, executives can sidestep platform intermediaries or third parties. They can negotiate data access. They can strive to maintain direct contact with customers. Sometimes, the best solution may be for data owners to band together and share in a data exchange, like banks did when establishing ways to share data on creditworthiness.

When you consider AI network effects, you can better understand the technology’s future. You can also see how these effects, like other network effects, tend to make the rich even richer. The dynamics behind AI mean that early movers may be rewarded handsomely, and followers, however quick, may be left on the sidelines. It also implies that when one has access to an AI algorithm and a flow of data, advantages accumulate over time and can’t be easily surmounted. For executives, entrepreneurs, policymakers, and everyone else, the best (and worst) about AI is yet to come.

Empowering Strategic Procurement with Data Insights
https://s7clear.com/empowering-strategic-procurement-with-data-insights/ (Sun, 06 Feb 2022)

As the world wakes up to social and environmental injustice, a spotlight is shining on global supply chains. Countries around the world are requiring greater transparency in supply chains, across a range of topics, from conflict mineral mining to deforestation regulation.

The burden on supply chain leaders and procurement teams to disclose and report on the values of their suppliers is growing exponentially. The pressure to perform, drive efficiencies, and deliver a competitive advantage is now compounded by administrative overhead to not only ensure supply chains reflect the values of the purchasing organization but also to prove it to third parties.

This can feel like a mountain to summit. For the future of sourcing, we must strap on our boots and get climbing.

Appropriate application of data and analytics will be critical in the successful management of these emerging requirements for procurement teams worldwide. The complexity of global supply chains creates a unique challenge. Fortunately, new technologies and data standardization make this an achievable goal.

Leveraging Prequalification Questionnaires

As procurement organizations, we must stop thinking of a Prequalification Questionnaire (PQQ) as just a questionnaire. When we treat traditional responses to PQQ questions as data points, we unlock huge value from content stored in what is traditionally considered an administrative headache.

Digitization of a PQQ is not enough; building structure and intelligence into the design of a questionnaire, and the structure of its responses, enables analytics on a whole new scale. This drives enormous benefit for the procurement organizations facing new reporting strains, as well as for the suppliers sharing the burden.

There are three important elements to an efficient supply chain reporting framework:

  • Common data standard
  • Data sharing
  • Analytics and benchmarking

With these in place, procurement teams can easily extract required figures for annual reporting or legal disclosures. Procurement reporting tools can help teams instantly answer the following types of questions:

  • How many of my suppliers are SMEs?
  • How many of my suppliers have a modern slavery policy statement?
  • What percent of my supply base is operating an internally accredited carbon reduction scheme?
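Stored as structured data points rather than free-text answers, the three questions above reduce to one-line aggregations. A minimal sketch, assuming hypothetical field names and supplier records:

```python
# Hypothetical structured PQQ responses, one record per supplier.
suppliers = [
    {"name": "Acme Ltd",  "is_sme": True,  "modern_slavery_policy": True,  "carbon_scheme": False},
    {"name": "Borealis",  "is_sme": False, "modern_slavery_policy": True,  "carbon_scheme": True},
    {"name": "Cobalt Co", "is_sme": True,  "modern_slavery_policy": False, "carbon_scheme": True},
    {"name": "Delta plc", "is_sme": False, "modern_slavery_policy": True,  "carbon_scheme": True},
]

# Each reporting question becomes a simple aggregation over the records.
sme_count = sum(s["is_sme"] for s in suppliers)
slavery_policy_count = sum(s["modern_slavery_policy"] for s in suppliers)
carbon_pct = 100 * sum(s["carbon_scheme"] for s in suppliers) / len(suppliers)

print(sme_count, slavery_policy_count, f"{carbon_pct:.0f}%")
```

The same records answer ad hoc disclosure requests without re-surveying suppliers — which is exactly the value unlocked by structuring the questionnaire responses up front.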

Empowering Procurement to Take the Lead

As a procurement leader, you are probably familiar with requests to report on the supply base. Perhaps you’ve heard something along the lines of, “I’d like to add a figure to the annual report on how many of our suppliers can demonstrate they mine conflict-free.” It seems like a sensible thing to do, but so often leaves a procurement team scrambling to determine which suppliers would be relevant to contact, how to reach them, and how to collect and store the information.

More and more, legislative action includes provisions for not only reporting on individual company practices, but the practices within companies’ supply chains as well, meaning these sorts of requests will only grow with time. By requiring companies to report on the ethics reflected in their supply chains, these pieces of legislation seek to drive out “bad actors” typically lurking at tier three or four of the supply chain.

Conflict Minerals

EU legislation that took effect January 1, 2021 is aimed at restricting the trade of conflict minerals. It requires EU companies to import tin, tantalum, gold and tungsten only from responsible sources. From that same date, EU importers of these products have had to carry out due diligence on their supply chains.

SECR: Streamlined Energy and Carbon Reporting

SECR legislation, which came into force April 1, 2019, requires all large UK companies and large LLPs, as well as all quoted companies, to report on their annual energy use, greenhouse gas emissions and energy efficiency actions they have taken.

Modern Slavery

The UN Universal Declaration of Human Rights protects the rights of workers and individuals. The UK Modern Slavery Act includes a provision for transparency in supply chains and requires large companies to issue a slavery and trafficking statement annually. The statement must set out what steps the organization has taken to ensure there is no slavery in any part of its business, including its supply chains. France and Australia have followed suit with similar legislation while other European countries have legislation in the works.

Deforestation

The UK Government published proposals in August for legislation requiring large UK companies to perform supply chain due diligence and publish information certifying that certain commodities were produced in accordance with local laws pertaining to deforestation.

“This would mean publishing information to show where key commodities, including rubber, soya and palm oil, came from and that they were produced in line with local laws protecting forests,” reported Sky News.

Equality, Diversity, and Inclusion

Several countries have long-standing equality legislation that prevents discrimination based on a range of factors including gender, race or sexual orientation. Focus is now shifting to apply these principles to supply chains and promote diversity through procurement processes.

Driving Value in Your Procurement Team

Carbon accounting, as an example, is rapidly becoming the universal tool for procurement professionals to measure cost effectiveness and drive business performance both internally and within the supply chain.

It is well known that reducing an organization’s carbon footprint can deliver money straight to the bottom line. Organizations that focus on carbon reduction can save millions per annum through an internationally accredited ISO 14065 greenhouse gas certification program.

Collection and promotion of the appropriate data regarding emission levels, performance against targets and offsets are critical to measuring the impact carbon accounting can have throughout the supply chain. Streamlined data collection, reporting and sharing tools are key enablers of tomorrow’s sourcing reality.

Common Standard

International standards and responsible sourcing protocols are common reporting frameworks organizations can complete once and share many times. A procurement organization’s willingness to accept and adhere to common standards has the power to dramatically reduce supplier reporting obligations.

Suppliers often feel overburdened by the sheer volume of forms, certifications and audits they must complete. By accepting internationally recognized standard data templates and audit protocols, suppliers can spend less time achieving accreditation and more time on their core business, driving efficiency and increasing productivity.

Standards have traditionally dictated how questions are asked but have put less emphasis on how the responses are collected and stored. It is equally important that responses to standard questionnaires are stored properly so they can be treated as data points to be compared and analysed.

Data Sharing

It is not much use accepting a standard of data if that data is not easy to move between platforms and systems. API technology is standard and available with most ERP, P2P and S2P systems. Leveraging this means you can reduce the supplier burden further by taking data from accreditation bodies themselves, rather than requiring a supplier to submit the data separately for each buyer it wants to supply.
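A sketch of that flow, under hypothetical payload shapes: accreditation records arrive keyed by a shared supplier identifier and are merged into the buyer's supplier master data, so no supplier has to re-submit the same certificate to every buyer. The IDs, field names, and standard shown are illustrative assumptions, not a real accreditation body's API.

```python
# Buyer's supplier master data, keyed by a shared supplier ID.
master = {
    "SUP-001": {"name": "Acme Ltd"},
    "SUP-002": {"name": "Borealis"},
}

# Records as they might arrive via API from an accreditation body.
accreditation_feed = [
    {"supplier_id": "SUP-001", "standard": "ISO 14065", "valid": True},
    {"supplier_id": "SUP-002", "standard": "ISO 14065", "valid": False},
    {"supplier_id": "SUP-999", "standard": "ISO 14065", "valid": True},  # not our supplier
]

for record in accreditation_feed:
    supplier = master.get(record["supplier_id"])
    if supplier is not None:
        # Attach the certification once; every buyer can reuse the same record.
        supplier.setdefault("certifications", []).append(
            {"standard": record["standard"], "valid": record["valid"]}
        )

certified = [s["name"] for s in master.values()
             if any(c["valid"] for c in s.get("certifications", []))]
print(certified)
```

The join key is the crux: once buyer and accreditation body agree on a common supplier identifier, the supplier submits evidence to one place and the data flows to every system that needs it.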

Analytics and Benchmarking

With the two previous steps of a common data requirement and standard data sharing in place, procurement teams can easily analyse and draw conclusions from the data. Data science teams can demonstrate the strengths and weaknesses of suppliers individually and as groups — aligned to specific products, or by size, country of trading or other factors. Armed with these tools, procurement organizations can easily extract the information required for reporting, such as how many suppliers can demonstrate their raw materials are mined in conflict-free zones.
