Price management is no small task for B2B pricing owners. While machine learning can drive smarter pricing strategies, it is less straightforward to understand how it works in a B2B context, to leverage it for highly relevant pricing guidance, and ultimately to build a more strategic pricing approach. In this four-part series, Zilliant Director of R&D, Pricing Science, Amir Meimand shares how predictive analytics, prescriptive analytics and machine learning contribute to a smart pricing strategy for B2B companies.
Machine Learning Background
In the 1950s, Arthur Samuel, a pioneer of machine learning (ML), wrote the first game-playing program. The program learned checkers by playing thousands of games, eventually becoming good enough to beat skilled human players. ML is built on the hypothesis that a machine can learn the way the human brain does: instead of following explicit instructions, the computer uses a framework and many examples to derive the rules and logic it needs to make decisions independently. This set of rules and logic is called the ML model.
Prescriptive Analytics and B2B Pricing Science
The goal of B2B pricing science is to optimize pricing strategies by using prescriptive analytics to model historical behavior and shape future behavior. Although pricing science goes beyond predicting historical pricing behavior, predictive analytics is the foundation of the process. The first step in creating a pricing strategy, developing a robust and reliable prediction model, is crucial: a model that fails to understand historical behavior or capture market dynamics produces irrelevant price recommendations.
In B2B companies, pricing behavior depends on many factors: product type, industry, location, annual customer spend, order size, seasonality and more. With so many factors at play, hundreds of millions of unique sales circumstances are possible, each of which matters to the business. As a result, a predictive model should not only capture the effect of each factor from past data, but also learn over time and adjust as it encounters new circumstances.
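As a rough illustration of a model that "learns over time," weights can be updated one transaction at a time. This is a minimal sketch using synthetic data and a plain stochastic-gradient update, not Zilliant's actual method; the feature encoding and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoding: each transaction is a small feature vector
# (product type, region, order size, ...) plus a realized discount.
w = np.zeros(4)   # model weights, updated as transactions arrive
lr = 0.05         # learning rate

def update(w, x, y, lr):
    # One stochastic-gradient step: nudge the model toward each new
    # observation so it adjusts as market conditions drift.
    pred = w @ x
    return w - lr * (pred - y) * x

# Simulate a stream of transactions generated by unknown "true" weights.
true_w = np.array([0.4, -0.2, 0.1, 0.3])
for _ in range(2000):
    x = rng.normal(size=4)
    y = true_w @ x + rng.normal(scale=0.05)
    w = update(w, x, y, lr)

print(np.round(w, 2))
```

Because each incoming transaction moves the weights slightly, the model tracks the underlying pricing behavior without ever being retrained from scratch.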
Fewer Assumptions = A Smarter B2B Pricing Solution
Traditional predictive models rely heavily on assumptions about the underlying data distribution and try to identify structures and patterns based on those assumptions. Although these models rest on mathematically proven theory, real-world data rarely conform to the assumed distribution.
For example, we may develop an outlier detection algorithm based on the assumption that discount levels for a specific SKU are normally distributed. This assumption might hold for the business overall, as shown in Fig. 3: the distribution of discount levels for one specific SKU nationwide is approximately normal, centered around 40%. The model would therefore flag any transaction with a discount below 15% as overpriced, and any transaction with a discount above 75% as underpriced.
However, at the regional level, data may deviate from a normal distribution. As shown in Fig. 4, the distribution for the same SKU in a non-competitive region is skewed to the left. Based on this distribution, any transaction discounted more than 50% is underpriced, while transactions discounted less than 5% are valid (i.e., the model does not flag any transactions as overpriced).
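The nationwide case above can be sketched in a few lines. This is illustrative only, using synthetic discount data rather than real transactions, with the spread chosen so the flagging thresholds roughly match the figures discussed above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic nationwide discount levels for one SKU: roughly normal,
# centered around 40% (values are fractions of list price).
discounts = rng.normal(loc=0.40, scale=0.125, size=5000)

mu, sigma = discounts.mean(), discounts.std()

# Under the normality assumption, flag transactions far from the mean:
# unusually small discounts look overpriced, unusually large ones underpriced.
overpriced = discounts < mu - 2 * sigma
underpriced = discounts > mu + 2 * sigma

print(f"mean discount: {mu:.0%}")
print(f"flagged: {overpriced.sum()} overpriced, {underpriced.sum()} underpriced")
```

The regional case is exactly where this sketch breaks down: a skewed distribution violates the normality assumption, so symmetric thresholds around the mean flag the wrong transactions.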
In contrast, ML makes fewer assumptions and instead explores the data for structures and patterns, validating what it finds against new data sets rather than relying on theoretical proofs. More importantly, ML models can learn, grow and develop on their own when exposed to new circumstances, adjusting their predictions or even creating new ones based on patterns the original developer may never have envisioned.
Noise Versus Signal in B2B Pricing Datasets
Determining the right level of flexibility is a critical task in prediction modeling. A good model should be flexible enough to capture every signal, yet not so flexible that it becomes counter-productive by picking up noise. This matters even more in the big-data world, because larger data sets tend to be noisier, which can lead the model to identify structures and patterns that are merely spurious correlations and therefore unlikely to recur.
Three fundamental ML components separate signals from noise to increase the reliability and accuracy of predictions:
- Feature extraction: which parts of the data are most useful for modeling?
- Regularization: how heavily should the model weight each part of the data?
- Cross-validation: how should the model's accuracy and robustness be measured and tested?
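Two of these components, regularization and cross-validation, can be demonstrated together in a short sketch. This is a generic ridge-regression example on synthetic data, not Zilliant's model; the feature counts and candidate regularization strengths are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pricing data: 200 transactions, 20 candidate features,
# but only the first 3 actually drive the discount (the rest are noise).
n, p = 200, 20
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:3] = [0.5, -0.3, 0.2]
y = X @ true_w + rng.normal(scale=0.1, size=n)

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: the penalty `lam` shrinks weights,
    # discouraging the model from fitting the noise features.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(X, y, lam, k=5):
    # k-fold cross-validation: hold out each fold in turn and measure
    # error on data the model never saw during fitting.
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return np.mean(errs)

# Pick the regularization strength with the best out-of-fold error.
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lams, key=lambda lam: cv_mse(X, y, lam))
print("best lambda:", best)
```

The key point is that the regularization strength is chosen by held-out error, not training error, so the selected model is the one that generalizes rather than the one that memorizes noise.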
Although ML models mostly outperform traditional models from both a theoretical and an experimental perspective, study after study shows that humans and machines working together produce far better results than either can produce alone.
The best approach to creating a truly smart pricing program is to not only employ best-in-class ML algorithms, but also combine them with pre-defined rules that reflect current B2B pricing best practices. Zilliant customers, for example, benefit from more than a decade of in-house data science expertise, as well as B2B pricing experts who work with them to develop nimble pricing strategies tailored to their unique business circumstances. In this way, Zilliant customers leverage accurate, reliable models that are highly specialized for the unique and complex dynamics of B2B pricing.
Speak Your Mind: What are your essentials for a smart B2B pricing strategy? Join the conversation on LinkedIn.
Stay tuned for Amir’s next post in which he will discuss different classes of ML algorithms and how they can be applied to B2B pricing.