How I Combined GPT With Classical Optimization to Make Retail AI 85% Faster
In large retail operations, category management teams spend significant time deciding which product goes onto which shelf and in which order. Shelf space is some of the most expensive real estate in retail: according to Consumer Packaged Goods (CPG) research, a 10 percent improvement in shelf availability can drive a 5 percent sales lift. Yet shelf updates happen infrequently because the optimization engine behind them is slow, and by the time a static planogram gets updated, demand may have already shifted due to local events, promotions, weather, and other factors.
The optimization engine works well until the product catalog gets messy. I tested a different approach: use ChatGPT to do the thinking part, then hand the results to the optimization engine. The result was 85 percent faster operation with barely any loss in solution quality.
Artificial intelligence models like GPT or Claude are great at understanding context and categorizing products; however, they are not good at mathematical optimization. Classical algorithms excel at optimization but struggle with messy, mixed product data. Combining the two, each applied to its strength, gets the best of both.
Traditional algorithms, mainly those using the optimization approach, have been the backbone of retail shelf space allocation and beyond. This method achieves great results with homogeneous data. For instance, when products share common widths, the optimization approach can sharply reduce computation time. However, store inventories include products with varying dimensions from multiple vendors and manufacturers that share no common characteristics. On a planogram shelf containing items with mixed dimensions, the traditional algorithm's shortcuts break down.
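To make the shortcut concrete, here is a minimal sketch of the GCD trick on a knapsack-style shelf DP. The widths, profits, and shelf size are hypothetical; the point is that when all widths share a common divisor, scaling by the GCD shrinks the DP table (capacity 120 collapses to 12 below), while arbitrary mixed widths would leave nothing to scale.

```python
from functools import reduce
from math import gcd

def allocate_shelf(widths, profits, shelf_width):
    """0/1 knapsack DP over shelf width. Dividing everything by the GCD
    of the widths shrinks the DP table when products share a divisor."""
    g = reduce(gcd, widths + [shelf_width])
    w = [x // g for x in widths]      # scaled widths
    cap = shelf_width // g            # scaled shelf capacity
    best = [0] * (cap + 1)
    for wi, pi in zip(w, profits):
        for c in range(cap, wi - 1, -1):
            best[c] = max(best[c], best[c - wi] + pi)
    return best[cap]

# Homogeneous widths (all multiples of 10): the DP runs over 12 states, not 120.
print(allocate_shelf([30, 40, 50, 60], [3, 5, 6, 8], 120))  # 14
```

With heterogeneous widths such as 29, 41, and 53 cm, the GCD falls to 1 and the table stays at full size, which is exactly where the classical approach slows down.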
Traditional mathematical approaches also cannot recognize that certain product categories or arrangements yield better results. This limitation matters in the retail industry, where planogram updates are frequent.
3-Phase Hybrid Approach
To address these challenges, here is a three-phase hybrid approach that combines large language models with traditional dynamic programming accelerated by GCD scaling.
Phase 1: Intelligent LLM categorization
Here we use an LLM such as OpenAI's GPT or Anthropic's Claude to categorize the given products. Unlike traditional clustering algorithms, which depend on numerical attributes, LLMs analyze multiple factors such as product width, product category, and semantic relationships. The LLM identifies patterns and groups similar products together so that the optimization runs most efficiently within each group.
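A minimal sketch of what this phase could look like. The prompt wording, product fields, and JSON reply format below are hypothetical illustrations, not the exact prompt used in the research; in production the prompt would go to an OpenAI or Anthropic chat endpoint, so here we only build the prompt and parse a canned reply to show the expected shape.

```python
import json

def build_grouping_prompt(products):
    """Ask the LLM to cluster products by width, category, and semantics.
    `products` is a list of dicts with 'name', 'category', 'width_cm'."""
    lines = [f"- {p['name']} ({p['category']}, {p['width_cm']} cm)" for p in products]
    return (
        "Group these products so items in a group share similar widths and "
        'related categories. Reply with JSON: {"groups": [["name", ...], ...]}\n'
        + "\n".join(lines)
    )

def parse_groups(llm_reply):
    """Validate the model's JSON reply into a list of product-name lists."""
    groups = json.loads(llm_reply)["groups"]
    if not all(isinstance(g, list) for g in groups):
        raise ValueError("malformed grouping reply")
    return groups

# Canned reply standing in for the LLM call:
reply = '{"groups": [["cola 330ml", "lemonade 330ml"], ["chips 150g"]]}'
print(parse_groups(reply))  # [['cola 330ml', 'lemonade 330ml'], ['chips 150g']]
```

Validating the reply matters in practice: LLM output is not guaranteed to be well-formed JSON, so a retry or repair step typically sits behind `parse_groups`.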
Phase 2: Parallel Traditional Optimization
In this phase, we allocate shelf space to each product group based on its profit potential, then apply the GCD-optimized algorithm independently to each group. This breaks the single large problem into multiple smaller ones, which can be solved in parallel. Optimizing the groups simultaneously is what delivers most of the overall speedup.
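A sketch of the parallel phase under stated assumptions: the groups, widths, profits, and per-group shelf budgets below are hypothetical, and each group's budget is fixed up front for illustration rather than derived from profit potential. Within a group the widths share a divisor, so the per-group GCD scaling applies.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
from math import gcd

def solve_group(args):
    """GCD-scaled knapsack DP for one product group."""
    widths, profits, budget = args
    g = reduce(gcd, widths + [budget])
    w, cap = [x // g for x in widths], budget // g
    best = [0] * (cap + 1)
    for wi, pi in zip(w, profits):
        for c in range(cap, wi - 1, -1):
            best[c] = max(best[c], best[c - wi] + pi)
    return best[cap]

# Hypothetical groups from Phase 1, each with its own slice of shelf space:
groups = [
    ([30, 40, 50], [3, 5, 6], 90),   # beverages: 90 cm of shelf
    ([15, 25, 35], [2, 4, 5], 60),   # snacks: 60 cm of shelf
]
# Threads keep the sketch simple; a ProcessPoolExecutor suits CPU-bound DP at scale.
with ThreadPoolExecutor() as pool:
    group_profits = list(pool.map(solve_group, groups))
print(group_profits, sum(group_profits))  # [11, 9] 20
```

Because the groups are independent, wall-clock time is driven by the largest group rather than the whole catalog, which is where the 85 percent speedup comes from.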
Phase 3: Integration
The final phase combines the results from all product groups into a comprehensive retail shelf space allocation plan, ensuring the overall solution satisfies all feasibility constraints.
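A minimal sketch of the integration step, assuming each group's solver returns its chosen items as `(name, width, profit)` tuples (a hypothetical format for illustration). The merge simply concatenates the per-group picks and re-checks the one constraint that spans groups: total width.

```python
def integrate(group_plans, shelf_width):
    """Merge per-group allocations into one planogram and verify that the
    combined widths still fit the physical shelf."""
    combined = [item for plan in group_plans for item in plan]
    used = sum(w for _, w, _ in combined)
    if used > shelf_width:
        raise ValueError("infeasible: combined groups overflow the shelf")
    profit = sum(p for _, _, p in combined)
    return combined, used, profit

# Hypothetical per-group winners from Phase 2:
beverages = [("cola", 30, 3), ("juice", 50, 6)]
snacks = [("chips", 15, 2), ("nuts", 25, 4)]
plan, used, profit = integrate([beverages, snacks], shelf_width=150)
print(used, profit)  # 120 15
```

Because each group was solved against its own budget, the feasibility check here mostly guards against budgets that were mis-allocated in Phase 2.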
Performance Analysis: Quality Trade-Offs vs. Speed
Testing across different problem scales shows how the hybrid approach's advantage grows with problem size:
- 20 products: 24.1 percent faster execution (1.89 ms vs. 2.49 ms)
- 50 products: 81.45 percent faster execution (5.09 ms vs. 27.39 ms)
- 100 products: 85.9 percent faster execution (10.68 ms vs. 75.74 ms), per Figure 1
To put this in perspective: for a category manager updating a 100-product shelf, cutting computation from 76 milliseconds to 11 milliseconds makes it fast enough to re-optimize multiple times a day as sales data come in. According to IHL Group, out-of-stocks cost retailers an estimated $1 trillion globally per year.
The results show that the hybrid approach scales exceptionally well as problem size increases.
Figure 1: Comparing classical and hybrid approach for profit achievement and computation efficiency for 100 products
While the approach yields significant improvements in execution speed, it comes with a trade-off: it sacrifices 3 percent to 9 percent of optimal revenue for the gain in computational efficiency.
- Profit: the hybrid approach achieves 91 percent to 93 percent of the classical approach's optimal profit, based on Figure 1.
- Space: it uses 91 percent to 98.9 percent of available shelf space vs. 100 percent for the classical approach, based on Figure 2.
The trade-off arises because the division of products may not always be optimal: the LLM's grouping strategy is intelligent, but it is not guaranteed to be mathematically optimal for profit maximization.
Applicability: Beyond Retail Optimization
This hybrid approach isn't restricted to retail optimization. It can extend to domains such as:
- Supply Chain Management: efficient inventory allocation and distribution planning.
- Financial Portfolio Optimization: LLMs analyze market sentiment and asset relationships; algorithms then optimize for risk-adjusted return.
- Transportation and Logistics: LLMs understand geographic relationships between destinations; algorithms then optimize the routes.
The key requirements for successful application include the following:
- Heterogeneous data that can be grouped by attributes, with a need for real-time to near-real-time solutions.
- Underlying semantic relationships that allow the problem to be split into smaller sub-problems eligible for parallel processing.
Implementation and Future Directions
The hybrid approach introduces additional complexity and cost: LLM (AI model) API costs and infrastructure costs for parallel processing. However, these costs are often offset by the large reduction in computation required to solve large-scale problems in real time.
Future research includes tighter integration between LLMs and classical algorithms, as well as the use of domain-specific fine-tuned models for improved categorization.
Conclusion
The hybrid approach combining LLMs and classical algorithms represents a shift in how we approach complex optimization problems. By pairing LLM capabilities such as pattern recognition and semantic understanding with classical algorithms, we get a practical solution to the growing challenge of solving large-scale problems over heterogeneous, real-world data.
The retail shelf space allocation case study demonstrates that this hybrid approach can achieve dramatic computational speedups (up to 85.9 percent faster) while maintaining acceptable quality (91 percent to 93 percent of optimal). Most importantly, it makes solving large-scale problems in real time possible, which wasn't practical with the legacy approach.
As organizations grow and accumulate ever more heterogeneous data requiring rapid optimization, hybrid AI architectures offer a promising path forward by combining AI and classical computational methods.
This article is based on peer-reviewed research published in the European Journal of Information Technologies and Computer Science (DOI: https://doi.org/10.24018/compute.2025.5.4.155).
Code available at: https://github.com/RaviTeja444/shelf-space-comparison-approach
Ravi Teja Pagidoju is a professional MBA student at Campbellsville University with experience building AI/ML systems for retail optimization and supply chain. He holds an MS in Computer Science and has published research on hybrid LLM-optimization approaches in IEEE and Springer publications.