Wednesday, September 11, 2024

LLMs vs. Traditional ML: Finding the Right Fit

Discover when to leverage LLMs and when traditional ML models shine. Explore use cases in financial services and learn how to optimize AI investments.

With the introduction of ChatGPT, the promise of generative AI (GenAI) and large language models (LLMs) has taken the world by storm. Even at this early stage, use cases for these technologies have emerged across industries, including financial services. At the same time, foundation models have democratized the ability to create and use AI tools via APIs, and implementing GenAI solutions at the enterprise level is now more of a software engineering challenge than a hard data-science problem.

Is traditional machine learning the answer?

Amid this GenAI fervor, some organizations may be tempted to deploy the latest LLM technology for problems that don’t require such a sophisticated solution. As LLMs become increasingly accessible, technology leaders must recognize the importance of evaluating the necessity and efficiency of using such advanced tools for specific problems—especially when a simpler solution may deliver a better outcome. Many software challenges organizations face today can be solved by traditional machine learning (ML) models.

To illustrate this point, here are scenarios in which traditional machine learning may be the answer.

There’s one specific problem to solve.

If an organization has a specific use case that traditional statistical learning can solve, an LLM may not be the ideal choice, particularly when latency and explainability requirements are strict. That said, beginning with an LLM can be a strategic move, especially for tasks that benefit from zero-shot or few-shot learning.

For instance, in sentiment analysis of customer reviews—using automation to determine whether feedback is positive or negative—the versatile learning capabilities of an LLM might simplify development. However, transitioning to traditional statistical methods may be more effective if this approach doesn’t yield the desired results. This strategy allows for the initial exploitation of an LLM’s broad applicability with the option to shift toward more conventional techniques, potentially bringing together the best of both worlds to create more refined solutions.
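To illustrate the "conventional" end of that spectrum, here is a minimal lexicon-based sentiment baseline. The word lists are hypothetical and a production system would use a trained classifier, but it shows how little machinery a well-scoped sentiment task can require once the LLM prototype has clarified the problem:

```python
# Minimal lexicon-based sentiment baseline (illustrative word lists only).
# A trained classifier would replace this, but the shape is the same:
# cheap, fast, and fully explainable.
POSITIVE = {"great", "excellent", "love", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "terrible", "hate", "refund"}

def sentiment(review: str) -> str:
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

A baseline like this runs in microseconds, is trivially auditable, and gives the team a concrete benchmark to compare an LLM prototype against.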

Cost and energy efficiencies are a priority.

Implementing a traditional machine learning model can be more cost-effective than developing a cutting-edge, customized LLM—particularly if an organization has already built the ML model. LLMs also use a significant amount of energy, so there’s an environmental factor for organizations to consider.

Although some ML models can take hours to train and still incur considerable costs, LLMs can be more expensive to host and utilize, depending on the foundation model, making this both a technical and a financial decision. In scenarios where the application is specific and well defined, developing a bespoke ML model can be more cost-effective and sustainable than adopting or building a comprehensive LLM.

This nuanced decision-making process underscores the importance of evaluating both the immediate and long-term impacts of choosing between LLMs and traditional ML models, factoring in efficiency, cost, and organizational values.

Traditional ML models have a track record of success in the organization.

When an organization's traditional ML models have already proved their worth, integrating new LLMs with them leverages the strengths of both to create a powerful, cohesive system. An LLM can respond to incoming queries itself or dynamically consult a more appropriate traditional model, enhancing efficiency and accuracy. For instance, in a customer service scenario, an integrated system can predict the likelihood of customer churn, suggest retention strategies, and identify upselling opportunities.

What’s particularly powerful about this setup is the ability to manage and utilize hundreds of specialized traditional ML models under the umbrella of a single LLM. This approach ensures that specific queries are addressed by the most capable model, from initiating conversations with a chatbot adept at recognizing signs of customer dissatisfaction to transitioning to solutions that enhance customer loyalty and increase sales. The true strength lies in the system’s capacity to intelligently navigate through a vast array of ML models and deliver targeted outcomes.
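A minimal sketch of this router pattern follows, with the LLM's intent classification stubbed out by a keyword check and the specialized models reduced to placeholders. All names, intents, and outputs here are hypothetical, not a reference implementation:

```python
# Sketch of the LLM-as-router pattern: one entry point dispatches each
# query to the specialized traditional model best suited to answer it.
from typing import Callable, Dict

def churn_model(query: str) -> str:
    # Placeholder for a real churn-prediction model.
    return "churn risk: high"

def upsell_model(query: str) -> str:
    # Placeholder for a real product-recommendation model.
    return "suggested product: premium tier"

MODELS: Dict[str, Callable[[str], str]] = {
    "churn": churn_model,
    "upsell": upsell_model,
}

def route(query: str) -> str:
    # In production, the LLM would classify intent; a keyword stub stands
    # in here so the dispatch logic is visible end to end.
    intent = "churn" if "cancel" in query.lower() else "upsell"
    return MODELS[intent](query)
```

The registry scales naturally: adding a hundredth specialized model is one more entry in the dictionary, while the LLM remains the single conversational front door.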

Strict governance is required.

Payment fraud detection is a prime example of an area where explainable models are essential. Statistical methods are preferable in such cases because they provide the transparency and traceability that compliance and auditability demand. LLMs, although powerful, are primarily designed for text processing and lack the inherent governance frameworks needed for sensitive applications like fraud detection.
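The explainability point can be made concrete with a linear fraud score, where each feature's contribution to the decision is an explicit number an auditor can trace. The features and weights below are purely illustrative, not a real scoring model:

```python
# Illustrative linear fraud score: the prediction decomposes into
# per-feature contributions, which is exactly the auditability that
# governance-heavy settings require. Weights are hypothetical.
WEIGHTS = {"amount_zscore": 1.2, "new_merchant": 0.8, "foreign_ip": 1.5}
BIAS = -2.0

def fraud_score(tx: dict):
    """Return the total score and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * tx.get(f, 0.0) for f in WEIGHTS}
    return BIAS + sum(contributions.values()), contributions
```

Because every term in the sum is inspectable, an analyst can state precisely why a transaction was flagged, something no LLM can currently offer with the same rigor.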

On the other hand, summarizing customer reviews is an ideal use case for LLMs because these models excel at text summarization and classification without extensive retraining or data collection. Their pre-existing knowledge allows them to categorize and analyze reviews efficiently, making them a strong tool for businesses that want to understand customer feedback quickly and accurately.

Traditional ML applications for financial services

There are many areas in financial services where organizations can solve problems with traditional ML models instead of developing and leveraging a brand-new LLM. Specific examples where traditional machine learning is likely the right answer in a financial services setting include:

• Recommending the best financial products to account holders.

• Analyzing accounts to predict customer churn, i.e., the likelihood that account holders will leave a financial institution.

• Enhancing network security by analyzing IP addresses and blocking users that pose a threat.
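As a sketch of the network-security bullet above, a simple per-IP volume rule (the threshold is hypothetical) shows how far lightweight, traditional techniques can go before an LLM even enters the picture:

```python
# Flag IPs whose request volume exceeds a threshold — a toy stand-in for
# the kind of anomaly detection that traditional methods handle well.
from collections import Counter

def flag_threats(request_log, threshold=100):
    """request_log is a list of (ip, path) pairs; return suspicious IPs."""
    counts = Counter(ip for ip, _ in request_log)
    return {ip for ip, n in counts.items() if n > threshold}
```

A real deployment would use richer features and a trained anomaly model, but the point stands: nothing about this problem calls for a generative model.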

There’s a lot of noise right now about GenAI and LLMs, and my team is among the many excited about the use cases these technologies offer. At the same time, resource constraints are a reality for many organizations. In the quest for innovative and sophisticated AI technologies, I encourage technology and AI leaders to remember Occam’s Razor—the principle suggesting that the simplest explanation is often the best.

As the field of AI surges forward, especially within the rapidly evolving landscape of large language models, it’s thrilling to witness the continued excellence in traditional machine learning research. The burgeoning synergy between advanced LLMs and foundational ML techniques is particularly exciting and promises innovative solutions that leverage the strengths of both worlds.
