
If It Works, Great. Else, We “Fail Fast”


Khushbu Raval
Khushbu is a Senior Correspondent and content strategist with a special focus on DataTech and MarTech. She is a keen researcher in the tech domain and is responsible for strategizing social media scripts to optimize the collateral creation process.

Ines Ashton, Director of Advanced Analytics at Mars, shares valuable insights on scaling AI projects past proof of concept.

Datatechvibe spoke to Ines Ashton, Director of Advanced Analytics at Mars, about scaling AI projects past proof of concept.

“Mars has a very strong culture of innovation, which helps unblock AI research. It uses an agile approach to experiment and build POCs,” said Ines Ashton, the Director of Advanced Analytics at Mars. She discusses how data leaders can scale ML projects past proof of concept, including creating a common language, addressing financial blockers, and overcoming data challenges.

Excerpts from the interview:

What advice would you give data leaders to scale AI projects past proof-of-concept?

In my opinion, there are three building blocks to scaling AI & ML.

  • Creating a common language: It is worth beginning this conversation by pointing out the difference between “true” AI, a machine that can pass the Turing Test, meaning it can hold a conversation with a human behind a closed door without the human realising it is a machine, and the “current/2023” definition, in which AI is an “umbrella” term for computers performing tasks commonly associated with humans by leveraging simple or sophisticated mathematical models.

    Also, it is worth noting that when the word AI is used, fewer than 0.1% of the time it refers to “true” AI; the rest of the time it is the umbrella term. So, to avoid unnecessary confusion, we need to encourage common definitions. This is not easy, especially in a large organization such as Mars, but you can start small: first your immediate team, then a D&A community, then a business community, and so on. For example, in my team, we never use the word AI, as we acknowledge its true meaning, so you are more likely to hear us use the terms ML or Deep Learning, or even reference specific model names, rather than the umbrella term.

    In cases where definition alignment is impossible, I recommend avoiding all complex terminology and instead using specific examples that illustrate the topic in a way that is easy to understand. For example, at Mars, we have an X-ray image classification model that helps radiologists identify 53 thorax/abdominal X-ray findings in cats and dogs.
  • Removing financial blockers impacting development or scaling: At Mars, I use three methods to deal with this issue:
    • Fail fast: Mars has a very strong culture of innovation, which helps unblock AI research. We use an agile approach to experiment and build POCs; if it works, great, else we “fail fast.”
    • Strength in numbers: Mars is also very large, so even a small benefit becomes significant when it is repeated millions of times (e.g., the X-ray model). In other words, every ML project has huge potential at Mars, so cost is less of a blocker.
    • Bolt-on: At Mars, we “bolt on” machine learning during the standard upgrade process. As we have digitized the workspace, we have used this opportunity to build models on top of existing data sets, which means the cost is shared and ML is not a standalone, expensive deliverable.
  • Overcoming any data challenges that might arise during the build or scaling: A few techniques that I use with my team are:
    • Understand the business problem and assess if the available data has any limitations. Only progress if the data limitation is understood or can be mitigated. Ensure that the data does not hold a known bias.
    • Use proxy data for missing data points. We don’t live in a perfect world, so don’t strive for perfection; good enough is the goal, and proxy data can play a very powerful role in this space.
    • Be clear about what conclusions you can or can NOT draw from the data. AI is not a miracle worker, so ensure your stakeholders know what business decisions can be drawn from the data.
    • Don’t cut corners; test the model extensively before going live.
    • Don’t hang your hat on one number; put ranges/confidence intervals on the results (a minimal sketch follows this list).
    • Sometimes, waiting and collecting more data and then running the models is the best option; if you have only one chance to convince the business, don’t waste it by using flawed data.
    • Apply human oversight: ask whether the outcome passes the “does this make sense in the real world?” test.
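
To make the point about ranges concrete, here is a minimal sketch (with invented data, not tied to any Mars model) of reporting a bootstrap confidence interval around a model metric instead of a single point estimate.

```python
# Minimal sketch: report a bootstrap confidence interval rather than a single
# point estimate. The data below is invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are held-out labels and model predictions (~85% accurate).
y_true = rng.integers(0, 2, size=500)
y_pred = np.where(rng.random(500) < 0.85, y_true, 1 - y_true)

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, alpha=0.05):
    """Return the accuracy point estimate and a (1 - alpha) bootstrap CI."""
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        stats.append(np.mean(y_true[idx] == y_pred[idx]))
    point = float(np.mean(y_true == y_pred))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (float(lo), float(hi))

acc, (lo, hi) = bootstrap_accuracy_ci(y_true, y_pred)
print(f"Accuracy: {acc:.3f} (95% CI: {lo:.3f} to {hi:.3f})")
```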

In summary, to scale AI projects past proof of concept, data leaders must start using a common language, address the financial blockers, and anticipate the data obstacles that come with scaling. By doing so, data leaders can ensure that their AI projects deliver value to the organization and are adopted across the enterprise.

What advice would you give data leaders when investing in a technology stack and choosing partners?

At Mars, IT oversees the technology stack; hence, I have spent less time reflecting on this topic. However, I advise picking a tech stack and sticking with it for three to five years. Some disjointed legacy systems at Mars can make data orchestration and analysis challenging. In addition, once you reach a certain size, upskilling your workforce on a new system more frequently than every three to five years becomes very hard.

When it comes to choosing a partner, my advice is:

  • Choose partners who understand the business: Look for partners who have experience working in your industry and understand your business needs. Partners with a deep understanding of the business can help identify the right technology stack and provide valuable insights into using data to drive business value. For example, I work specifically in the Supply Chain space, and many vendors say they can do it, but very few have hands-on experience.
  • Treat your vendors as extensions of your team: To succeed, you need to work as one team; you are both in it together, so onboard your vendors extensively in the business and take them with you to critical meetings. Treat them as an extension of your team to build long-lasting relationships based on respect and responsibility.

How can business leaders ensure trust in predictive analytics to prepare to build future resilience?

To ensure trust in predictive analytics and prepare for future resilience, business leaders should consider the following steps:

  • Define clear objectives: Define clear objectives for predictive analytics initiatives and ensure that they align with the organization’s strategic goals. Objectives should be specific, measurable, achievable, relevant, and time-bound (SMART).
  • Data is your currency: Predictive analytics is only as good as the data it uses. You should ensure that the data used for predictive analytics is high quality, accurate, and reliable. Data should be regularly audited and monitored for quality and completeness.
  • Be transparent: Document the methodology, assumptions, and limitations used in the analysis. This can build trust among stakeholders and increase their confidence in the results.
  • Co-create with stakeholders: Involve stakeholders in the predictive analytics process to ensure they understand how the analysis is conducted and to gather feedback on the results. This can help build trust and increase the adoption of the results.
  • Continuously improve the models: Predictive analytics models should be continuously evaluated and refined to ensure they remain accurate and relevant. New data becomes available daily, so the model can always be improved. Use an MLOps team to ensure the models are supported and operating at their full potential (see the sketch after this list).
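
As an illustration of what continuous model improvement can look like in practice, below is a minimal, hypothetical sketch of a recurring check an MLOps pipeline might run: re-score the deployed model on the newest data batch and flag it for retraining when performance drifts past a tolerance. The names and thresholds are assumptions for the example, not a description of Mars’s setup.

```python
# Hypothetical sketch of a periodic model-quality check in an MLOps workflow.
# Thresholds, names, and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class EvalResult:
    metric: float      # e.g., accuracy on the newest data batch
    baseline: float    # metric recorded when the model was last deployed
    tolerance: float   # acceptable degradation before retraining is triggered

    @property
    def needs_retraining(self) -> bool:
        return self.metric < self.baseline - self.tolerance

def check_model(latest_metric: float, baseline: float, tolerance: float = 0.03) -> str:
    result = EvalResult(metric=latest_metric, baseline=baseline, tolerance=tolerance)
    if result.needs_retraining:
        return "Flag for retraining: performance drifted below tolerance."
    return "Model within tolerance: keep serving."

# Example: the model scored 0.91 at release; this week's batch scores 0.86.
print(check_model(latest_metric=0.86, baseline=0.91))
```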

How can data and analytics help measure the success of enterprise-wide digital transformation?

Data and analytics are critical in measuring the success of enterprise-wide digital transformation efforts. Below are some examples:

  • Define Key Performance Indicators (KPIs): To measure the success of digital transformation, it is important to establish a set of KPIs that can be tracked and monitored over time. These KPIs can include revenue growth, cost savings, customer satisfaction, employee productivity, and operational efficiency.
  • Use data to establish a baseline: Before beginning any digital transformation effort, it is important to establish a baseline of current performance. This can involve collecting data on existing business processes, trends, patterns, and other relevant metrics. This baseline can be used to track progress over time and measure the success of digital transformation efforts.
  • Monitor progress in real time: Digital transformation is an ongoing process, and it is important to monitor progress in real time. This can involve setting up dashboards and alerts that provide visibility into key metrics and trends. Real-time monitoring enables teams to make data-driven decisions and adjust strategies as needed.
  • Leverage analytics to gain insights: Data analytics can provide valuable insights into customer behavior, business processes, and other aspects of the business. By analyzing data, teams can identify patterns, uncover hidden opportunities, and make more informed decisions. Analytics can also help teams identify areas where digital transformation efforts fall short and take corrective action.
  • Measure ROI: Digital transformation efforts can be expensive, so measuring these initiatives’ return on investment (ROI) is important. By comparing digital transformation costs to the benefits achieved, teams can determine whether their efforts generate a positive ROI (a simple worked example follows this list).
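
As a simple worked example of the ROI point, the snippet below compares transformation costs with the quantified benefits achieved; all figures are invented for illustration.

```python
# Simple ROI illustration for a digital transformation initiative.
# All figures are invented for the example.
def roi(total_benefits: float, total_costs: float) -> float:
    """ROI as a fraction: (benefits - costs) / costs."""
    return (total_benefits - total_costs) / total_costs

costs = 1_200_000     # e.g., platform licenses, integration, training
benefits = 1_650_000  # e.g., cost savings plus incremental revenue from the initiative
print(f"ROI: {roi(benefits, costs):.1%}")  # -> ROI: 37.5%
```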
