
Part 3 - Where are we going?

In part 2 of this series, we spoke about the rise of low-code and SaaS tools across businesses and the predictability this brings to data schemas. We also mentioned the overlap in data across businesses of similar industry and maturity. These two points, coupled with significant investment and advances in data infrastructure, have allowed us to make launching an end-to-end data platform as simple and quick as launching a website. However, we want to go further: we are discovering we can do more in pursuit of our vision to change the way businesses operate.

What if small and medium businesses could plan for future events or be proactive about making decisions in their business? What impact would this have on their longevity and growth? Sadly, this opportunity is currently only available to enterprise companies due to high infrastructure costs, large team requirements, and the time it takes to produce value.

At Cerebrium, we are trying to change this and give SMEs predictive and prescriptive insights! To do this in a scalable way our process is as follows:

  1. We chat to our customers about their biggest challenges and assess if there is a use case we can execute.
  2. If yes, we assess if the customer has the appropriate data to achieve positive results.
  3. Lastly, we assess how common this problem is across our customers and whether our customers have the same/similar data sources to feed into the model we have developed.

As a result, the next SME that signs up to Cerebrium, looking to solve a similar problem in their business, can simply connect their data sources and obtain access to the same type of insights for their business.

Currently, we have models such as inventory optimization, revenue forecasting, and support agent scheduling to reach SLAs, with more in the pipeline. By watching how companies with 10 to 100 employees use these models, we have learned two things:

  1. Smaller companies don’t mind lower model accuracy.
    • This sounds unusual. However, if you are familiar with machine learning, you know how difficult it is to get a model to be consistently 80%+ accurate across a variety of data sets.
    • Smaller companies still find low-accuracy models informative about future events, whereas before they had no indication of possible future events at all.
    • This might sound like it would lead to larger inefficiencies, but we have seen our customers make less drastic changes or decisions when model accuracy is low.
  2. Adding more data sources increases model accuracy; however, the effect of a particular data source on accuracy differs by data source and use case.
    • For example, Google Ads seems to provide the largest increase for models related to user acquisition and leads, with an average accuracy gain of 3-6%; however, this depends on the amount of ad history available.
    • When it comes to predicting the likelihood of a lead converting from trial to paid, Google Ads data is still useful, but its impact on accuracy is much smaller, even though it can be a large, high-converting lead generator for some companies.

Being able to predict future outcomes based on a change in variables is useful, but businesses and markets change quickly! With multiple data sources constantly collecting data, a business has daily opportunities and multiple levers it can pull to effect change, and we help businesses take advantage of these opportunities. At Cerebrium, we create relationships between the different data sources a business connects and run the following high-level process (in beta):

  1. We monitor a number of metrics from your connected data sources, looking for anomalies.
  2. When an anomaly is found, we identify the metrics that impact the anomalous metric and the likelihood of achieving a significant result by acting on each of them.
  3. Once the ranking is complete, we determine whether these actions would have a larger impact if executed in combination, or whether combining them might have a negative effect.
  4. We return the top options to the user.
  5. A natural extension from here would be to execute a reverse ETL instruction back into the user's tools, but this is currently beyond our scope.
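To make the process above concrete, here is a minimal Python sketch of steps 1 through 4. The specific techniques (z-score anomaly detection, absolute correlation as an impact proxy) and all metric names are simplifying assumptions for illustration, not Cerebrium's actual implementation:

```python
from statistics import mean, stdev
from itertools import combinations

def find_anomalies(series, threshold=3.0):
    """Step 1: flag points more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if sigma and abs(x - mu) / sigma > threshold]

def rank_drivers(target, drivers):
    """Step 2: rank candidate metrics by absolute correlation with the target metric
    (a crude stand-in for a proper impact/likelihood model)."""
    def corr(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        denom = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
        return cov / denom if denom else 0.0
    return sorted(drivers, key=lambda name: abs(corr(drivers[name], target)), reverse=True)

def top_actions(target, drivers, k=2):
    """Steps 3-4: consider single metrics and combinations of the strongest
    candidates, then return the top-k options to the user."""
    ranked = rank_drivers(target, drivers)
    options = [(name,) for name in ranked] + list(combinations(ranked[:3], 2))
    return options[:k]
```

In practice each step would be far more sophisticated (seasonality-aware anomaly detection, causal impact estimation rather than raw correlation), but the control flow mirrors the numbered steps above.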

Our goal is not to replace analysts and/or data scientists. Just as SaaS has not replaced developers but empowered them to focus on more pressing issues, we hope to do the same given the global shortage of data specialists. In machine learning, machines are better than humans at certain tasks; in many instances, however, that is not the case - at least not yet.

Our initial research and work with clients have been highly motivating, and the value-to-cost ratio clients receive is approximately 20x. Our north star metric is the time it takes to produce value for our customers, and we will keep striving to reduce it. If you are interested in joining our team on this journey, check out our careers page or follow us on social media to keep up with our progress.

Author

Michael Louis

