How AI-based “nowcasts” try to parse economic uncertainty

This post was originally published at https://www.emergingtechbrew.com/stories/2022/06/17/how-ai-based-nowcasts-try-to-parse-economic-uncertainty?mid=13749b266cb1046ac6120382996750aa

This month, the S&P 500 officially hit bear-market territory—meaning a fall of 20+ percent from recent highs—and investors everywhere are looking for some way to predict how long the pain could last.

Machine learning startups specializing in “nowcasting” attempt to do just that, by analyzing up-to-the-minute data on everything from shipping costs to the prices of different cuts of beef. In times of economic volatility, investors and executives have often turned to market forecasts, and ML models can offer a way to absorb more information than ever into these analyses.

One example: Complete Intelligence is an ML startup based outside Houston, Texas, that specializes in nowcasting for clients in finance, healthcare, natural resources, and more. We spoke with its founder and CEO, Tony Nash, to get a read on how its ML works and how the startup had to adjust its algorithms due to market uncertainty.

This interview has been edited for length and clarity.

Can you put the idea of nowcasting in your own words—how it’s different from forecasting and the nature of what you do at Complete Intelligence?

So Complete Intelligence is a globally integrated machine learning platform for market finance and planning automation. In short, we’re a machine learning platform for time series data. And nowcasting is using data up to the immediate time period to get a quick snapshot of what the near-term future holds. You can do a nowcast weekly, daily, hourly, or even minute by minute, and the purpose is really just to understand what’s happening in markets, or in a company, or whatever your outlook is, right now.
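
To make the idea concrete, here is a minimal sketch of a nowcast in Python: a hypothetical price series and a simple linear extrapolation of only the most recent observations. It illustrates the "data up to the immediate time period" idea, not Complete Intelligence's actual models.

```python
# Toy nowcast: fit a linear trend to only the most recent observations and
# project one step ahead. Illustrative only; not Complete Intelligence's model.
import numpy as np

def nowcast_next(values, window=14):
    """Estimate the next value from the last `window` observations."""
    recent = np.asarray(values[-window:], dtype=float)
    t = np.arange(len(recent))
    slope, intercept = np.polyfit(t, recent, 1)   # simple linear trend on recent data
    return slope * len(recent) + intercept        # one step beyond the window

# Hypothetical daily prices for some tracked item
prices = [101.2, 101.5, 101.1, 102.0, 102.4, 102.3, 103.1,
          103.4, 103.2, 103.9, 104.5, 104.4, 105.0, 105.3]
print(round(nowcast_next(prices), 2))
```

Re-running something like this every day (or hour) with the newest data appended is, in spirit, what turns a forecast into a nowcast.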

And what sort of data do you use to fuel these predictions?

We largely use publicly available datasets. And we’re using billions of data items in our platform to understand how the world works…Macroeconomic data is probably the least reliable data that we use, so we use it for maybe a directional look, at best, at what’s happening. Currencies data is probably the most accurate data that we use, because currencies trade in such narrow bands. We use commodities data, from widely traded ones like oil and gold, to more obscure ones like molybdenum and some industrial metals. We’re also looking at individual equities and equity industries, and we track things like shipping times for goods—shipping times…are usually pretty good indicators of price rises.
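
As an illustration of how a signal like shipping times can serve as a leading indicator, the sketch below checks at which lag a shipping-time series correlates most strongly with subsequent price changes. The data is synthetic and the relationship is built in; this is not Complete Intelligence's pipeline.

```python
# Sketch of testing a leading indicator: at which lag do shipping times
# correlate most strongly with later price changes? Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
shipping_days = rng.normal(30, 3, size=200)          # synthetic shipping times
# Price changes that respond to shipping times four days earlier
price_changes = 0.05 * np.roll(shipping_days, 4) + rng.normal(0, 1, 200)

def lagged_corr(leader, target, lag):
    """Correlation between the leading series and the target shifted forward by `lag`."""
    return np.corrcoef(leader[:-lag], target[lag:])[0, 1]

for lag in range(1, 8):
    print(f"lag {lag}: corr = {lagged_corr(shipping_days, price_changes, lag):.2f}")
```

The correlation peaks near the built-in four-day lead, which is the kind of relationship that makes a series like shipping times useful ahead of price moves.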

Who are your clients, and how are the nowcasts used in practice?

Our clients range from investors and portfolio managers, to healthcare firms and manufacturing firms, to mining and natural resources firms. So they want to understand what the environment looks like for their, say, investment or even procurement—for example, how the current inflation environment affects the procurement of some part of their supply chain.

In fact, we’re talking to a healthcare company right now, and they want to nowcast over the weekend for some of their key materials. In an investment environment, of course, people would want to understand how, say, expectations and other variables impact the outlook for the near-term future, like, days or a week. People are also using us for continuous budgeting—so revenue, budgeting, expenses; CFOs and heads of financial planning are using us…to understand the 12- to 18-month outlook of their business, [so they don’t have to have an annual budgeting cycle].

Tell me about how the AI works—which kinds of models you’re using, whether you’re using deep learning, etc.

There are basically three phases to our AI. During the pre-process phase, we collect data and look for anomalies, understand data gaps and how data behaves, classify data, and those sorts of things.
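
For a sense of what that pre-processing step can involve, here is a minimal sketch that exposes gaps and flags crude anomalies in a daily series using pandas. The data, thresholds, and method are hypothetical, not Complete Intelligence's.

```python
# Illustrative pre-processing pass: expose missing days and flag outliers
# with a robust z-score (median absolute deviation). Toy example only.
import pandas as pd

def preprocess(series: pd.Series, z_threshold: float = 3.5) -> pd.DataFrame:
    """Return the series on a daily index with gap and anomaly flags."""
    daily = series.asfreq("D")                       # missing days become NaN
    med = daily.median()
    mad = (daily - med).abs().median()               # median absolute deviation
    robust_z = (daily - med) / (1.4826 * mad)
    return pd.DataFrame({
        "value": daily,
        "is_gap": daily.isna(),
        "is_anomaly": robust_z.abs() > z_threshold,
    })

idx = pd.date_range("2022-06-01", periods=10, freq="D").delete([3, 4])  # two missing days
values = pd.Series([50.0, 51.0, 50.5, 49.8, 120.0, 50.2, 50.7, 51.1], index=idx)
print(preprocess(values))
```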

Then we go into a forecasting phase, where we use what’s called an ensemble approach: multiple algorithmic approaches to understand the future scenarios for whatever we’re forecasting. Some of those algorithms are longer-term and fundamentals-based, some of them are shorter-term and technical-based, and some of them are medium-term. And we’re testing every forecast item on every algorithm, individually and in combination. For example, we may forecast an asset like gold using three or four different forecast approaches this month, and then two forecast approaches next month, depending on how the environment changes.
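
The ensemble idea can be sketched roughly as follows: several forecasting approaches are backtested on recent data, weighted by how well they did, and then combined. The component models here are simple stand-ins, not the firm's algorithms.

```python
# Stripped-down ensemble: backtest several approaches on recent data,
# weight each by inverse error, and combine their next-step forecasts.
# Illustrative only; not Complete Intelligence's algorithms.
import numpy as np

def last_value(series):                  # naive carry-forward ("technical")
    return float(series[-1])

def moving_average(series, n=5):         # short-term smoother
    return float(np.mean(series[-n:]))

def linear_trend(series, n=20):          # longer-term trend extrapolation
    recent = np.asarray(series[-n:], dtype=float)
    slope, intercept = np.polyfit(np.arange(len(recent)), recent, 1)
    return slope * len(recent) + intercept

def ensemble_forecast(series, models, backtest=10):
    """Weight each model by its error on recent one-step backtests, then combine."""
    errors = np.zeros(len(models))
    for i in range(backtest):
        cut = len(series) - backtest + i
        for j, model in enumerate(models):
            errors[j] += abs(model(series[:cut]) - series[cut])
    weights = 1.0 / (errors + 1e-9)
    weights /= weights.sum()
    forecast = sum(w * m(series) for w, m in zip(weights, models))
    return forecast, weights

series = list(100 + np.cumsum(np.random.default_rng(1).normal(0.1, 1.0, 120)))
forecast, weights = ensemble_forecast(series, [last_value, moving_average, linear_trend])
print(round(forecast, 2), weights.round(2))
```

Re-running the backtest before every forecast is what lets the mix of approaches change from one period to the next, as described above.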

And then we have a post-process that really looks at what we’ve forecasted: Does it look weird? Are there obvious errors in it—for example, negative numbers or that sort of thing? We then circle back if there are issues…We’re retesting and re-weighting the methodologies and algorithms with every forecast that we do.
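
A rough sketch of the kind of sanity checks a post-process like that might apply; the helper and thresholds here are hypothetical.

```python
# Illustrative post-processing checks: flag forecasts that go negative when
# they shouldn't, or that imply an implausible jump from the last observation.
def validate_forecast(last_observed, forecast, max_jump=0.5, allow_negative=False):
    """Return a list of human-readable issues; an empty list means it looks sane."""
    issues = []
    if not allow_negative and forecast < 0:
        issues.append(f"negative forecast: {forecast:.2f}")
    if last_observed != 0:
        jump = abs(forecast - last_observed) / abs(last_observed)
        if jump > max_jump:
            issues.append(f"jump of {jump:.0%} versus last observed value")
    return issues

print(validate_forecast(last_observed=105.3, forecast=-12.0))
# ['negative forecast: -12.00', 'jump of 111% versus last observed value']
```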

We’ve had highly unusual market conditions over the past two years. Since AI is trained on data from the past, how have these conditions affected the technology?

You know, there’s a lag. I would say that in 2020, we lagged the market changes by about six weeks. It took that amount of time for our platform to catch up with the magnitude of change that had happened in the markets. Now, back then, we were not iterating our forecasts more than twice a month. Since then, we’ve started to iterate our forecasts much more frequently, so that the learning aspect of machine learning can really take place. But we’ve also added daily interval forecasts, so it’s a much higher frequency of forecasting, in smaller intervals, because we can’t rely on, say, monthly intervals as a good input in an environment this volatile.
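
In code terms, the shift he describes is roughly the difference between re-weighting the models twice a month and re-weighting them every day as new data arrives. A toy loop with hypothetical models and data might look like this:

```python
# Sketch of higher-frequency iteration: every day, fold the newest data point
# into each model's running error and recompute the weights. Illustrative only.
import numpy as np

def refit_weights(running_errors):
    """Lower recent error -> higher weight."""
    inv = 1.0 / (np.asarray(running_errors) + 1e-9)
    return inv / inv.sum()

new_observations = [101.0, 103.5, 99.8, 104.2, 106.0]   # hypothetical daily data
running_errors = np.array([1.0, 1.0, 1.0])              # one entry per model

for day, actual in enumerate(new_observations, start=1):
    # A real system would generate fresh forecasts from each model here;
    # we fake three forecasts around the actual value for illustration.
    forecasts = actual + np.array([0.3, -1.2, 2.5])
    running_errors = 0.8 * running_errors + 0.2 * np.abs(forecasts - actual)
    weights = refit_weights(running_errors)
    print(f"day {day}: weights = {weights.round(2)}")
```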