Seeing the Forest for the Trees

It’s always a pleasure to be in Boston — such a pleasure, in fact, that I can even forget it’s Patriots territory. As an Eagles fan, I learned an interesting equation last year: The joy of winning is inversely proportional to the frequency of winning. So it’s comforting to think that while Pats fans have probably already forgotten this year’s win, we’ll still be celebrating well into our 90s …

Speaking of celebrating, I want to wish Management Science a happy 65th anniversary. I was honored that David asked me to speak on the occasion, and very happy to oblige. What caught my attention in particular was the emphasis on integrating theory and practice, because the two live at opposite ends of the spectrum far too often, and there’s not much use for a seemingly perfect model that crumbles under real-world conditions.

I speak from experience. It’s happened to me more times than I care to remember. As a young civil engineer, designing what I thought was an indestructible bridge, only to see it collapse in the testing lab. As a not-so-young university president, watching the economy plummet in the wake of the financial crisis, taking our sublime three-year strategic plan with it. In a host of management positions, introducing structural change, only to find that the organism most resistant to change is an organization’s culture.

So I’m very pleased to have the opportunity to compare notes, and war stories, of theory gone wrong in practice.

Before I do, however, I’ll issue the standard Fed disclaimer that the views I express today do not necessarily reflect those of anyone else in the Federal Reserve System.

Any time you’re dealing with complex structures — in engineering, in operations, in policy — it’s important to take a step back every once in a while to be sure you’re seeing the forest for the trees. And the more intricate the machine, the more necessary that reevaluation is. Because with so many interdependent parts — built on top of, or intertwined with, each other — there’s risk to the whole structure if one of those pieces isn’t working.

So it makes sense to kick the tires every now and again.

Or, in the words of Bertrand Russell, “to hang a question mark on the things you have long taken for granted.”

In the case of management science and operations research, I think the first principles we have to revisit are whether we’re using the right tools for the right jobs, and whether we’ve designed them with the right users in mind.

The Fallibility of Models

There are incredible opportunities from the advances we’re seeing in management decision-making — particularly when it’s enabled by technology — but there are downsides as well. That’s the case with any complex system, especially ones that are programmatic.

Take fintech, where streamlined processes have expanded the opportunities for smaller loans, faster approvals, and greater market reach and penetration. But there’s a downside to algorithmic lending. Even in its early stages, we’re seeing signs of discrimination in how borrowers are selected.

I think the overall risk to any complex system, whether it’s technology based or not, lies in moving too far toward process and away from critical thinking.

In my experience, you run into trouble when you start thinking your model can do all the work.

To give a non-tech-based example: Every once in a while, a discussion pops up about shifting monetary policy to a rules-based regime, essentially advocating for the automation of the FOMC. That regime would put decisions on autopilot, without human intervention or judgment factoring into the equation. And while those rules are important, and inform a lot of our decisions, they shouldn’t be followed robotically.

It’s important to know what you don’t know. Complete control only works in a system where all the variables are known entities. One of my first engineering jobs was designing control systems for trains. That’s an operation that can be set on autopilot. We can control the trains. We can see the layout of the tracks. We know where they’ll be and when they’ll be there.

In other words, we know everything we need to know to keep them from running into each other.

Monetary policy isn’t that precise. Despite some of the smartest thinkers and the best models, we can’t pin down any of the variables with certainty.

We’re not 100 percent sure what inflation is. We can only estimate the natural rate of unemployment when it’s in the rearview mirror — and even then, we don’t universally agree. Productivity is notoriously difficult to measure, and that affects our understanding of GDP growth.

To operate monetary policy mechanically, we’d need a level of accuracy that just isn’t possible. We’d need a lot more data, a lot more frequently, with a lot more precision than the laws of economics allow.

The Taylor rule is an extraordinary contribution to the economics profession. But I wouldn’t get on a train that’s run by it.
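To give a sense of why, here is a minimal illustrative sketch of the canonical Taylor (1993) rule. The figures are hypothetical round numbers chosen purely for illustration, not Fed estimates or forecasts; the point is only that the rule’s prescription depends on inputs, like the neutral real rate and the output gap, that we can only estimate.

```python
# A minimal sketch of the canonical Taylor (1993) rule, using purely
# hypothetical round numbers rather than any actual Fed data or forecasts.

def taylor_rule(inflation, neutral_real_rate, inflation_target, output_gap):
    """Taylor (1993): i = pi + r* + 0.5*(pi - pi*) + 0.5*(output gap)."""
    return (inflation
            + neutral_real_rate
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Same formula, two plausible sets of estimates for the unobservable
# inputs (the neutral real rate r* and the output gap).
rate_a = taylor_rule(inflation=2.0, neutral_real_rate=2.0,
                     inflation_target=2.0, output_gap=0.0)
rate_b = taylor_rule(inflation=2.0, neutral_real_rate=1.0,
                     inflation_target=2.0, output_gap=-2.0)

print(f"Prescribed rate, estimate A: {rate_a:.2f}%")  # 4.00%
print(f"Prescribed rate, estimate B: {rate_b:.2f}%")  # 2.00%
# Reasonable disagreement about r* and the output gap moves the
# "automatic" policy prescription by two full percentage points.
```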

Humanity’s Competitive Edge

Monetary policy should be more like a suspension bridge, with flexibility built into the design. It has a core structure, but it can bend to account for real-world events.

One of the reasons monetary policy can’t just be mechanized is that decisions have to be informed by human experience and past behavior — as in any discipline, models may work in theory, but not necessarily under the messy complications of real-life conditions.

If the Fed had followed the regime that some advocate, interest rates would have risen by several percentage points in the years following the Great Recession. Instead, models provided guidance, but people made the judgment — and wherever you place yourself on the ideological policy spectrum, I don’t think anyone would argue for high rates in a slow recovery.

At the heart of this is what can and can’t be mechanized. Some things just require human insight.

Automation in general has been the subject of a lot of conversation recently, particularly in the context of how it’s affecting the employment landscape. Whether it’s policy decisions, management models, or the workplace, the underlying lesson is the same: There are some things technology can’t do. The things that make us quintessentially human can’t be replicated or automated, because machines can’t think. Artificial intelligence is only as smart as the data we input. Models are only as effective as we make them.

So looking at the future of management science, I think success lies in approaches that emphasize the human component as much as, if not more than, the programmatic.

That’s actually grounded in the profession’s genesis.

Operations research was born of World War II, when the British and U.S. militaries tapped the scientific community for its expertise in processes, models, and analysis. There’s no question that deploying data analytics to the battlefield contributed to the Allied victory, whether through strategically allocating weaponry and resources, breaking codes, or building networks.

If there’s any area that’s subject to the unexpected, it’s the theater of war. And while adding the scientific, algorithmic component was crucial to success, so was human judgment.

When the war ended, it seemed clear to researchers that the techniques could be beneficial in the civilian world, as companies and organizations grew larger and more complex and the computer age dawned.

When the journal launched 65 years ago, the discipline was still in its infancy, but it had undergone incredibly rapid development in the years after the war.

That era is often characterized by the popular culture of the time. And, in fact, 1954 was the year that Elvis recorded his first single, On the Waterfront was in theaters, and families across the country tuned in to Father Knows Best. At its scheduled time — there were no commercial-free streaming plans — and on an actual television — which was roughly the size and weight of a small rhinoceros.

That all looks especially quaint from the vantage of 2019. But it was, in fact, a time of incredible change — socially, economically, and technologically — and the post-War boom fundamentally changed the business landscape. Which is likely what sped the development of management science as a discipline.

Overcorrecting the Pendulum Swings

We’re in a similar situation now, as innovation in all sectors hits a rapid pace.

As a species, we tend to overreact to pendulum swings — sometimes to correct for them, and sometimes to keep pace with them. Technology has evolved so quickly that digitizing whatever we can has become the default, without stopping to consider whether we should.

The all-robot hotel in Japan comes to mind — not only could half the mechanized staff not perform their basic functions, but the entire fleet also depended on human assistance and maintenance.

I should make clear that this isn’t a cranky, get-off-my-lawn rant about technology. I am wholly in favor of technological advancement and innovation. It’s fundamental to the American economy, and it’s something we do exceptionally well. It’s connected us to one another in a way that would’ve been unimaginable even a decade or two ago. It’s made life easier, and better, for the vast majority of people.

What I am saying is that we should put it to its best use.

Operations research deals with incredibly complex systems populated by incredibly complicated entities: human beings. Both the theory and the practice should take a step back, kick the tires, and be sure they’re using the right tools — technological or otherwise — for the right jobs.

Now is a good time for that, because the overall race forward in technological capability is going to complicate the workplace even more. And in an increasingly tight labor market, it’s more important than ever to think about whom organizations hire, and how.

As technology advances, the underlying skills that make workers adaptable — the “soft skills” like creativity, communication, and critical thinking — are going to matter more than any checklist of technical proficiencies. No single competence is going to survive that evolution.

Instead, the best candidates will have a core set of skills that can evolve with the market, and we’ll likely see an overall shift to constant training and continuing education.

Existing professionals will need continual upskilling, whether it’s to keep up with industry standards or just learn the office’s new software. People preparing for the workforce will need both proficiency in current programs and to develop skills that will help them adapt with the technology as it evolves.

From the employers’ end, that means investing in the workforce and committing to lifelong learning. Simply replacing outdated skills with new ones just isn’t efficient or cost-effective, and it may not even be enough. They’ll need workers who can adapt to a dynamic and regularly changing environment, and that means investing in people.

The End … or the Beginning?

Educational institutions will also need to consider new models, not just because technology is forcing it, but because they’ll be in a position to offer the lifelong learning that workers need.

The changes happening in the workforce present both opportunities and risks. Either way, they signal very interesting times coming our way.

I think this time of increasing automation and computerization will only emphasize the importance of an array of skills, and add richness to the management science profession.

And as we continue the research and execute the practice, I hope we’ll remember to take the odd step back, every once in a while, to hang that question mark on our assumptions. Just to be sure.

