We had a session at last week's RBB Economics conference in Sydney on "How can a Chief Economist's team enhance competition law enforcement? - an examination of different approaches", with a panel made up of people who should know: Lilla Csorgo, chief economist on the competition side of our Commerce Commission; Graeme Woodbridge, chief economist for the whole caboodle at the ACCC; and RBB partner Derek Ridyard to talk about the European Commission's experience.
One conclusion that emerged was that competition authorities had experimented over the years with various internal structures, ranging from a separate 'consultancy on call', through the EC's model of an internal quality check unit, through to integration in (or at least close involvement with) the investigation teams. Of the various models, integration with the investigation teams, or, at a minimum, close and early involvement with the cases at hand, seemed to be working best.
That didn't surprise me: much the same evolution has been happening in the private sector. Groups of economists, lurking away from the main business of the company in some side-alley of corporate HQ, used (amazingly, in retrospect) to be quite common in the banking world, for example, but got shaken out in the 80s and 90s, and rightly so. Employers understandably wanted to know exactly what value-add the economists were bringing to the party. I remember one exercise where I had to go round the various operating 'line' divisions and get them to contribute their share of the economics budget (which they did, by the way): it was quite a good mechanism to try and unearth the end users' need for, and satisfaction with, economics input.
Getting early involvement in the matters at hand is critical. There are few episodes more dispiriting, and inefficient, than an investigating team getting a long way down the track, only to have an economist helicoptered in, late in the piece, with an alternative theory of the case that might necessitate junking much of the work to date. It's not always the economist that's the problem: legal counsel can come late to projects as well and be equally disruptive. Either way, an integrated multi-disciplinary team from day one is a better way to go.
It's also professionally better for the economists themselves: most want to make a contribution and help come to a decision, and don't want to be sidelined waiting for the phone to ring. Graeme Woodbridge said that one quality he especially looks for is the ability to get off the fence and make the call, and if you're going to be effective in a competition authority (or anywhere else), you should have that roll-up-your-sleeves-and-give-your-best-take mindset.
One thing that became apparent was that, when you look at the work the economists actually do, econometrics is on the outer, for a variety of reasons, including the uphill struggle to get results through the legal process that have margins of error around them (as I described here). There is some quantitative analysis going on - I gathered from Simon Bishop's presentation that the EC is still doing (and may be overfond of) merger modelling - but it's deterministic: the assumptions you make drive the results that come out. Stochastic analysis of real-world data doesn't happen, or doesn't happen much.
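To make the "deterministic" point concrete, here's a minimal sketch - my own illustration, not anything the EC actually runs - of a standard upward pricing pressure screen of the kind used in merger assessment. Every number going in is an assumption, and the number coming out is fully determined by them: no data, no margin of error.

```python
# A minimal, illustrative GUPPI (gross upward pricing pressure index) screen.
# Every input below is an assumption; change it and the "result" changes with it.
# There is no data in sight, and hence no standard error.

def guppi(diversion_ratio, rival_margin, rival_price, own_price):
    """GUPPI for product 1 when its owner acquires the owner of product 2."""
    return diversion_ratio * rival_margin * (rival_price / own_price)

# Assumed (i.e. made up) inputs:
d12 = 0.25        # share of product 1's lost sales assumed to divert to product 2
margin2 = 0.40    # product 2's assumed gross margin
p1, p2 = 10.0, 12.0   # assumed prices

print(f"GUPPI = {guppi(d12, margin2, p2, p1):.1%}")   # -> 12.0%, driven entirely by the assumptions
```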
I may have got a bit obstreperous at this point. It dawned on me that, these days, economically literate governments wouldn't dream of taking big economic policy decisions without some numerate analysis of the likely effects. Could you imagine, for example, a government deciding to whack up the minimum wage, or bring in an emissions trading scheme, or do pretty much anything of economic significance, without taking account of the empirical work on its potential effects? Well, yes, I know you can, if you're thinking about dumb-ass governments, but in most civilised places we're all into evidence-based policies, as we should be.
So why do competition authorities think it's okay to let mergers of national importance go through without proper quantitative evaluation? And I had what might be a Big Idea: if you're challenging a competition authority's decision, why wouldn't you argue that there's an onus on the authority to do the hard yards on the econometrics? It's settled case law in New Zealand that the Commerce Commission should quantify where it can: should it be able to get away with defining a market, for example, without having a go at empirical estimation of elasticities? Especially as, in this brave new world of big data, there's much more opportunity to get at the data you need (as I've argued before).
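For what the alternative might look like, here's a minimal sketch, on made-up assumptions (a hypothetical file of weekly prices and quantities), of the sort of empirical elasticity estimate I have in mind. A real market-definition exercise would have to worry about price endogeneity and demand shifters before leaning on the number, but at least the answer comes with its margin of error attached.

```python
# A minimal sketch of empirical own-price elasticity estimation, assuming a
# hypothetical transactions file with weekly 'price' and 'quantity' columns.
# A log-log OLS fit gives a rough elasticity; a serious exercise would need
# instruments or controls for price endogeneity before trusting the number.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("weekly_sales.csv")          # hypothetical data: price, quantity

X = sm.add_constant(np.log(df["price"]))      # log price plus an intercept
y = np.log(df["quantity"])                    # log quantity
fit = sm.OLS(y, X).fit()

elasticity = fit.params["price"]              # slope of log q on log p
print(f"Estimated own-price elasticity: {elasticity:.2f} "
      f"(std err {fit.bse['price']:.2f})")    # the margin of error the courts worry about
```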
So I put this to the panel, and may have been slightly over the top in arguing that these days it's well nigh indefensible for a competition agency to do purely qualitative analysis. Got me nowhere: Graeme Woodbridge made the reasonable point that sometimes the data isn't there, and if it is, it's susceptible to different interpretations, and that's fair enough. But econometrics has advanced a lot these last few years, and it's a good deal better at teasing out relationships than it used to be. If the data isn't there, that's one thing, but if it is, I strongly suspect that, properly interrogated, it can tell us important stuff. And competition agencies shouldn't be making important decisions without seeing what it says.
While I was at it, I thought I'd ask the panel something else: if they had the luxury of researching economic issues that might be important in years to come, even if they're not immediately relevant to cases in the works, what would they bone up on? I didn't get a terrific series of answers, but Lilla Csorgo suggested creeping acquisitions, and assessment of the potential (typically post-merger) for tacit coordinated behaviour, where (if I've read my handwriting right) Lilla said the current state of economic thinking is "very poor".
Panel sessions are tricky things that can fizz or flop: this one worked, so well done to RBB's George Siolis who came up with a good set of discussion questions.