This was only the second time it's been held - as explained here it's a new initiative to add a regional industrial organisation event to the big American and European ones - and it's already attracting some big-name speakers and a good attendance. Over the next three years it will be held in Melbourne, Tokyo and Singapore. Nice to see our local academics presenting too: from AUT, Richard Meade (finance) and Lydia Cheung (mergers and divestments), and from the University of Auckland, Simona Fabrizi (asymmetric information, and again on innovation), Erwann Sbaï (auctions), Tava Olsen (incentives) and Steffen Lippert (learning and entry).
It was heavily academic-focused, with a smattering of regulators and economic consultants, so you had to be prepared for a fair amount of pure economic theory - but then, if you're at this sort of conference, you'll probably be comfortable feeding your inner quant. As always with the fancy models, some are down the cleverness-demonstration end, and some are analyses of well-off-the-beaten-path esoterica. But it's worth sitting through the sessions, because it's conferences like this that can present the path-breaking innovations - in ten years' time, they'll be the standard way we think about issues like two-sided markets, platforms, auction and market design, vertical integration, or oligopoly (all of which were session topics at the conference).
My own bent is towards empirical applications of the new ideas, so I especially enjoyed the first keynote presentation. Harvard's Ariel Pakes presented on 'Just Starting Out: Learning and Equilibrium in a New Market' (there's a recent version here). The new market was the UK wholesale electricity market for 'frequency response': Ariel's modelling showed how the players learned to play the new game, and that they got pretty good at it reasonably quickly, with an end result, once they'd got their heads around it, close to an efficient competitive outcome.
This is the sort of market where you might have expected strategic behaviour, and early on there were indeed high bids which looked like invitations to follow. In the event, tacit or other collusion didn't happen: I asked Ariel why, and the simple answer was that there were too many competing participants for it to hold (29 in all, with the top 10 holding 84% of capacity and an HHI of 1,100). The paper says that "One area where [this kind of] learning model may be particularly helpful is in simulating counterfactual outcomes, a type of analysis increasingly used by regulatory authorities", and that's true: I'd also like to see it applied to things like our wholesale electricity market.
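For the mechanically minded, the HHI is just the sum of squared market shares in percentage points: an equal split across 29 firms would score about 345 (10,000/29), so the 1,100 figure reflects the skew towards the big players. Here's a minimal Python sketch, using hypothetical shares of my own invention (not the actual UK frequency-response figures) that match the 29-participant, 84%-top-10 description:

    # Minimal HHI sketch: the index is the sum of squared market shares,
    # with shares expressed in percentage points. The shares below are
    # made up for illustration - they are NOT the actual UK
    # frequency-response market data.

    def hhi(shares_pct):
        """Herfindahl-Hirschman Index from shares expressed in percent."""
        assert abs(sum(shares_pct) - 100) < 1e-6, "shares should sum to 100"
        return sum(s ** 2 for s in shares_pct)

    # Hypothetical split: the top 10 firms hold 84% between them, and the
    # remaining 19 share the other 16% evenly - 29 participants in all.
    top10 = [25, 13, 10, 9, 8, 6, 4, 4, 3, 2]   # sums to 84
    fringe = [16 / 19] * 19                      # sums to 16

    print(round(hhi(top10 + fringe)))   # ~1134 - same ballpark as the reported 1,100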
The other keynote was Columbia's Yeon-Koo Che on 'Optimal Sequential Decision with Limited Attention', which at first glance sounded like it didn't compete with the alternative option of a late breakfast. But it was excellent (there's a version here), and witty: his general theme was how people make decisions when they have a finite budget to spend on verifying their assumptions, and one of his examples was how people should select the media they read in a world of highly partisan "fake news".
I took two things away. One is that when you aren't especially sure about the likelihood of something, you should look for corroborative evidence, but when you're pretty confident about its likelihood (or unlikelihood), you should look for contradictory evidence. Whether that's a Great Universal Law that applies in all circumstances, I don't know, but it makes intuitive sense. And the other was that sometimes longer deliberation leads to worse decisions, something that ought to be bludgeoned into the brains of some of our policy-making and law-making institutions.
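The paper's model is a dynamic one of Poisson signal arrivals, but a crude static Bayes calculation (my own illustration, not the paper's) gives a flavour of that first takeaway: once you're already fairly confident, one more corroborating observation barely moves your belief, while a contradicting one moves it a lot - the contradictory evidence is where the informational action is.

    # Toy Bayes calculation (my illustration, not the paper's dynamic
    # Poisson model): how far does one observation move your belief in a
    # hypothesis H, depending on whether it corroborates or contradicts
    # your prior? The signal strengths (80% vs 20%) are arbitrary choices.

    def posterior(prior, p_signal_given_h, p_signal_given_not_h):
        """Bayesian update of P(H) after observing one signal."""
        joint_h = prior * p_signal_given_h
        joint_not_h = (1 - prior) * p_signal_given_not_h
        return joint_h / (joint_h + joint_not_h)

    for prior in (0.5, 0.7, 0.9):
        corroborate = posterior(prior, 0.8, 0.2)   # signal more likely under H
        contradict = posterior(prior, 0.2, 0.8)    # mirror-image signal
        print(f"prior {prior:.1f}: corroboration moves belief by "
              f"{corroborate - prior:+.3f}, contradiction by {contradict - prior:+.3f}")

At a 50/50 prior the two move beliefs symmetrically (+0.300 and -0.300 here); at a 0.9 prior, corroboration adds only +0.073 while contradiction subtracts -0.208.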
I lucked into a particularly good choice from the parallel session menu, on 'Topics in empirical IO'. Lawrence White (NYU's Stern School of Business) challenged the conventional wisdom that the US economy has become more concentrated (in an HHI sense). Ken Krechmer (University of Colorado Boulder) showed how control of standards (eg on how mobile phones communicate) matters for international trade and for potential use of market power. And Stephen Martin (Purdue) went into utility theory and how compensating losers out of winners' gains mightn't be as easy as it looks in the textbooks: one implication was that price discrimination might turn out to be more welfare-reducing than usually thought.
My own contribution was to moderate the panel session, 'Big data: friend or foe of competition and consumers?', with panellists Reiko Aoki, Commissioner at Japan's Fair Trade Commission, Reuben Irvine, acting chief economist on the competition side of the Commerce Commission, and Greg Houston, principal at Australian consultancy HoustonKemp (which kindly sponsored both the session and the overall conference).
We'd agreed to take a bit of a risk. After Greg had presented some results showing how big data can be used to better refine geographical markets and to show the impact of new app-based services like Uber on older economy sectors like taxis, we left a good half of the time available for discussion, hoping that enough people would come along and be prepared to have a conversation. And fortunately they did.
Greg starting his presentation
The panel ready to take questions
UCLA's John Asker in the discussion, with Harvard's Ariel Pakes and Duke's Leslie Marx (co-author of the excellent The Economics of Collusion) in front
Broadly we came down on the side of more friend than foe: we (and the attendees) could see a lot of potential for society from better and greater use of the flood of modern data, ranging from better services for consumers, and epidemiological and other payoffs from combining diverse datasets, through to more accurate market definitions and clearer observation of market behaviour for regulators. But - and this was a recurrent theme - it wasn't obvious that regulators could tap into the best econometric experts at will: it's hard, especially in the US, to prise them off the beaten academic path. And the difficulties of cleaning, interpreting and manipulating databases that run to terabytes are easily underestimated.
We weren't complete Pollyannas - we could see the potential risks in collusion between pricing algorithms, for example; we recognised that databases can create market power; and we had varying degrees of conviction about whether traditional enforcement analysis and legislation are up with the New Economy play - but broadly we were technology and data optimists.
For those interested in the topic, we compiled a short reading list: a good place to start is the Competition Bureau of Canada's discussion paper (which Reuben had tracked down). And here's a copy of Greg's slides.
Well done to the local conference organising committee - Simona Fabrizi, Tim Hazledine, Steffen Lippert and Erwann Sbaï.