By Terence Kealey. Translated by Davi Lyra Leite.
For libertarians, economic growth is the growth that dare not speak its name. That is because conventional opinion holds that economic growth is a gift of government. Secondary matters such as the efficient distribution of goods and services can, we are told, be left to the market, but when it comes to creating those things in the first place (especially the new goods and services that constitute economic growth) then, sorry, dear libertarian, only government can supply them: we are rich today only because a nice president and his equally nice predecessors in the Palácio do Planalto and in Congress were gracious and far-sighted enough to ensure that this wealth reached us.
The conventional story is thus a grand narrative of the government’s generosity and wisdom, and it is one of those things that big companies, universities, and economists promote assiduously. There is, however, one small problem with it: it is entirely wrong.
The story of the most enduring intellectual error in economic thought begins in 1605, when a corrupt English lawyer and politician named Francis Bacon published his Advancement of Learning. Bacon, a man with a preternatural interest in wealth and power, wanted to understand how Spain had become the richest and most powerful nation of his day. He concluded that the Iberian kingdom had achieved this by exploiting its colonies in the Americas. And how had Spain discovered those colonies? Through scientific research: “the West Indies would never have been discovered if the compass had not first been discovered.”
Scientific research, Bacon explained, was “the true ornament of mankind,” because “the benefits of inventors’ discoveries extend to the whole human race.” There was, however, a problem, he wrote: all of humanity can benefit from inventions, but not everyone reimburses the inventors, so invention will not be rewarded by the market. Research is therefore a public good that should be supplied by government: “there is no part of good government more worthy than a greater endowment of sound and fruitful knowledge.”
Bacon’s argument has been reinforced in our own day by three landmark papers by three renowned economists, two of whom (Robert Solow and Kenneth Arrow) were awarded Nobel prizes, while the third, Richard Nelson, is acknowledged to be of similar stature. And we should pay attention to what economists, rather than scientists, write, because economists, being apparently systematic, influence government policy more than scientists do: it is easy to discount a scientist who pleads his own cause or tells self-serving stories, but who could doubt the objective, disinterested study of a renowned economist?
The contemporary story begins with a 1957 paper by Solow, which presented an empirical analysis confirming that most economic growth in the modern world can indeed be attributed to technological change (rather than to the greater use of capital). But the story took its dirigiste turn with the papers that Nelson and Arrow published in 1959 and 1962 respectively, in which they explained that science is a public good because copying is easier and cheaper than original research: it is easier and cheaper to copy and/or build upon original research reported in journals and patent applications (or even embodied in new products) than it is to make new discoveries. No private entity, therefore, will invest in innovation, since the original investment will serve only to help its competitors, who, having been spared the costs of the initial work, will undercut the original researcher.
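To make Solow’s finding concrete, that kind of empirical analysis rests on the standard growth-accounting identity, sketched here in ordinary textbook notation (the symbols are illustrative, not Solow’s own):

\[
\frac{\Delta Y}{Y} \;\approx\; \alpha\,\frac{\Delta K}{K} \;+\; (1-\alpha)\,\frac{\Delta L}{L} \;+\; \frac{\Delta A}{A}
\]

where Y is output, K is capital, L is labour, and alpha is capital’s share of income. The final term, the “residual,” is whatever growth the measured inputs cannot explain; it was this residual, which Solow read as technological change, that turned out to account for most of the growth in output per worker.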
The problem with Nelson’s and Arrow’s papers, however, was that they were purely theoretical, and an attentive observer, looking at data from outside the economists’ “nests,” discovered that in the real world privately funded research did exist (a great deal of it, in fact), something that has been forcing modifications of the conventional story. That shift was led by three other notable economists: Paul Romer (who focused on industrial research) and Partha Dasgupta and Paul David (who focused on academic research).
In a 1990 paper, Paul Romer acknowledged that, in a real world of (i) patent-protected monopolies and (ii) trade secrets enforced by commercial discipline, commercial researchers can in fact recover part of the costs of their original research. He therefore built a mathematical model under which original research could be rewarded by the market. Even so, he still assumed that very little of the science done in industry would be rewarded: “research has positive external effects. It raises the productivity of everyone engaged in scientific inquiry, but those benefits are non-excludable and are not reflected in the market price.”
Dasgupta and David, in their 1994 paper, reviewed the historical development of our universities, learned societies, and scientific conferences, and acknowledged that these social constructs do indeed promote pure science; but because advances in basic science were too unpredictable for their discoverers to profit from them in the market, that kind of science “shows a constant need to rest on public patronage.”
We thus arrive at the present dogma: scientific research is fundamentally a public good, because new ideas, unlike private products, cannot be monopolized for long. In practice, however, we manage to treat research as a merit good (that is, a good that requires only part of its funding from government), since conventions such as patents and trade secrecy, not to mention institutions such as universities and scientific societies, have evolved to sustain a certain, if inadequate, degree of private funding.
But the difficulty with this new story of science as a merit good lies in the absence of empirical evidence that research needs public funding at all.
The fundamental problem that bedevils the study of the economics of science is that every actor in the contemporary story is already biased: they enter the field having assumed from the start that government must fund science. These actors are either industrialists looking for subsidies, or academics seeking to protect their university salaries, or scientists (who, frankly, will chase money from any and every source without embarrassment), or economists who assume that knowledge is non-rivalrous and only partially excludable (an elegant way of saying that copies are quick and easy to obtain).
Yet no contemporary author has ever shown empirically that governments need to fund science; the claim is made on purely theoretical grounds. Remarkably, the one economist who did look at the question empirically showed that governments do not need to fund scientific research, but his findings have long been ignored, since he is notoriously a libertarian, and libertarians have no traction among the academics, politicians, and subsidy-seekers who dominate the field. In 1776, moreover, this economist supported a revolution, so he would be not merely out of date but a subverter of the social order.
Nonetheless, if only out of antiquarian interest, let’s look at what this empiricist reported. The evidence showed, he wrote, that there were three significant sources of new industrial technology. The most important was the factory itself: “A great part of the machines made use of in manufactures … were originally the inventions of common workmen.” The second source of new industrial technology was the factories that made the machines that other factories used: “Many improvements have been made by the ingenuity of the makers of the machines.” The least important source of industrial innovation was academia: “some improvements in machinery have been made by those called philosophers [aka academics.]” But our economist noted that that flow of knowledge from academia into industry was dwarfed by the size of the opposite flow of knowledge: “The improvements which, in modern times, have been made in several different parts of philosophy, have not, the greater part of them, been made in universities [ie, they were made in industry.]” Our empiricist concluded, therefore, that governments need not fund science: the market and civil society would provide.
Arguments for the subsidy of so-called public goods, moreover, were dismissed by our libertarian economist with: “I have never known much good done by those who have affected to trade for the public good.” In particular, arguments by industrialists for subsidies were dismissed with: “people of the same trade seldom meet together, even for merriment and diversions, but the conversation ends in a conspiracy against the public.” And our revolutionary underminer of the social order dismissed the idea that wise investment decisions could be entrusted to politicians, even to that nice Mr Obama, because he distrusted: “that insidious and crafty animal, vulgarly called a statesman or politician.”
Our long-dead economist recognized the existence of public goods, which he described as those “of such a nature, that the profit could never repay the expense to any individual or small number of individuals”, but he could not see that scientific research fell into that category.
The economist in question was, of course, Adam Smith, whose Wealth of Nations, from which these quotes were drawn, was published in 1776. And he is indeed long-dead. Yet the contemporary empirical evidence supports his contention that governments need not support scientific research. Consider, for example, the lack of historical evidence that government investment in research contributes to economic growth.
The world’s leading nation during the 19th century was the UK, which pioneered the Industrial Revolution. In that era the UK produced scientific as well as technological giants, ranging from Faraday to Kelvin to Darwin—yet it was an era of laissez faire, during which the British government’s systematic support for science was trivial.
The world’s leading nation during the 20th century was the United States, and it too was laissez faire, particularly in science. As late as 1940, fifty years after its GDP per capita had overtaken the UK’s, the U.S. total annual budget for research and development (R&D) was $346 million, of which no less than $265 million was privately funded (including $31 million for university or foundation science). Of the federal and state governments’ R&D budgets, moreover, over $29 million was for agriculture (to address—remember—the United States’ chronic problem of agricultural overproduction) and $26 million was for defence (which is of trivial economic benefit.) America, therefore, produced its industrial leadership, as well as its Edisons, Wrights, Bells, and Teslas, under research laissez faire.
Meanwhile the governments in France and Germany poured money into R&D, and though they produced good science, during the 19th century their economies failed even to converge on the UK’s, let alone overtake it as did the US’s. For the 19th and first half of the 20th centuries, the empirical evidence is clear: the industrial nations whose governments invested least in science did best economically—and they didn’t do so badly in science either.
What happened thereafter? War. It was the First World War that persuaded the UK government to fund science, and it was the Second World War that persuaded the U.S. government to follow suit. But it was the Cold War that sustained those governments’ commitment to funding science, and today those governments’ budgets for academic science dwarf those from the private sector; and the effect of this largesse on those nations’ long-term rates of economic growth has been … zero. The long-term rates of economic growth since 1830 for the UK or the United States show no deflections coinciding with the inauguration of significant government money for research (indeed, the rates show few if any deflections in the long term: the long-term rate of economic growth in the leading industrialized nations has been steady at approximately 2 per cent per year for nearly two centuries now, with short-term booms and busts cancelling each other out in the long term.)
The contemporary economic evidence, moreover, confirms that the government funding of R&D has no economic benefit. Thus in 2003 the OECD (Organisation for Economic Co-operation and Development—the industrialized nations’ economic research agency) published its Sources of Economic Growth in OECD Countries, which reviewed all the major measurable factors that might explain the different rates of growth of the 21 leading world economies between 1971 and 1998. And it found that whereas privately funded R&D stimulated economic growth, publicly funded R&D had no impact.
The authors of the report were disconcerted by their own findings. “The negative results for public R&D are surprising,” they wrote. They speculated that publicly funded R&D might crowd out privately funded R&D which, if true, suggests that publicly funded R&D might actually damage economic growth. Certainly both Walter Park of the American University and I had already reported that the OECD data showed that government funding for R&D does indeed crowd out private funding, to the detriment of economic growth. In Park’s words, “the direct effect of public research is weakly negative, as might be the case if public research spending has crowding-out effects which adversely affect private output growth.”
The OECD, Walter Park, and I have therefore—like Adam Smith—tested empirically the model of science as a public or merit good, and we have found it to be wrong: the public funding of research has no beneficial effects on the economy. And the fault in the model lies in one of its fundamental premises, namely that copying other people’s research is cheap and easy. It’s not. Consider industrial technology. When Edwin Mansfield of the University of Pennsylvania examined 48 products that, during the 1970s, had been copied by companies in the chemicals, drugs, electronics, and machinery industries in New England, he found that the costs of copying were on average 65 per cent of the costs of original invention. And the time taken to copy was, on average, 70 per cent of the time taken by the original invention.
Copying is lengthy and expensive because it involves the acquisition of tacit (as opposed to explicit) knowledge. Contrary to myth, people can’t simply read a paper or read a patent or strip down a new product and then copy the innovation. As scholars such as Michael Polanyi (see his classic 1958 book Personal Knowledge) and Harry Collins of the University of Cardiff (see his well-titled 2010 book Tacit and Explicit Knowledge) have shown, copying new science and technology is not a simple matter of following a blueprint: it requires the copier actually to reproduce the steps taken by the originator. Polanyi’s famous quote is “we can know more than we can tell” but it is often shortened to “we know more than we can tell” because that captures the kernel—in science and technology we always know more (tacitly) than we can tell (explicitly). So in 1971, when Harry Collins studied the spread of a technology called the TEA laser, he discovered that the only scientists who succeeded in copying it were those who had visited laboratories where TEA lasers were already up and running: “no-one to whom I have spoken has succeeded in building a TEA laser using written sources (including blueprints and written reports) as the sole source of information.”
One long-dead person who would have been unsurprised by this modern understanding of tacit knowledge was Adam Smith, who built his theory of economic growth upon it: he explained how the division of labor was central to economic growth because so much expertise is, in modern language, tacit. “This subdivision of employment” Smith wrote “improves dexterity and saves time.”
But if it costs specialists 65 per cent of the original costs to copy an innovation, think how much more it would cost non-specialists to copy it. If an average person were plucked off the street to copy a contemporary advance in molecular biology or software, they would need years of immersion before they could do so. And what would that immersion consist of? What, indeed, does the modern researcher do to keep up with the field? Research.
In a 1990 paper with the telling title of “Why Do Firms Do Basic Research With Their Own Money?” Nathan Rosenberg of Stanford University showed that the down payment that a potential copier has to make before he or she can even begin to copy an innovation is their own prior contribution to the field: only when your own research is credible can you understand the field. And what do credible researchers do? They publish papers and patents that others can read, and they produce goods that others can strip down. These constitute their down payment to the world of copyable science.
So the true costs of copying in a free market are 100 per cent—the 65 per cent costs of direct copying and the initial 35 per cent down payment you have to make to sustain the research capacities and output of the potential copiers. Copyists may not pay the 100 per cent to the person whose work they are copying, but because in toto the cost to them is nonetheless on average 100 per cent, the economists’ argument that copying is free or cheap is thus negated.
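To spell the arithmetic out, using only the figures already cited (C here simply stands for the cost of the original invention, and the 65/35 split is an average, not a law):

\[
\underbrace{0.65\,C}_{\text{direct copying}} \;+\; \underbrace{0.35\,C}_{\text{prior-research down payment}} \;=\; 1.00\,C
\]

which is why, in aggregate, the copier ends up paying roughly what the originator paid.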
That is why, as scholars from the University of Sussex have shown, some 7 per cent of all industrial R&D worldwide is spent on pure science. This is also why big companies achieve the publication rates of medium-sized universities. Equally, Edwin Mansfield and Zvi Griliches of Harvard have shown by comprehensive surveys that the more that companies invest in pure science, the greater are their profits. If a company fails to invest in pure research, then it will fail to invest in pure researchers—yet it is those researchers who are best qualified to survey the field and to import new knowledge into the company.
And it’s a myth that industrial research is secret. One of humanity’s great advances took place during the 17th century when scientists created societies like the Royal Society in London to promote the sharing of knowledge. Before then, scientists had published secretly (by notarising discoveries and then hiding them in a lawyer’s or college’s safe—to reveal them only to claim priority when a later researcher made the same discovery) or they published in code. So Robert Hooke (1635-1703) published his famous law of elasticity as ceiiinosssttuv, which transcribes as ut tensio sic vis (stress is proportional to strain).
Scientists did not initially want to publish fully (they especially wanted to keep their methods secret) but the private benefit of sharing their advances with fellow members of their research societies—the quid pro quo being that the other members were also publishing—so advantaged them over scientists who were not in the societies (who thus had no collective store of knowledge on which to draw) that their self-interest drove scientists to share their knowledge with other scientists who had made the same compact. Today those conventions are universal, but they are only conventions; they are not inherent in the activity of research per se.
Industrial scientists have long known that sharing knowledge is useful (why do you think competitor companies cluster?) though anti-trust law can force them to be discreet. So in 1985, reporting on a survey of 100 American companies, Edwin Mansfield found that “[i]nformation concerning the detailed nature and operation of a new product or process generally leaks out within a year.” Actually, it’s not so much leaked as traded: in a survey of eleven American steel companies, Eric von Hippel of MIT’s Sloan School of Management found that ten of them regularly swapped proprietary information with rivals. In an international survey of 102 firms, Thomas Allen (also of Sloan) found that no fewer than 23 per cent of their important innovations came from swapping information with rivals. Industrial science is in practice a collective process of shared knowledge.
And Adam Smith’s contention that academic science is only a trivial contributor to new technology has moreover been confirmed. In two papers published in 1991 and 1998, Mansfield showed that the overwhelming source of new technologies was companies’ own R&D, and that academic research accounted for only 5 per cent of companies’ new sales and only 2 per cent of the savings that could be attributed to new processes. Meanwhile, contemporary studies confirm that there is a vast flow of knowledge from industry into academic science: indeed, if it was ever real, the distinction between pure and applied science is now largely defunct, and Bacon’s so-called “linear model” (by which industrial science feeds off university science) has been discredited by the economists and historians of science.
Something else that would have surprised Smith about current scholarship is the economists’ obsession with monopoly. The economists say that unless an innovator can claim, in perpetuity, 100 per cent of the commercial return on her innovation, she will underinvest in it. But that claim is a perversion born of the modern mathematical modelling of so-called “perfect” markets, which are theoretical fictions that bear no relation to economic reality. In reality, entrepreneurs make their investments in the light of the competition, and their goal is a current edge over their rivals, not some abstract dream of immortal monopoly in fictitious “perfect” markets.
The strongest argument for the government funding of science today is anecdotal: would we have the internet, say, or the Higgs Boson, but for government funding? Yet anecdotage ignores crowding out. We wouldn’t have had the generation of electricity but for the private funding of Michael Faraday, and if government funding crowds out the private philanthropic funding of science (and it does, because the funding of pure science is determined primarily by GDP per capita, regardless of government largesse) then the advances we have lost thanks to government funding need a scribe—an omniscient one, because we can’t know what those lost advances were—to write them on the deficit side of the balance sheet. Which is also where the opportunity costs should be written: even if the government funding of science yields some benefit, if the benefit to society of having left that money in the pockets of the taxpayer would have been greater, then the net balance to society is negative.
What would the world look like had governments not funded science? It would look like the UK or the United States did when those countries were the unchallenged superpowers of their day. Most research would be concentrated in industry (from which a steady stream of advances in pure science would emerge) but there would also be an armamentarium of private philanthropic funders of university and of foundation science by which non-market, pure research (including on orphan diseases) would be funded.
And such laissez faire science would be more useful than today’s. Consider the very discipline of the economics of science. The important factor common to Solow, Nelson, and Arrow—the fathers of the modern economics of science—is that all three were associated with the RAND Corporation, which was the crucible of Eisenhower’s military-industrial complex. RAND (i.e., the R&D Corporation) was created in 1946 by the US Air Force in association with the Douglas Aircraft Company as a think tank to justify the government funding of defence and strategic research.
RAND’s initial impetus came from the 1945 book Science, The Endless Frontier, which was written by Vannevar Bush, the director of the federal government’s Office of Scientific Research and Development. Bush argued the Baconian view that the federal government (which had poured funds into R&D during the war) should continue to do so in peace. Bush of course had very strong personal reasons for so arguing, but it was the Cold War and the upcoming space race (Sputnik was launched in 1957) that—incredibly—persuaded economists that the USSR’s publicly funded industrial base would overtake the United States’ unless the United States foreswore its attachment to free markets in research.
It was all nonsense of course—Sputnik was based on the research of Robert ‘Moonie’ Goddard of Clark College, Massachusetts, which was supported by the Guggenheims—but when RAND sponsors military-industrial complex nonsense, such nonsense has legs. That is why a potentially useful discipline such as a credible economics of science (one based on the study of optimal returns to entrepreneurs in a real, competitive market) has been forsaken for one based on public subsidies under fictitious ‘perfect’ markets.
Cui bono? Who benefits from this fictitious economics of science? It’s the economists, universities, and defence contractors who benefit, at the taxpayers’ expense.
The power of bad ideas is extraordinary. John Maynard Keynes once wrote that practical men are usually the slaves of some defunct economist, and economists do not come much more defunct than Francis Bacon. His most recent defuncting came in Sir Peter Russell’s 2000 biography of Henry the Navigator, in which Russell showed that the Iberian peninsula at the time of the great voyages of discovery was not a centre of research, only of propaganda claiming falsely to be a centre of research, which had fooled Bacon. Yet however many times Bacon is defuncted, some powerful group emerges to resurrect his false—if superficially attractive—idea that science is a public good. Unfortunately too many people have an interest in so representing science.
I hope that this little essay will be one more stake in that idea’s heart, yet I fear that this particular vampire will continue to pursue anything that smells of money for another four centuries to come.