Kersti Kaljulaid, President of the Republic of Estonia from 2016 to 2021
This opinion piece is based on a presentation given at the University of Tartu’s development conference “A university for us, Estonia and the world”.
In 2000, the US federal budget was about 78 times larger than the annual turnover of Microsoft, the largest listed company at the time. Moreover, the budget was in surplus, meaning the public sector had the capacity to do even more if it wished, whether in research, social, or educational policy.
By 2024, the picture of US fiscal capacity was far bleaker: despite a very deep deficit (about 6% of GDP), the federal budget was only 16 times larger than the turnover of the biggest company, Apple.
In Germany, the ratio between the federal budget and Deutsche Telekom’s turnover is less than four, and in France, compared to TotalEnergies, it is just over two.
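To make the arithmetic behind these multiples concrete – using approximate public figures, which are illustrative rather than the exact numbers behind the ratios above – US federal outlays in fiscal year 2000 were roughly $1.8 trillion against Microsoft’s roughly $23 billion in annual revenue, while fiscal year 2024 outlays of roughly $6.8 trillion stood against Apple’s roughly $391 billion:

$$\frac{1800\ \text{bn}}{23\ \text{bn}} \approx 78, \qquad \frac{6800\ \text{bn}}{391\ \text{bn}} \approx 17.$$

The exact multiple depends on which budget measure is compared, but the direction of the shift is the same.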
I am not sure whether we should lament the state being “thin”, but in the context of research funding this is clearly a problem. Over a quarter of a century, states’ financial capacity relative to corporations has declined sharply, and this affects both publicly funded research institutions and governments’ ability to commission and finance research-intensive technology projects.
General-purpose technologies, such as large language models, are now created by companies themselves. Specific technologies, too – longer AC cables, for example, or quantum-resistant algorithms – emerge without state support or commissions. The state’s role has largely been reduced to regulation and recognition.
When states still had strong financial muscle, things were different. The development of the internet and of nuclear weapons, for example, was controlled and funded by the state. From that era we retain a kind of muscle memory: the belief that major technological breakthroughs cannot occur without a state commission – or at least without the state’s knowledge.
The largest widely used breakthrough technologies before artificial intelligence were the internet and, earlier still, electricity. Volta, Faraday and Galvani all worked at universities or state-supported institutions; public and royal funding and academic patronage made their studies possible. Fundamental research without commercial output emerged this way and was freely available for everyone to use. It was a noble arrangement – what was created with public funds was accessible to all, to be implemented boldly and turned to profit.
But even the backers of the Edison Electric Light Company were private investors, such as J.P. Morgan. The same applies to George Westinghouse, while only Nikola Tesla worked in both the public and private sectors. Broader implementation, then, was left to private investors.
What followed, as usual, was an expansion phase: for electricity to reach everyone and every sector of the economy, the privately financed network developers eventually had to be nationalised. Public-private partnerships (PPPs) had probably not yet been invented, and there was no better idea for turning the network into a unified infrastructure. In 1946, for example, Électricité de France was established, with the state taking over both electricity production and distribution. New York’s metro lines likewise had to be bought from private investors to integrate them into a common network; business interests did not support such integration.
For the first decades, the development of the internet was handled exclusively by public institutions – DARPA together with Stanford, UCLA, and other universities.
The year I was born saw the first successful internet connection, between UCLA and Stanford. Everything up to email was invented with public money, even where the work was carried out by private companies. It was only in 1995 that the US wound down the National Science Foundation Network (NSFNET) and allowed private investors to start developing the network. Even then, companies like Cisco, AOL, and Microsoft had to comply with the government standards already established by that point.
When we come to large language models, the research behind them – dating back to the 1950s – was also carried out with public funding. Even the famous 2017 transformer paper, “Attention Is All You Need”, built on publicly funded research – though by that point it was written at Google.
And then things took off. At a much earlier stage than with electricity or the internet, the entire development moved into the private sector, where it continues to consume billions to this day. Governments would never even consider nationalising infrastructure to ensure more democratic access or faster universal coverage, as they did with electricity grids in the 1940s.
True, academic data repositories and other training datasets are often still created with public money. But because of the outdated notion that public funding should produce public (free) benefits, academia has virtually no way of earning anything back from the companies using the assets it has created.
The same is happening in another rapidly developing field: the military industry. While large language models emerged from natural academic progress, military technology evolves best in far more barbaric conditions. As the figures cited at the beginning show, in this century states have less and less money compared to the major market players. It is therefore logical that governments try to realise their ambitions using other people’s money, offering guarantees in the form of long-term contracts or simply financial assurances. The only role public money plays in weapons development today is that of the purchaser. Rebuilding the military industry is the private sector’s role – and rightly so, because market power remains the best filter for determining what survives and advances.
In 2023, the US Department of Defense (now the Department of War) spent $140 billion on research and development – more than any private company in the world. Yet over the past 10–15 years, we have seen SpaceX, Palantir, Anduril, Microsoft, Amazon AWS Defense and others develop technology in their independent laboratories at an ever-faster pace, leaving the state to buy and adapt what private investment has managed to invent.
Compare this with the era when companies had to be nationalised for the electrical grid to spread, or with how long the government controlled the development of the internet before the private sector was even allowed to start building commercial networks.
Today, companies contribute at a much earlier stage, on a larger scale, and in ways that aim to anticipate future political decisions – even investing against them. Both Google and Microsoft, for example, have made significant commitments through forward deals in nuclear, fusion, and green energy production, purchasing output or production capacity in advance even though the necessary technology is still under development. The fusion energy project Helion is already funded, partly thanks to Microsoft’s advance purchases: under the offtake agreement, production should begin in 2028, yet the agreement was signed back in 2023!
Software, AI, autonomous systems and the like no longer originate in the public sector, even if they once emerged through public procurement or guarantees. If governments want to remain involved, they have to pay to access these technologies.
How should this slow but crucial shift influence the way universities see their role in knowledge transfer up to the middle of this century?
Publicly funded research is largely open access. The cash flow built on it, however, goes to companies that can successfully apply that research in their operations. Developers also bear sole responsibility for ever-larger parts of projects that governments commission, or for which they hope to find a market only years later.
Partly thanks to public funding (open-access research articles, long-term government contracts), they acquire mono- or oligopolistic knowledge and attract substantial private investment. Ultimately, this creates a situation in which both the financing and, later, the revenue streams of certain technologies – especially those we will need in the foreseeable future and that will become profitable over a 20–25-year horizon – belong to companies.
SpaceX has received billions in NASA contracts, and with that funding it has reached a point where over 60% of the global space transport market is in its hands. Neither NASA nor any research institution earns a percentage of that; the company remains privately held and controlled by Elon Musk.
ArianeGroup, the closest equivalent in Europe, is funded by ESA and its member states, with ownership split 50/50 between Airbus and Safran. Governments have retained far greater control over its operations; Ariane’s price list is two to three times higher than Falcon’s, and most of the market has been lost since 2020. The goal was to achieve independence from the USA, but the high price has prevented that.
China’s CASC, which is both state-owned and state-funded, performs even worse than Ariane in terms of efficiency. In short, there is no way around it – where the state is involved, things take more time and money.
The same can be said about defence-related AI: governments (ministries of defence, NATO DIANA, the EU’s EDF, etc.) provide the funding; companies – from Palantir and Anduril to European firms like Helsing – offer the solutions.
In the drone sector, it is even clearer who can quickly channel large sums into immediate needs: there is no time at all, yet demand is enormous – not only for quantity but for innovation, because you have to be better than the enemy every single day. As a result, even China’s DJI – and certainly Baykar, Skydio, Anduril, and France’s Parrot and Delair, among others – have taken over, and all development-related risk lies with the private sector. Governments and militaries are simply learning to use technologies they had not even considered ordering.
And that is the biggest difference – in the past, governments knew what they wanted, largely controlled the development process, commissioned components and parts (sometimes from private companies), acquired the results, and later either released them for private-sector use – as with the internet – or kept them restricted, as with nuclear weapons.
Now I ask: should governments – and consequently universities in Europe, which are mostly state- or EU-funded institutions – not take these shifts into account? Should we not completely rethink how technology transfer from the public to the private sector works? And does it even always happen in that direction anymore?
Let us be honest: over the last decade, private companies have clearly been stronger providers of capital than governments. And as we know, sooner or later quantity turns into quality. The USA’s limitless capital markets make it possible to raise funds for developments closer to fundamental research than ever before. The market is now ready to bear the risks of technologies at a stage where no clear path to practical application is yet in sight!
Overstretched social systems and rapidly expanding defence costs ensure that governments’ capacity, compared to that of companies, will not reverse this trend anytime soon. Consequently, perhaps we need to adapt to the idea that even deep research will emerge within companies? They will, of course, publish open articles for others to study and replicate – but only after the intellectual property with business value has been well protected. Incidentally, their social-science units also help analyse how new technologies should be regulated, both nationally and globally.
I have been part of working groups in which academia discussed control over AI and its international framework, and I have heard private companies talk about the same topic. Social scientists from universities are in a very weak position compared to the analysts companies employ. To be honest, I left – I did not want to waste time on activities that, in my view, had no real prospects. But I did learn something: researchers were, to the best of their knowledge, painting shadows on the wall – the risks of AI’s spread and development – and then fighting those shadows. The risks do exist, of course, but Sam Altman knows them better than academia does!
The EU and the USA have both acknowledged that the traditional regulatory cycle is far slower than today’s innovation cycle and have essentially shifted from standardisation to advisory approaches. The AI Act classifies future inventions by risk level – high-risk or limited-risk – and requires developers to cooperate with governments through AI institutes. NATO, via DIANA (while the USA remains committed to its DARPA model), also provides funding at earlier development stages, hoping at least to stay involved as a co-developer and to understand how best to bring the private sector’s smart discoveries into national armed forces.
Things work completely differently than a quarter of a century ago! Back then, researchers were encouraged to find practical applications for their work as quickly as possible. Now, governments, the EU, NATO and others – often with the active help of researchers and universities – are struggling to keep pace with developments, to understand and control them, so that we do not accidentally wipe out humanity or society in the rush of private-sector innovation!
I admit I have no idea how knowledge transfer should work in this new world. But I can see that today it runs on what is at least a two-way road, with public contributions shrinking and private contributions growing very rapidly. The reason is simple: states matter less in the economy than they did in, say, the 1980s – the sheer volume of capital the private sector can raise has grown enormously and, of course, seeks returns in technologies at ever earlier, riskier stages of development.
We simply have to notice and adapt if we want to understand what the future role of universities will be in this new world.
Could it perhaps be about helping governments to have sufficient understanding of new technologies – for better regulation and security?
Or maybe the answer lies in universities helping a broad circle of citizens to gain the knowledge and understanding of technologies needed in working life? Supporting the rapid spread of technologies to create social benefit?
Perhaps the answer is that these technologies will transform our societies profoundly, and the question of how ordinary people can distinguish what is research-based and what is not will become increasingly urgent.
Or is the key challenge for researchers the need to make a rapid leap forward in social sciences? Technologies are breaking down the existing structures of social functioning: reliance on authority and hierarchies, edited and fact-checked information flows, the state’s ability to regulate citizens’ activities so that society as a whole benefits. How can social functioning be ensured now, when technological progress has disrupted the old cohesive structures?
I am not saying that universities will have a smaller role in knowledge transfer than before. On the contrary, I believe it will be even more critical. The difference is that the flow is no longer one-way: research – practical application – spin-off. It is two-way – research from companies also reaches universities, which can then study it from a safety perspective, recommend risk mitigation measures, and teach people how to achieve broader economic benefit thanks to these inventions.
Or perhaps it is a circular flow: open-access fundamental knowledge created by universities is picked up by the private sector, developed into products far more advanced than in previous decades, and then brought back into academia so that it can become something like GPT – ready to be applied in materials science, medicine, and cultural management? Maybe the role of universities is precisely this: to ensure faster and wider dissemination of technologies into sectors where they were not originally created.
There are probably multiple answers, and all these hypotheses are partly wrong and partly right.
But even if we understand what the university’s role in this new world is, we are still not much closer to answering the question of how nations, countries and universities should find the money for their tasks. Where should we place the mechanism that channels part of the benefit created back into the financial base that research needs?
My suggestion is to start with changing the attitude towards research as a freely usable public good – also within universities. Since you cannot catch a bird that has flown away, universities should be much more selfish in protecting their intellectual property and ensuring they earn revenue from it.
As for governments: even if they provide companies with partial guarantees through long-term contracts, they should also receive equity stakes in those companies. SpaceX should not belong to Elon Musk alone – the US government’s share should be substantial! When the state supports such companies with orders and guarantees, it is in reality no different from an early-stage venture investor, and that activity is commendable in itself. But it should demand a stake proportionate to its support, a percentage of revenue from patents put to work, and everything else any normal venture investor would do.
This way, despite shrinking resources, governments can still play a smart role in the advancement of research and technology, maintain at least an understanding of what companies have invented, and avoid having to buy back at full price what they helped create.