CSET: Breaking the Think Tank Mold
Good things happen when smart people get tens of millions to answer tech + national security questions
The inaugural ChinaTalk Think Tank of the Year award goes to the Center for Security and Emerging Technology (CSET) at Georgetown University!
CSET research has transformed policy debates around high-skilled immigration, semiconductor export controls, military-civil fusion, and the role of government in managing AI’s impact (see here for their reading guide). Their reports don’t just advocate policy but add facts to debates around China and technology. They also provide ‘reasoning transparency’ rarely seen in policy documents, expressing different degrees of confidence in claims and recommendations, while most of their peers skate by on generalities and often-unsubstantiated assertions. Also, CSET is the closest thing America has to an open-source China research headquarters, having translated over 100 Chinese policy documents.
How did they pull this off? Well, $10m+ per year in hands-off grant money from Open Philanthropy didn’t hurt. As someone who writes reports on budgets for a living, it’s clear to me that some of these CSET reports creep way beyond the $100-200k range you commonly see from policy shops and into the $750k+ stratosphere. CSET has the resources to buy commercial datasets, do bibliometric analysis, and run surveys far out of the price range of academics, government officials, and other think tankers. It boasts a nine-person-strong data team that supports its 30+ deep analyst bench.
But it also took leadership to put that money to work. Earlier this month I sat down with current director Dewey Murdick to discuss what makes CSET tick. The following is an abridged transcript of a podcast you can listen to in full on the ChinaTalk feed.
(Not) learning from the data
Jordan Schneider: So who is worse at using data to answer questions, the FDA or the intelligence services?
Dewey Murdick: Oh boy, that's a great question. I think a lot of organizations want to use data but they don't think they have access to it.
They haven’t invested over multiple years to collect that data in a form that can actually be used to answer a question.
Think about it: if you’re in front of Congress and you’re trying to testify about why you made a mistake that was so obviously a mistake but you didn’t know at the time, you don’t want to say you were winging it. Or say it’s because you didn’t have any data. No one wants to say that.
They want to say they looked at the historical data, looked at what was happening, looked at the trends. They want to say this is why we decided what we did and it turns out we were wrong. That is a much more satisfying answer.
But I think the problem is the policy space in DC is not used to making those kinds of investments in technical folks who can accumulate the data and prepare it so that questions can actually be answered effectively.
Think tanks and government agencies sometimes just don't have the opportunity to wisely learn the lessons from before because they haven't invested in the infrastructure to do so.
Frenemies at the Pentagon
Jordan Schneider: To what extent do you see receptivity to the sort of data-driven, grounded-in-the-science type arguments that CSET has been putting out? And how, if at all, has that changed over the past few years that you guys have been working?
Dewey Murdick: When I first started doing this type of data analysis in late 2004, this was a very novel approach. People asked how in the world could it even help them to answer a question.
They would say things like following the data is useful, but what you really need to do is follow where there is no data. And I would look at them and say, “But there’s no data there.”
When a program manager in the Pentagon is getting threat briefs, they're your best friend before the program starts. They want to hear all about this stuff because they're trying to formulate the project.
But once they are actually executing that project, then you're their enemy in a certain sense. I'm speaking somewhat in caricature here, somewhat cartoonishly.
When you're presenting data that says the module they're just about ready to finish is not going to work against any of the threats that we're aware of, then basically what you're telling them is they're not going to meet their deadlines.
They're not going to hit their target milestones. They're going to be delayed and they're going to be over budget and they're not going to be perceived as awesome. They're not going to get that promotion that they wanted to get.
You unfortunately become adversarial in those situations. The good program managers do the right thing, but sometimes people really want career advancement and it's hard.
But if you can say here's the whole trend, and here's what an adversarial country has been working on, this is the capability, this is how much research they've invested, how much money, how many scientists are working on this problem, and then you chart the time forward... This is now looking more applied.
That kind of perspective is really powerful. And when you have never seen that kind of analysis before it's super helpful.
The firewall between projects and funders
Jordan Schneider: With a budget minuscule compared to the likes of Brookings ($85m in 2020 revenue) and CSIS ($40m), CSET has brought real rigor to debates ranging from US chip policy to high-skilled immigration. How are you funded?
Dewey Murdick: We don't take money from governments. We don't take money from corporations. We're philanthropically funded precisely so that we can manage that conflict of interest so that we can answer questions.
Honestly, we can speak truth to power, and we can do it in a way that allows people to see the bigger picture rather than just answering a particular question that they've paid us to answer.
Jordan Schneider: You still need to have money coming from somewhere though. The vast majority of it is from Open Philanthropy, which is a major funder among the effective altruism community. Long-termism is baked into their viewpoint.
What do you think is driving them and how does that sort of ideology interact with all of these long-time, relatively non-partisan civil servants who you also have on staff?
Dewey Murdick: Open Philanthropy has been an incredibly good funder of CSET. What they've done is they've basically recognized the value of having a monitoring capability that tracks what's happening around the world, that identifies data-informed trends, and that can see where AI and machine learning are potentially going off the tracks.
That’s in addition to identifying where there's a risk of not getting the talent that's necessary to work on these particular problems.
Those shared interests intersect extremely well with CSET’s stakeholder base because we're here to help answer those questions. We're not trying to build any fancy technology or new AI systems. There are already wonderful groups doing that.
But I think Open Philanthropy's vision, and ours, is basically to create a nonpartisan way that people can trust is not coming with a bent.
It's not about trying to persuade people to take an advocacy stance, but to be very fair and objective, and to allow the US government and democratic countries that have similar values to see what's actually happening. That is clearly in keeping with their goals.
Jordan Schneider: It's interesting because on the one hand, it’s future-based - we're doing this monitoring, we're being non-partisan - but there's a reason we're focusing on AI.
I've heard [Open Philanthropy Co-CEO Holden] Karnofsky talk about AI and how if we screw it up we're all screwed. How does that line of thinking filter down?
Dewey Murdick: I think if we do screw up AI, we have existential risks to humanity in the long term. We also have the great opportunity to screw it up right now.
There’s value in being able to provide extremely helpful analysis to bust myths that people might be reacting to but that aren't actually true.
There’s also value in being able to lay out clear paths forward that help busy policymakers understand what’s going to happen next. It might be shorter term, but it’s still a necessary insight that will help avoid those long-term risks.
Jordan Schneider: It seems like the whisper of the effective altruism community is that if China takes over the world it won’t be great for humanity.
It's interesting to see how that stuff both does and doesn't feed into a lot of things, even though those sorts of arguments are much more resonant in Washington than Peter Thiel talking about AI overlords.
Dewey Murdick: I agree. One of the core operating principles of CSET has been to have a firewall between funding priorities and engaging effectively with people making active decisions today, as well as training up a future workforce of policymakers.
That is very much under the directive of our funders, but what we've managed to agree to is that we have a firewall between that and the actual production we do. They don't have any editorial control. They don't review who we hire. We need to maintain that credibility.
We're not perceived as having people whispering in our ears directing us on what we do. It's actually really important that we maintain that firewall.
Jordan Schneider: The one thing I will say is, like, yeah, that sounds great, but when there’s only one person writing the check it’s a different dynamic. It’s interesting that the Brookings, CNAS, or CSIS risk isn’t correlated with one funder, but if you’re too far out of the mainstream, you still need a lot of people to write $100-500k checks. If you go too far off the reservation or do wacky stuff like your Map of Science, it’s difficult to get that funded.
Thinking back to the experiences you've had outside of the national security community, are there other policy areas that could really use a $50 million blank check and the sort of mindset that CSET has been able to take to these questions of tech and national security?
Dewey Murdick: One of the problems that CSET's been working on, once again from a national security angle, has been talent capability, and the training and development of people with skills that are necessary to implement AI systems safely and wisely.
In the process, we've run across a lot of very interesting issues with the educational system and processes that are implemented in the US. I realize there's a lot of people who are working in the educational space but there are a lot of potential policy actions that can be taken there.
The reason I bring it up is that we've bumped into it here within CSET. There are a lot of questions that are extremely important in other topic areas. Emerging technologies are obviously a broad area and AI is only one part of it.
Biotech actually was one that I felt strongly about. We now have seed funding in that space and we're hoping to actually respond to that need from a national security perspective.
Jordan Schneider: My pitch is geoengineering.
I feel like there's a lot of institutional biases against funding that sort of research, but it's really important and it really could be a silver bullet stopping us from going through a lot of pain in the next 15 years. That's the one that I've kept coming back to.
Dewey Murdick: You actually bring up a really good point. There's a lot of important work to be done in the policy space and the translation of technical issues to the broader public.
My pitch is in this area of childhood trauma. There's so much interesting work in this space that has not really been accepted and adopted. Child abuse and other traumatic events translate into adults who have a lot of particular issues, and all of us are touched by it. It's not just one part of the demographic, and it really shapes the fabric of our culture, how we deal with trauma.
The Map of Science
Jordan Schneider: Let's talk about some of these wacky projects that you've gotten up to. What is the Map of Science?
Dewey Murdick: The Map of Science is a really cool idea.
Basically we got as much of the world’s research output as we could get our hands on, about 240 million articles. Somewhere around 90% of it is in Chinese or English. We then clean the data up a bit and group these research papers into essentially research clusters that are problem-centric.
We had around 240,000 research clusters, ranging from a few papers to a lot of papers. We then pruned it down to the ones that were active and alive, which I think was around 126,000.
Now, with these 126,000 research clusters you can do interesting things. If you are interested in, say, research clusters that have a lot of industry engagement, ones that have military funding going on, or ones that have a lot of AI activity, you can start moving these sliders and take a contextual view of what's happening in research. You aren't just filtering by keyword, which is how people typically do this.
The things that are moving and advancing most quickly are the ones you need to pay attention to. So now you can filter these down and get a contextual view where you are able to see the research areas that are AI-related and growing the most quickly, or the ones that have military relevance because there are signals of military interest based on funding.
Jordan Schneider: What about the CSET structure allowed this to be the organization to make this? I remember reading an article about Google Scholar from a few years ago, and there’s literally one person at Google working on it now. But this is something Google could have done.
Dewey Murdick: Sure. First of all, every problem we pick on or every solution that we work toward building at CSET is motivated by a policy-relevant problem.
Where should we put investment? Who are the experts we should talk to? What's the emerging research work that is most likely to disrupt capability? Where is there national competition going on that is really intense? Where is the US falling behind?
These kinds of questions are the problems we're trying to solve. And because we're dealing with emerging technologies, the research literature is actually relevant. Now we're working to actually connect that research literature to future literature.
But the point is the reason that CSET was able to work on this problem was because we actually needed the answer to this problem.
Jordan Schneider: There are not a lot of ad dollars in Google Scholar.
Dewey Murdick: There's little motivation and there's also a market failure here. For your DOD strategic planner or your program manager who's trying to figure out what trends they should focus on, this kind of capability is really useful.
However, if you step back and ask how many people are willing to pay lots of money for this, it's not that big of a group. There's no real market to get an organization to put the millions of dollars into it to make it work.
The value of slow burn research
Jordan Schneider: Another investment that you guys have made is building up this center of excellence around the semiconductor ecosystem.
Some of that was just spending a lot of money on datasets that usually only the Intels of the world can afford, but it was also letting smart people spend a year reading about a topic without having to live by really tight publishing schedules or funders’ cycles, which normally wouldn't allow them to go beyond the four paragraphs of insight your average political science PhD can give you.
How did you end up doing that? How did you end up letting Saif M. Khan just go off the reservation for a while?
Dewey Murdick: He thought about it and looked at the problem and said if we're in a competition with China, and they've said publicly what their intentions are for their competing capability, let's see how well they're doing.
Where are the choke points? Where could they end up being slowed down? What could accelerate their speed? And he basically came to semiconductor manufacturing equipment and realized all of the capability was in democratic countries.
Semiconductors are really expensive. They're hard to do, the skills necessary to build them are extremely high and you need lots of tacit knowledge. If we were able to stop the proliferation of that technology, it would potentially slow down development in core capabilities which have security risks.
We gave him multiple years to work on this and he was able to come up with very clear insight.
It's really a credit to him that he discovered this space, but we gave him the opportunity to work on topics that generally make policymakers’ eyes roll back in their heads because they're like, “whoa, what are you talking about lithography for and why do I care?”
And because he's at CSET, we were able to connect it all the way to the policymaker.
Just a quick tangent on this: I've been in places that had gifted scientists able to do amazing things with data models and all these things. What they were missing was the focus on the contextualization of the policymakers.
What levers do they actually have? What are they actually able to make decisions about? What are they actually able to use as information?
Saif was a patent attorney before he came here. He was able to learn what they could actually use. And then he was able to focus that research and do it in what is normally a very technical and very wonky space and put it into context.
Jordan Schneider: Before we wrap up, are there any other sort of CSET special sauce points that we haven't quite explored yet?
Dewey Murdick: I don't think any organization would claim that they like to hire incompetent people, but we've really worked to hire people who are respectful of others, who respect their time, their perspectives and their diverse experiences.
That requirement makes all the difference for a work environment. It creates a culture that makes it safe to push, to try new things, to fail, to come back and say that didn't work, so let’s try this.
Saif moved into an area that was not well-trodden by think tanks because he knew he had support. He had capable people who respected his views and would give him the time to actually see if he could make this work. And he did.