PEGA Committee of Inquiry: Closing security vulnerabilities instead of selling them

This hearing of the committee had a technical focus. The MEPs wanted to understand how security vulnerabilities are exploited for the deployment of state trojans and how they are traded. We are publishing an unofficial verbatim transcript of the hearing.

Thorsten Schröder sits on the podium and speaks to the parliamentarians. In the background, an EU flag and the emblem of the European Parliament.
Thorsten Schröder of the CCC, who has been working on zero-day vulnerabilities since the 1990s, was invited as an expert. – All rights reserved European Parliament

On 24 November 2022, the spyware inquiry committee held a hearing on the topic “Trade in zero-day vulnerabilities”. The hearing examined the trade in security flaws that are still unknown to the manufacturers. This trade is closely intertwined with the state trojan industry.

The hearing focused in particular on the effects of this trade, which often takes place in secret, against untraceable payments, and with groups linked to cybercrime.

There is a video of the hearing, but no official transcript. We are therefore publishing an unofficial transcript.

  • Date: 2022-11-24
  • Institution: European Parliament
  • Committee: PEGA
  • Chair: Jeroen Lenaers
  • Experts: Thorsten Schröder (expert in information security and functioning of spyware, CCC, Chaos Computer Club), Dr. Max Smeets (Senior Researcher ETH Zurich Center for Security Studies), Ian Beer (White hat hacker, part of Google’s Project Zero team), Dr. Mailyn Fidler (University of Nebraska)
  • Links: Hearing, Video
  • Note: This transcript is automated and unofficial, it will contain errors.
  • Editor: Emilia Ferrarese

Trade in zero-day vulnerabilities

Jeroen Lenaers (Chair): Good morning, everybody. Yes, dear colleagues, welcome to our hearing today of the Pegasus Inquiry Committee. I would just like to announce, first of all, that we have translation in German, English, French, Italian, Dutch, Greek, Spanish, Hungarian, Polish, Slovakian, Slovenian, Bulgarian and Romanian. I’ve been looking forward to this hearing very much, because we have heard a lot about zero-day vulnerabilities in many of the hearings that we’ve had so far. But it also seems to be one of the more technical and less transparent parts of the information that we would like to gather as a committee. So the overall goal of this hearing is to explore the trade in vulnerabilities, as it often escapes the light of day. It is a big part of, and a foundation for, the spyware industry that we’re talking about. We’d really like to hear more about how this is organised, and also what we can do as regulators here at the European level. In the draft report of the rapporteur, there are some suggestions made, so also feel free to reflect on those.

Now, in terms of vulnerabilities: yesterday in this European Parliament, we experienced what it is to be vulnerable. I’m not sure if this has anything to do with zero-day vulnerabilities, but we had a huge DDoS attack on our own IT infrastructure. So in that sense, perfect timing for our meeting here this morning.

We will have four speakers today: Thorsten Schröder, who is an expert in information security and the functioning of spyware, from the Chaos Computer Club. We welcome Dr. Max Smeets, who is a senior researcher at the ETH Zurich Center for Security Studies. Ian Beer, white hat hacker, part of Google’s Project Zero team, and Dr. Mailyn Fidler from the University of Nebraska. I will pass each of you the floor for about 10 minutes to make a first contribution, and then we’ll collect questions from the members present and do a little Q&A session. So I would first give the floor to the gentleman on my left, Mr. Thorsten Schröder, for 10 minutes.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): I’m going to speak in German. My name is Thorsten Schröder. I’d like to thank you very much for inviting me to today’s hearing on this issue. This is an issue that I’ve been dealing with since the mid-1990s. For around 20 years I have been working in information security, and I have advised NGOs, companies and authorities. I am still a very curious researcher, in my free time and in my work: I am always looking for new vulnerabilities and new technologies, and I analyse them. I have seen many vulnerabilities over my career, and I have shared and discussed these vulnerabilities with manufacturers. But I have never been part of manufacturing spyware, and I think that is one of the reasons why I’m here today to talk about this issue, as somebody who knows about information security. I have been working with the Chaos Computer Club for many years to draw attention to deficiencies in IT security and to carry out analyses, including of one of the first digital state spyware tools, of which there is also a comprehensive analysis, and of the export of dual-use software. At the beginning of the noughties, a new market developed: the zero-day market. This market gave rise to problematic business models, particularly in relation to vulnerabilities in software and hardware. This involves researchers looking for so-far-unknown vulnerabilities. Some try to contact the manufacturer so that the errors can be corrected. Others compile this information over a period of time so that the vulnerabilities can be used against smartphones and other systems, sometimes remaining exploitable for years. The vulnerabilities are used by criminals to attack systems across the world, causing damage of several million euros worldwide.
It also leads to failures in critical infrastructure. This means that a large number of end devices are at risk of being attacked via the zero-day market. Brokers pay a great deal of money for these vulnerabilities, and they pay more if the seller can ensure that the vulnerability remains unpatched for a long time. It is not just governments: we can assume that the NSO Group does not sell exclusively to governments; there are also secret services. And for Android and Apple iOS, we have to assume that intelligence services from Russia and Iran find the same security holes and exploit them, or observe the exploitation of security holes and are able to reconstruct them. This means that information on vulnerabilities is accessible to many different actors, as long as they have the knowledge of how to find these vulnerabilities. This is a real threat to national security, and an example from the European Union shows this: Spanish authorities used zero-day vulnerabilities to hack activists from Catalonia, and the same vulnerabilities were used by the Moroccan government to spy on the Spanish Prime Minister and Defence Minister. The problem is that this information is often not forwarded straight away to the manufacturers. States, businesses and citizens are therefore exposed to the risk. Security researchers like researching everyday technologies in the digital sphere, so why would they not want to be paid for this research? But this is about earning money from researching vulnerabilities in secret, by people who are not necessarily thinking about the ethics of this. So we need to really look at the zero-day market. In order to minimise this risk, researchers need to be given incentives, and we will achieve this if we remedy the legal uncertainty for researchers. Researchers who find security holes are often put under pressure.
The legal situation needs to be on the side of the researchers, and the vulnerabilities need to be communicated to manufacturers. Researchers should not and must not be intimidated. Secondly, we also need a ban on the purchasing of information about security holes on the zero-day market. The flourishing of the black market for zero-day vulnerabilities needs to be stopped, and the information needs to be communicated to the manufacturers. Thirdly, authorities and organisations need to share information about vulnerabilities with the manufacturers. When a security gap is found, there needs to be a process which ensures that the vulnerability is communicated to the manufacturer. Fourthly, the European Union also needs a digital shield, under which security researchers receive research money. On the market, prices range between €25,000 and €2.5 million. The European Union needs to put in place a programme which helps researchers actually communicate these vulnerabilities. This is the only way to create incentives so that researchers don’t sell the information on the black market, where we are talking about much higher sums of money. The defence ministries of EU member states budget far higher expenditure for air-defence systems, where the cost of a single system runs into the millions of euros, similar to what is paid on black markets for zero-day vulnerabilities. EternalBlue from the NSA caused damage of around €1,000,000,000 worldwide. We need to invest in this digital shield in order to give researchers the incentives to communicate the vulnerabilities. This is how the problem can be dealt with and resolved, globally and swiftly. Now, to summarise: we need to put an end to the purchasing of zero-day vulnerabilities by EU member states. Member states need to protect citizens against vulnerabilities. This needs to be the focus.
We need proactive participation through the open communication of security gaps. The digital security of EU member states, their allies and their citizens is very important. Thank you.

Jeroen Lenaers (Chair): Thank you very much, Mr. Schröder. I’m sure there will be plenty of questions on this, but we’ll do first all the contributions. And we move to Doctor Smeets, please.

Dr. Max Smeets (Senior Researcher ETH Zurich Center for Security Studies): Thank you. And I’d like to thank the PEGA Committee for the invitation to provide a statement and answer questions about the trade in zero-day exploits in the context of the use of spyware by EU Member States. I would also like to commend the Committee for the excellent work it has done so far.

As Mr. Thorsten Schröder has already mentioned, over the past years numerous security exploits have been deployed by commercial surveillance companies in order to install spyware on target devices. These exploits often only contribute to specific parts of multi-stage operations, part of what we call a larger attack chain. Sometimes they are combined with known exploits, also referred to as n-days. The level of sophistication can be incredibly high. Some surveillance companies have managed to link up a series of exploits in such a way that they can conduct remote zero-click attacks. That is a method of installing spyware on a device that doesn’t require interaction from the user, such as clicking on a malicious link sent in a text message or email. So how do these surveillance companies get these exploits? Well, either they develop them internally, and certainly groups like the NSO Group will have a significant in-house team to do so, or they buy them, directly from the developer or indirectly through an exploit broker. The split in terms of supply we often do not know, and it also differs per company. But what is known is that they are frequent, keen and resourceful customers on this market, increasingly so. And the surveillance companies are indeed not the only customers on this market. States and criminal groups like to shop for zero-days, too. In 2013, the NSA already had a budget of more than 25 million to purchase zero-days, referred to in an internal budget document as covert purchases of software vulnerabilities. The Vault 7 leaks revealed that of the 14 exploits for Apple’s iOS owned by the CIA at the time, four were purchased. The market for zero-days is said to be flourishing, global and active. However, it is worth pointing out that it is often much more inefficient than people realise. The reason is that the zero-day market shares many characteristics of what George Akerlof would call a market for lemons.
So Akerlof won the Nobel Prize for his research showing how information asymmetries can lead to adverse selection in a market. When car buyers have imperfect information, not knowing as much about the car’s quirks and problems as the seller who has owned the car for a while, the sellers of low-quality cars (these are called lemons) can crowd out everyone else from their side of the market, stifling mutually advantageous transactions. And if the buyer is unable to tell the difference between a good car and a lemon, she is unwilling to pay top-tier prices, which means the price is bound to be lower for sellers. Now, the zero-day exploit market is also a market with extreme information asymmetry, and that is for three reasons. First, the seller has much more information about whether the exploit is actually working. Second, the market is flooded with low-quality exploits: many of the exploits offered are a lot less reliable than the sellers initially report. And third, the buyer of an exploit is not always able to test the exploit before purchasing it, as the economic value would be lost once it is given to the buyer for testing. So this structural set-up makes even mutually beneficial transactions difficult, but it also makes trust a crucial dimension of exploit sales and localises the market. It means that spyware companies will tend to buy from a select group of preferred sellers, and they also need to invest many resources in developing trusted channels that carry repeated transactions between a developer and a buyer. The other option is to work with a reliable broker or platform, as buying exploits through a broker reduces the number of parties a buyer has to engage with, allowing them to more easily vet the selling party and develop a long-term business relationship. One case that shows some of these market dynamics is that of a Milan-based company registered for a long time as Hacking Team.
Now, the Milan-based Hacking Team was founded in 2003 by two Italian entrepreneurs, with the aim of developing and selling intrusion and surveillance software. One of its earlier clients was the Milan police, but its clientele grew over the years, selling to police departments, intelligence agencies and military forces of various other governments, including those with poor human rights records. In 2015, over 400 gigabytes of data, including internal emails, invoices and source code, were leaked from the company, and the breach revealed that Hacking Team would seek zero-day exploits from outside developers to integrate into its surveillance tools. What becomes clear from the internal correspondence is the management’s frustration with the market. It was often unable to buy what it wanted, and when it did buy zero-day exploits, they were hit and miss. For example, Hacking Team did manage to build up a relatively productive, trust-based relationship with Vitaliy Toropov, a Russian freelance exploit developer, and Hacking Team exclusively purchased Adobe Flash Player exploits from Vitaliy. For these purchases, Hacking Team was given a three-day evaluation period to test the exploit and make sure it reliably worked against the advertised targets. Hacking Team asked Toropov to come to Milan to present the exploit for testing, but the Russian developer assumed good faith on their part and allowed for remote testing. But Hacking Team also had less positive experiences with sellers. For example, the company appears to have purchased a fake Microsoft Office exploit from Munish Kumar of Leo Impact Security, and with VUPEN, an exploit broker known as one of the bigger ones, Hacking Team was frustrated that it only received exploits that were old or worked only against very specific software configurations.
The story of Hacking Team illustrates that some surveillance companies are indeed regular and keen customers on the market for zero-day exploits, but also the failures they often face: adverse information asymmetries and, sometimes, scarcity of reliable supply.
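The lemons dynamic Smeets describes can be made concrete with a small worked example. All of the numbers below (exploit values, development cost, share of reliable sellers) are invented purely for illustration and do not come from the hearing.

```python
# Toy illustration of Akerlof's "market for lemons" applied to exploit sales.
# Every figure here is hypothetical.

# Suppose half the exploits on offer are reliable and half are duds.
# Sellers know which is which; buyers cannot test before paying.
value_reliable = 500_000   # what a working exploit is worth to a buyer (EUR)
value_dud      = 0         # a non-working exploit is worthless
cost_reliable  = 300_000   # development cost a serious seller must recoup
share_reliable = 0.5

# A buyer who cannot distinguish quality will only pay the expected value:
expected_offer = share_reliable * value_reliable + (1 - share_reliable) * value_dud
print(expected_offer)  # 250000.0

# The blind offer falls below what reliable sellers need, so they exit the
# open market, leaving mostly duds behind.
assert expected_offer < cost_reliable
```

This is exactly why, as described above, buyers fall back on trusted repeat relationships and vetted brokers: reputation substitutes for the testing that the market itself cannot provide.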

There are other complications associated with buying zero-day exploits, which in turn affect how they are used, and I wanted to mention one here. Buying zero-day exploits, rather than developing them internally, increases the chances of early discovery due to potential non-exclusive sales, which subsequently incentivises the early use of the exploits. The likelihood that two or more independent parties will discover a vulnerability is known as the vulnerability collision rate, and a RAND study has found that for a given stockpile of zero-days, after a year almost 6% have been publicly discovered and disclosed by another entity. Normally, this risk of discovery drives semi-calculated decisions about exploit use. A hacking group may, for example, decide to only go after the most valuable victims, or keep exploits for unusual circumstances, in order to avoid discovery. We know from leaked documents that intelligence agencies also have various tools to observe and optimise exploit use. But when an intelligence agency, surveillance company or other entity buys exploits rather than developing them internally, it further complicates the decision-making process around their usage. A buyer can purchase either an exclusive or a non-exclusive exploit. Exclusive purchases mean that the exploit is only sold to one client and are, as a result, pricier; non-exclusive exploits, vice versa, can be sold to multiple clients and are thus cheaper. In the case of non-exclusive sales, the client has to take into consideration the chance that the exploit is sold to one or more other clients, and whether others who buy the exploit will use it discreetly. So it incentivises “use it or lose it” behaviour: the belief that an exploit should be used quickly before it becomes ineffective.
This risk is also non-zero in the case of exclusive sales, as there is no certainty that the broker does not sell the exploit on to other actors, or that the developer doesn’t shop around to multiple brokers. So some surveillance companies, as well as other hacking entities, are buying zero-day exploits on the market, but there are many difficulties associated with buying zero-days. Once again, thank you for the opportunity to join today’s hearing, and I look forward to your questions.
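The collision rate Smeets cites implies a simple survival calculation. The sketch below assumes a constant annual collision rate of 5.7% (the transcript says "almost 6%"; 5.7% is the figure published in the RAND study he refers to), so it is an illustrative model, not a forecast.

```python
# Probability that a stockpiled zero-day has NOT been independently
# discovered by another party, assuming a constant annual collision rate.
annual_collision_rate = 0.057  # ~5.7% per year, per the RAND study cited above

def survival_probability(years: float, rate: float = annual_collision_rate) -> float:
    """Chance the vulnerability is still undiscovered by others after `years`."""
    return (1 - rate) ** years

for years in (1, 3, 5):
    print(years, round(survival_probability(years), 3))
# After 1 year roughly 94% of a stockpile survives; after 5 years about a
# quarter of it has collided with someone else's discovery.
```

That steadily shrinking survival probability is the quantitative pressure behind the "use it or lose it" behaviour described above: the longer a purchased exploit sits unused, the higher the odds someone else burns it first.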

Jeroen Lenaers (Chair): Thank you very much, Dr. Smeets. That was very, very interesting. We move to Mr. Ian Beer. You also have 10 minutes, please.

Ian Beer (White hat hacker, part of Google’s Project Zero team): Thanks. I’m Ian. I’m the tech lead at Google Project Zero. Since 2014, we’ve been a team of a dozen researchers with the mission to make zero-day hard. I think probably the reason I was asked to come here and speak at this hearing today is that I’m one of the few people to have run Pegasus on a device, with the crucial difference being that I did it with my knowledge and consent, in order to analyse it; crucially different to the way that some people in this audience may have had this software running on their devices. This was a project in collaboration with Citizen Lab and Apple’s security team to take the first captured sample of a real-world zero-click exploit. A big team of us took it apart, analysed exactly how it worked, and published a detailed write-up. And in that write-up, I described this exploit as a weapon against which there is no defence. You cannot know that you have been targeted, and this is a pretty scary thing. So I’m really happy to see the implications of the proliferation of this kind of technology being debated and discussed. This is something that we have tried to raise awareness of since the founding of our team over eight years ago.

Speaking of proliferation: as Thorsten described, the market started smaller in the 2000s and has shifted to what we see today, where it has gone from a handful of larger state players towards this pay-to-play model, with near-global availability and very low set-up costs in terms of infrastructure requirements. You can simply pay for, or rent, surveillance as a service. Personally, I see this as quite a terrifying, destabilising force with the potential to entrench power, and a pretty clear threat to democracy, which is, I think, exactly why it’s being discussed here, so close to home in Europe. For me, this is a great opportunity to meet some policy people and view this problem from a different angle, because with the approach of our team, we can often see things through a purely technical lens. The fundamental problem here is one of complexity: the complexity of these computer systems that we have created as a society, but do not yet have the science to comprehend in all the different ways they can go wrong. And it just so happens that the ways they go wrong can lead to the kind of things that we are going to be discussing here.

So I can give some more background about our team and our strategy. We are, as I say, a team of around a dozen vulnerability researchers, looking in a technical way for the same root-cause zero-day vulnerabilities that were used, for example, in Pegasus. The key difference with our approach is that after we find a vulnerability, we work with the affected vendor to get the vulnerability fixed and remediated, and we give them a maximum window of 90 days, from when we report the vulnerability to them to when we expect to see users no longer being affected by it. One of the primary goals here is also to attempt to prevent divergence between the public and private state of the art. When I worked with Citizen Lab to analyse the deep technical details of how Pegasus worked, we were only able to do this because of our decades of experience doing the same kind of offensive security research, albeit for positive effect. So we are trying to show the public what is possible, ideally in a controlled setting, unlike Pegasus, where this was happening in the wild. To give a scale of numbers: over the eight years, we have found and reported over 2,000 zero-day vulnerabilities. And as Max discussed, it’s very hard to get concrete data on the collision rate, but anecdotally, from talking to other people in this market, we would estimate that around half of the vulnerabilities we find collide with capability either known to attackers or for sale on the zero-day market. Beyond this, in the last few years we have also tried to systematically track all instances of in-the-wild exploitation. It’s important to emphasise here that every instance of in-the-wild exploitation that we discover and can analyse is a failure case for the attacker. This again comes back to one of the scary things that I described earlier, which is that you do not know that you have been attacked or that you are being monitored. It’s only attackers’ failures that enable you to collect some data.
So this data is very noisy, but it at least gives us some baseline to track: at least this much is happening. And to put some concrete numbers there: last year, 2021, we tracked 58 in-the-wild zero-day exploits. Shockingly, about half of the actual root-cause vulnerabilities were variants of things which the industry already knew about and could perhaps have done a better job of fixing. Beyond that, when we get the opportunity, we work with civil rights groups like Citizen Lab or Amnesty to analyse and do a deep dive into the handful of in-the-wild exploits that are captured. Two of the most prominent cases that I personally have been involved with: in 2019, we acquired a cache of capability that was being used by China to target the Uyghur and Tibetan minority populations. We estimated that this had been used over a span of probably at least two years, with over a dozen zero-day vulnerabilities. All that was required for you to be targeted was to visit a community website, and there was, again, nothing you could do. You were on the website, it used zero-day vulnerabilities to compromise you and then installed spyware to track your movements, your photos, all of your communications. And then, as I say, last year we had this collaboration with Citizen Lab and also with Apple to take apart at least the initial zero-click infection vector for Pegasus, figure out how it worked, and publish the first documentation that enables us to speak about facts about how zero-click exploitation really happens, rather than discussions of what might theoretically be possible. Thank you.

Jeroen Lenaers (Chair): Thank you very much, Mr. Beer, and our last colleague to join us will be presenting remotely: Dr. Mailyn Fidler from the University of Nebraska. You also have the floor for 10 minutes, and I would like to ask the colleagues who would like to take part in the Q&A session to indicate this to us, so that we can already make up a speaker’s list. But first, we’ll listen to Dr. Mailyn Fidler from the University of Nebraska. You have 10 minutes.

Dr. Mailyn Fidler (University of Nebraska): Hello, everyone. I’m Professor Mailyn Fidler. I’m a law professor at the University of Nebraska Law School. Thank you so much for having me here today. So I first wrote on this subject ten years ago, when I published an overview of the possibilities for international regulation of the global trade in zero-day vulnerabilities. So I appreciate the opportunity to take this testimony as an occasion to reflect on those possibilities over the last ten years. I should also say it is 3 a.m. here, so I apologise if I am not as sharp as some of the other folks in the room this morning. So my key argument for today is that the outlook for legal regulation of the trade in zero-day vulnerabilities is not bright, I am sorry to say, and that outlook is not bright for two reasons. First, because of global political dynamics. These dynamics are not something specific to the global zero-day trade; multilateral cooperation is very hard right now. Second, even domestically, zero-days touch on one of the most sensitive and difficult areas to regulate: intelligence and military operations. And because of the broader geopolitical situation, there are very high domestic incentives not to restrict the use of zero-days at this point in time. So it’s hard to say anything other than that the window for any kind of serious regulation, I think, has closed. So with that rather gloomy outlook, let’s first take a look at the last ten years. In the last ten years, we have essentially experienced an abject failure to regulate the global trade in zero-day vulnerabilities. What have we seen? As some of my colleagues have mentioned, we’ve seen growth in bug bounties. We have also seen some voluntary self-regulation by particular governments, in the form of the establishment of vulnerabilities equities processes, which govern when and how governments themselves disclose or exploit zero-days.
We’ve seen some unilateral export control measures, but that’s about it. So we have not seen the global multilateral mechanisms that are best positioned for regulation of this tool succeed in curtailing this trade. As I mentioned, the broader geopolitical situation, including Russia’s activities, makes that difficult. There has also been substantial disagreement amongst like-minded parties, which has also made it difficult. We’ve also seen a lack of willingness of European and other countries who benefit from this trade to curtail it. There is certainly no coalition of the willing, within one’s own borders or outside of them. Now, all of that said, I am sure the committee is more interested in hearing about possible solutions than about the failures of the last ten years. So again, I won’t sugarcoat this: I think this is a difficult path to tread in terms of legal regulation. I’m going to give a few suggestions, but let me first outline the key problems going forward. There are two essential problems facing EU member states with respect to zero-day vulnerabilities. One is the participation of EU companies in the global trade in zero-day vulnerabilities. That’s problem one.

And problem two is the involvement of EU member state governments themselves in this process. These two particular problems are hard for a couple of reasons. Again, lack of political will to regulate is the first. Countries, especially in a time of increasing geopolitical uncertainty, are unwilling to buy into anything that could constrain their opportunities for action. And the second is, again, the deep involvement of intelligence agencies, and sometimes law enforcement apparatuses, in the zero-day trade. This is tricky because historically that’s precisely what bodies like the European Union are not best positioned to regulate; the bargain that has been struck is that this is best left to countries themselves. So again, those difficulties are on the table. What are our options in terms of stemming the participation of EU companies in this trade, going from the broadest down? The EU can encourage action in multilateral forums that EU member states are also part of, and that’s going to be hard for some of the reasons I mentioned above. Perhaps a more viable option, but still hard, is an EU-only regulatory approach: honing in on the outstanding problems that this committee and others have already identified in the EU recast dual-use regulation. Working with an existing tool is probably going to be more politically efficient in terms of using resources. The EU could also push to revise the Budapest Convention on Cybercrime to strengthen the requirements and specificity around domestic legislation prohibiting trade in these kinds of tools. Again, a key difficulty with any of these options is that this is a sensitive area in terms of cooperation, sitting at the heart of the most sensitive kinds of intelligence and cyberwar operations. So lastly, I wanted to turn to the second prong of this problem.
So, options for regulating EU member state governments’ own use of these tools. Again, this is tricky, but a couple of things could be a good place to spend resources. One is to encourage member states to develop vulnerabilities equities processes. This will likely be seen as a weak solution; it’s not going to do a ton in terms of curtailing member state government use of these tools. That said, some procedure is better than none. Also good would be to encourage efforts, including by civil society, to get member states to pass domestic intelligence reform. This is one of the only ways to actually get states to restrict the types of internal government actors deploying spyware and the circumstances in which they do so. So this might sound like a hard road that I am outlining; I would agree. I do think the outlook for regulating the trade in and use of zero-day vulnerabilities is very dim. In some ways, the genie is already out of the bottle. That said, I do think there are a few avenues worth focussing on, which I have mentioned. Thank you.

Jeroen Lenaers (Chair): Thank you, Dr Fidler. And let me apologise to you for making you participate in our committee meeting at 3:00 at night. I don’t think it in any way impacted your sharpness, because it was a very interesting contribution to listen to. We have quite some time, so I would like to take the questions of members individually and then give each of the panellists the opportunity to answer them, in order to get the most added value from this exchange. We will start with our rapporteur, Sophie in ’t Veld.

Sophie in 't Veld (Renew): Yes, thank you, Chair. I have to admit that after this panel, I'm beginning to get a feeling of despair and hopelessness, to be honest. We seem to be in a situation where both governments, or government authorities, and criminals share the same interest, namely in maintaining the trade in vulnerabilities. And that leaves citizens out in the cold, and it certainly leaves democracy out in the cold.

Now a few random questions. First of all, what could be done technically? Because we have to stretch our brains in all directions to more or less understand the technical stuff that you're talking about. But are there technical tools that we could have to, let's say, detect spyware or malware before it is actually placed, before the infection takes place? I was struck by the remark of Mr. Schröder, who says we spend a lot of money on missile defence systems intercepting missiles, but we make very little effort to set up systems that detect this kind of spyware. I think that's a very good comparison. So are there technical means? Could we build a kind of shield?

Maybe to Mr. Smeets in particular: do you think there are situations where stockpiling by the authorities is justified? Because we always get this argument: yes, but ultimately there may be situations where, you know, it's necessary for our security. But as we've also just heard, this whole argument seems to make us more vulnerable and not more secure. But would there be an argument for certain situations whereby it's justified? Then to Professor Fidler, and I would like to thank you profusely for getting up at 3:00 in the morning, I'm sorry we did that to you: you said we could use the Budapest Convention. We've just had a heated debate on that this week. But I would be interested to hear how you see we can use the Budapest Convention to regulate this. And then finally, the question to Mr. Schröder: this idea of setting up a fund and giving a bounty to people who report vulnerabilities, that might be an interesting idea. So those would be my questions to all of you, basically.

Jeroen Lenaers (Chair): And I suggest also we take the answers in the same order. Yeah.

Sophie in 't Veld (Renew): One more. Just one last question, because I don't know if you've had the opportunity to have a look at the preliminary, let's say, recommendations that we've made, and whether there's anything that you would add, change, eliminate, etc.

Jeroen Lenaers (Chair): Thank you, Sophie. We’ll take the same order of speakers. So we start with Mr. Schröder.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): Sorry, I'm afraid I didn't get all of the questions which were posed.

Sophie in 't Veld (Renew): But if you wish to answer the other questions, that's also fine. I'm interested.

Jeroen Lenaers (Chair): Just quickly to recap. The question is about what technical possibilities we have, also with your remark on the money spent on missile interception systems: if you did that with this technology, could you also have a proper response? There was a question about whether stockpiling could ever be justified. The Budapest Convention was a specific question to Dr. Fidler. And to you, some more elaboration on the bounty fund.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): Now, as to how such a fund could really be set up, that's not something I can say at the drop of a hat. I don't know all the ins and outs of the European Parliament; I don't know how that could be rolled out. I think this idea should primarily be an incentive for us to go into more depth on this issue in the future. I'm quite convinced that the necessary funds for the payment of research could be made available, particularly if you bear in mind that all of these major software manufacturers make a great deal of money out of this.

And of course, if these companies were taxed more, more money would be available. We could compare this idea with an environmental protection fund, which industries and various states are forced to pay into; we could take our cue from that. That means it would be a carrot for researchers to report these vulnerabilities to the manufacturer, and these researchers would then be paid from the revenue in the EU coffers. There are bug bounty programmes out there already, but these are funded by the individual manufacturers, and often we're talking about fairly minimal amounts. Of course the bugs have to be detected in the first place, but that's a different kettle of fish.

Jeroen Lenaers (Chair): Thank you. Doctor Smeets.

Dr. Max Smeets (Senior Researcher ETH Zurich Center for Security Studies): A couple of thoughts from my side. Let me start with the rapporteur's comment about the different actors in this space, both government as well as criminal actors and spyware companies, that share the same interest in keeping this market alive. And that's indeed very much so. We see, though, interesting dynamics as to how purchases can shape the market. You see this particularly around US purchases of zero-days by the CIA, NSA and Cyber Command, in that their buying power and their previously trusted relationships with some developers allow them to push for exclusive sales. As a result, you see many of these smaller companies based directly around Fort Meade, where the agencies basically say: you're selling whatever you're developing only to us, you can't sell it to anyone else. And that tends to be something that many of these smaller firms are okay with because, one, they pay out well.

And two, they often already have a trusted relationship because they worked there for a long, long time. And so, in a way, these agencies can use their market power to sometimes secure exclusive sales that some other government entities are unable to. Of course, as the resources of surveillance companies in the market increase, some of those dynamics may change as well; that's an open question, but an interesting one. Yes, they have a similar interest in keeping the market alive, but equally there are interests in making sure that some of these sales are done exclusively to one entity, and some use their purchasing power to do so. On the technical tools, Ian is in a much better position to answer that, but I think it's worth making one brief point: separating defending against zero-days being used to install malware from what we do against spyware more generally, for whose installation zero-days are used. And it comes back to a point that has been mentioned now, I think, three times: collision rates. This is why the work of Google is so important. If the collision rate is indeed high, then discovering zero-days early on takes away opportunities for a lot of other entities to use them later. If the collision rate, i.e. the chance of me finding one versus you, is very low, well, then you can say the efforts of Google are not that important. The data is sparse on this, and as you've heard now, the numbers range massively, from just a couple of percent collision rate all the way to 50, 60%. So that's, I think, quite an important element there. The question of whether it's justified or not, that's a tricky one.
And, of course, it is very widely debated, and we've seen this particularly in a couple of key cases around FBI investigations into shootings, as to whether the FBI might be able to purchase certain types of capabilities that allow them access to devices, because the case is so noteworthy and important. There is no hard and fast rule here, of course. It then very quickly becomes a question of trade-offs: how important is it to get more information about whatever is on this device, and how can this be used in an investigation? Does that weigh towards a greater good compared to not having access and not keeping the market alive? The question here is then: do you have the right procedures in place to actually make that decision? And oftentimes there aren't any, and that's the key question. The last point, around the bounty payout to researchers. There is quite some interesting public advertisement by exploit brokers as to how much they are paying out to developers for their exploits. Some of them provide very detailed price lists. If you go to the website of Zerodium, you will find a very detailed price list of which types of exploits they're willing to pay up to a certain amount for. And as Thorsten mentioned, that can range from the tens of thousands all the way to two and a half million for an Android exploit. The difficulty, though, and I just want to put in a word of caution: it's an enormous amount of money, and then they sell the exploits on for a lot more, right? This is what they pay the developer; then they sell them on, of course, to the buyer. So the amounts that they actually sell them for are much higher. But again, there are some shaping dynamics here that are worth bearing in mind.
The brokers have an incentive to list quite high prices on their websites, because that draws in researchers to not go initially through a bug bounty programme from a specific vendor, but to use these companies as the first point of contact and say: okay, I can get a relatively small amount from the vendor, or I can go to Zerodium and get these super high amounts. So there is also this kind of intentional signalling dynamic. Whether they actually, in the end, pay that amount, that is not always the case. They say up to 2.5 million, and oftentimes you get a negotiation strategy where some of the power lies with the broker, and actually the developer gets a much lower price, or the conditions are really annoying for them. For instance, the condition that they get paid out over the year, but only if the exploit can still be used across a longer period of time, which the developer itself has no control over. So it's a market where it's sometimes dangerous to look purely at the public prices and think that this is super efficient, because often that's part of their marketing strategy. I'll leave it there for now.

Jeroen Lenaers (Chair): Thank you.

Ian Beer (White hat hacker, part of Google's Project Zero team): Yeah. So first to the question about a defensive shield, this notion of identifying badness. Thorsten mentioned this comparison with a missile defence shield. I think one of the fundamental differences is that in the software world, your shield is just another piece of potentially vulnerable software. And in comparison to the missile defence shield, where maybe you can launch a couple of test missiles to shoot over and see if your system works, in this case the attacker has access to exactly how your shield functions and can test their payloads a million or a billion times against it, figure out the weaknesses and work around them. And this is exactly what top-tier attackers will do against the existing antivirus programmes, which are designed to do exactly what you have described, i.e. identify badness. That's not to say there aren't some things that can be done. So, for example, Amnesty International published a report detailing all of the technical forensic hints that they use when they're given access post facto to a device to figure out: was Pegasus here? But again, these are all just mistakes that the developer made, and that is one player who may make a certain rate of mistakes. That is not to say that there aren't players that do a much better job of tidying up what they left behind. And to the next point, about stockpiling. Something that we've all mentioned here is collision rates being relatively high. The space of possible software vulnerabilities is vast, and we are still figuring out how computer systems go wrong. But one observation we have made is that attackers will take the path of least resistance, and we as defenders can also try to follow them along that path. And this is our insight into why the collision rate is so high.
It's that attackers and defenders can do the same thing to try and find the same vulnerabilities, which then leads to the risk in stockpiling, which is the likelihood that those stockpiled vulnerabilities collide with capabilities currently in use. And you've got to remember to view this from a global scale and a global perspective: it may well be the case that a vulnerability stockpiled within Europe is not used or known by other actors within Europe, but it might be known, in my opinion, in China, in India. We're all running the same set of vulnerable software.

Finally, to the question about bounties. I would draw a line here at using a bounty programme to disincentivize researchers from selling to zero-day brokers; I don't think you can compete monetarily on that. I think the way you do that is through the work of, for example, our team publishing transparently about the impact of zero-day exploitation, and then the individual researchers can come to their own conclusions: do I sell to the shady broker who's telling me they'll give me two and a half million dollars and won't ask too many questions? Well, I've actually read those reports looking into what happened in China, what happened in Poland; maybe I don't want to be involved with that. Trying to compete financially, though, is not feasible. I think the goal of a bounty programme should be something that leads to actual improvement in software security. That means the vendors have to be in a position to, for example, when they receive that piece of information, get the maximum value out of it. So these are the kinds of best practices that we attempt to push vendors towards: fixing the vulnerability promptly, doing a root cause analysis to figure out what is actually the root of the problem, looking for variants across the whole code base, and ideally also sharing that information so that the whole industry can learn from these individual instances of failure and we can all improve security in a meaningful way. Thanks.

Jeroen Lenaers (Chair): Thank you. And Dr. Fidler, on the Budapest Convention, maybe just for your information: we had a vote yesterday here in the House on whether or not to request the opinion of the European Court of Justice on the Budapest Convention and its compatibility with the EU legal framework. There was a majority that did not seek to request such an opinion, but there are some differences of opinion here in the House on the merit of that. So just for your background information.

Dr. Mailyn Fidler (University of Nebraska): I appreciate that. Thank you. I lost Internet there for a second. One of the downsides of working on this is that you're always paranoid that maybe a zero-day attack just happened.

So, on the first question. Thank you, Sophie. Is there ever a good reason for stockpiling zero-day vulnerabilities? I think the case for that is very slim. In terms of governments using these tools domestically, I think that's hardly ever going to have a good argument behind it. The small use case I want to put forward: there is some literature suggesting that allowing governments to use zero-day vulnerabilities in the international context, not within your own borders but in the context of geopolitical conflict, can be a de-escalatory option. There are varying thoughts about this; some folks don't buy that argument. But the argument goes: if you engage in cyber conflict, you might be less likely to engage in traditional conflict. Now, yes, that comes with downsides and security risks, but maybe that's still better than the traditional conflict. That said, I think the case for that is limited to very few high-level actors. In terms of the Budapest Convention, and that's helpful background, thank you: I am not very convinced that the Budapest Convention is going to be a good tool in this sphere. That said, the one place where I see scope for this is potentially, I believe it's Article 6, changing the language there. Article 6 requires the signatories to pass domestic legislation to prohibit the trade in tools that promote cybercrime. Now, zero-day vulnerabilities could be wrapped into that. Currently, they are not thought to fall within that scope, because they are much broader than that. But you could adjust the language a little bit there. That said, the Budapest Convention is not well positioned to address some of the most concerning aspects of zero-days, which is governments' own use of these tools. The Budapest Convention is just not scoped to do that kind of regulation.

On the last question you asked, which is where you should be focussing your time: I think your current conclusion to focus, again, on pushing for provisions in the dual-use export controls is a really good thing. I think shifting the focus to restrictions on commercial spyware is going to be good in some ways, because that skirts the really heavy intelligence, military and government use cases and shifts attention to lower-level police force access to off-the-shelf tools. I think that's a good place to focus energy. Thank you.

Jeroen Lenaers (Chair): Thank you. And then we move to Hannes Heide.

Hannes Heide (Socialists and Democrats): Thank you very much to all of the speakers today. I am particularly interested to hear about the black market: what shapes this market, how contact is made with people, what kind of supply there is on that market, and what measures could be taken to combat it.

And I'd also like to ask: there is the proposal for a moratorium on the use of spyware. So the question is, could this be a disadvantage for security systems? Could Lex Luthor, for example, be happy about such a decision? That's the question I would like to ask.

Jeroen Lenaers (Chair): Okay. Now you have to put yourselves in the position of Lex Luthor and answer these questions. They were addressed to all four of you, so if you don't mind, I'll just start with the same sequence again, and we start with Mr. Schröder.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): Thank you. Well, on the black market and how that works. Let's take a big broker like Zerodium. Often they are open about wanting to buy vulnerabilities; they communicate the prices very clearly, and they are very aggressive at IT security conferences. This is the case for many brokers. This market isn't really regulated. As to payment, I have to say I haven't sold anything on that market, but cryptocurrencies exist in order to be able to pay anonymously online, and this simplifies things. Everything is sold remotely, so contacts are made through chats, through the Internet, through direct contacts, also at relevant security conferences. People who deal with vulnerabilities meet regularly at conferences, and it's possible for researchers just through that to make contact with people. Now, there are institutions managed by states which target certain people engaged in research. This has happened to me personally: I was offered to be included in a research programme and to receive money for specific research.

I'm not sure about the weakening of security systems. Could you perhaps outline why you think that security systems would be weakened?

Hannes Heide (Socialists and Democrats): Well, this is about the proposal for a moratorium on the use of spyware until the legal framework has been concluded. And it's about whether this transitional phase weakens national security or the fight against crime, if we do not have those means available.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): Well, I fear that I'm not able to answer that question in full. As to zero-day vulnerabilities in relation to spyware: I would not say that security systems are being weakened. But I'm not a specialist in law enforcement or security services. Thank you.

Dr. Max Smeets (Senior Researcher ETH Zurich Center for Security Studies): A couple of points from my side. Let me go directly to your first point, around how this black market functions and how contact is made. The first thing to note is the diversity of sellers, and the market has certainly changed over time. There used to be, and it's still online, a terrific presentation by one of the sellers, Charlie Miller, who talks about a market that tended to be still very much individual-based: someone, maybe with a past working for an intelligence agency, now doing this in their free time and selling. This market has changed and has become increasingly sophisticated, with key brokers on the market who buy exploits and then sell them on, and larger companies as well. You will inevitably get to bigger defence companies that will also start to develop their teams and provide not just other potentially military tools but say: hey, we can also provide what they would call military-grade exploits to some of the cyber commands that have been established over the past years. And so the way that contact is made can range quite significantly, from one individual trusted relationship with someone who previously worked at such an organisation, to trusted email lists where, on a monthly or more irregular basis, people who are trusted get an email with some details about what type of system an exploit is for. It will even tell you the percentage of reliability, maybe some notes: I've tested this on these and these devices, it works on these and these platforms quite reliably, it might work on this too, but I don't know. So you get these mailing lists, all the way down to the really public brokers as well, or at least semi-public, in that they are out there at events, tell people explicitly that they are part of this company, and will put things on websites.
So you get quite a range here, and not a single answer. I am also hesitant, similar to the question that the rapporteur asked, to give a very strong statement as to when spyware should or should not be used, and as a result, whether it should ever be used at all or not. The question is, of course, whether the right processes are put in place, and then we can debate on certain use cases whether this was sufficiently followed and important enough. I'm thinking of a Dutch case: was it the right use case there to use spyware that was very powerful to catch, or at least listen in on, one of the biggest international criminals out there? You can make a strong case. And in that case, if the internal services don't have the capacity themselves, that might be a worthy investment, because you cause so much less harm down the road. But how many of those cases truly exist, that's highly debatable.
I wanted to point out one additional thing, if I may, because I feel we have glossed over this: how important are zero-days actually for conducting operations? That's an important question. For much nation-state activity, they're actually not that important; they are super exotic. Rob Joyce, still at the NSA, previously led the Tailored Access Operations unit, the hacking unit of the NSA. He gave a public talk at the Enigma conference where he says that ultimately it is about knowing the adversary's network better than they do themselves. And when you know the adversary's network better than they do themselves, particularly these larger networks, you often find ways to get in that are much less exotic. You will find known ways to get in; there's always something that isn't patched that will allow you access, if you even need it at all. Sometimes social engineering will simply get you in, too. And so there is a danger here: it's not essential for many of these services to use zero-days for all their operations, and there shouldn't be such a conception here. I think that's an important one. Even if you fully address the zero-day market, it doesn't address the great majority of nation-state activity that is going on on a day-to-day basis. It's slightly different for spyware: there, zero-days are used as part of an often larger chain to install spyware on devices.

Now, why is that different? Well, because the attack surface tends to be much smaller. You're not going after large corporate networks or critical infrastructure, often with many layers of unpatched systems, to which you want to gain access; you often have only one chance on one mobile device of one specific person whom you really want to target, and you want to do this in a non-detectable manner. And so the importance of using a zero-day for that increases. There is a greater importance, you could argue, for spyware companies to rely on them. But it is also important to put it in a broader context: not every operation, again, is conducted with zero-days.

Finally, a few points I forgot to mention to the rapporteur, around language in the draft that potentially needs to be changed. Let me briefly point to two here, but there might be a few others. The first one is the question around disclosing in a standardised manner. Standardised is tricky; in simple terms, I would say the language here needs to be "in a responsible manner", but I am not sure about "in a standardised way". And the question around an obligation to disclose is a crucial one: to what degree does it disincentivize people coming into the vulnerability market who are trying to do good, and to what degree may you disincentivize initial exploit research? That is not to say that we shouldn't further incentivise them, but there is a danger of putting too strong language in there that disincentivizes early research into some vulnerabilities that can later be useful down the road.

Jeroen Lenaers (Chair): Thank you. Mr Beer.

Ian Beer (White hat hacker, part of Google's Project Zero team): I'll add a few things to what Max said about the use of zero-days: why we're discussing zero-days versus the broader insecure practices in keeping large enterprise systems up to date. I think the reason we view zero-days as important is that this is the technique that's used against people who are actively trying to defend themselves. This would be a group like the MEPs present here, or politicians, activists. And this gets back to: the zero-day is the weapon against which there is no defence. The second part comes with this change in the business model towards the proliferation of these pay-to-play, surveillance-as-a-service vendors, who are interacting with much smaller states with much less developed intelligence capability. The beauty of the zero-day: Pegasus, for example, the sample that we looked at, targeted using iMessage, so all that was required was your phone number or email address. And it's this ability to offer the targeting of a specific individual without requiring a huge intelligence apparatus to, for example, be able to monitor all Internet traffic in your country or globally and inject exploits that way, which is a huge investment. What pay-to-play says is: give us the phone number and we can inject it globally to target that individual, and we'll do it for $10,000 per extra user. This is the role that zero-days have to play there, whereas Max is completely correct that where I target an institution, there would likely be softer attack surfaces.

Coming to the broader issue around the development or the current state of the market, I think it's also important to have some historical context, and maybe Thorsten and the CCC can talk a bit more to this. In the nineties, which is a little bit before my time, there was not a multi-million-dollar market for this kind of stuff, but there were plenty of people doing it, and these were hobbyists doing it for fun. What we've really seen is a shift from that to enormous defence contractors. If you want to find out who's doing this kind of research, just put some key phrases into searches for European defence job postings and you will find the number of them that are doing this. I would say that there perhaps is an aspect which is tied up with secret payments, crypto, this kind of world, but at least a large part is also done pretty openly. These are legitimate, taxpaying businesses, some of whom are potentially more naive than others. And of course, the key issue here is that once you sell your capability, no matter what contract you made with the institution or entity you sold it to, you lose control completely. There is no way for you to actually audit and follow what happens, despite what NSO might have told you. Briefly, to the moratorium on the use: I would echo the points here that we are more focussed on what impacts we can have on the supply side. I'll leave it at that. Thanks.

Jeroen Lenaers (Chair): Thank you. And then Dr. Fidler as well.

Dr. Mailyn Fidler (University of Nebraska): Okay. Thank you. I'll just add one comment to my colleagues' points, and that is: one argument against regulating zero-day vulnerabilities has been the concern that legitimate security research might be harmed. That has certainly been raised at multiple stages, including at the Wassenaar Arrangement and discussions around it. I have always been of the opinion that there are ways to read these regulations that would carve out enough space for legitimate security research; some of my colleagues might disagree with that. That said, there has been a history of extremely poor drafting in this space that has caused real concerns about chilling legitimate security research. So there is space between those two outcomes, but so far we have not been very good at finding it. Thank you.

Jeroen Lenaers (Chair): Thank you. Róża Thun.

Róża Thun und Hohenstein (Renew): Thank you, Chair. Thank you very much. I have several questions on different issues.

One is: you said that the Spanish authorities used zero-day vulnerabilities against the Catalans. Now, do we have any technical proof that it was really them, the authorities, or who would have done it? Is there any technical proof of that?

I would also like to know: according to your knowledge, is there any reliable method today which could be used for checking our devices, whether they are infected or whether they have not been infected? Because of course the technology develops, and surely there are new systems, and not only our dear old Pegasus.

And maybe the third thing, out of many questions that I have on my list, is the contracts for the purchase of those vulnerabilities. Do you think that they can be established, for example, at the annual ISS World fairs in Prague, I guess? We also have this huge Mobile World Congress in Barcelona. Would those be the meeting places where purchases are made? Thank you very much.

Jeroen Lenaers (Chair): Thank you very much. Maybe also out of respect for all the guests, we’ll take another sequence now. So we’ll start maybe with Dr. Fidler, if you wouldn’t mind.

Dr. Mailyn Fidler (University of Nebraska): Thank you. I think I’m actually going to cede my time to my colleagues. I’m less well positioned to answer those than my colleagues. Thank you.

Jeroen Lenaers (Chair): Thank you, Mr. Beer.

Ian Beer (White hat hacker, part of Google’s Project Zero team):
Yes. When it comes to the technical proof, I can’t talk to that. But what I can point you towards is this report I mentioned earlier from Amnesty International, which details all of the forensic points which they were able to piece together. For example, I don’t know the specifics of this case, but certainly for other cases that they’ve been involved in, these would include things like the phishing message that was sent containing malicious links, which are then stored on the device and can later be analysed, and also just the pattern of behaviour of the device pre- and post-compromise. In terms of reliable methods for checking the devices: these devices are extremely locked down. In fact, Citizen Lab and Amnesty have said in the past that they need things like jailbreaks, which use exploits, in order to be able to even access these forensic traces to determine whether a device was compromised. So, yeah, they would be the people to talk to about how this can be detected after the fact. Regarding ISS World and Mobile World Congress: Mobile World Congress, I think, is much more focussed towards just selling new consumer devices. ISS World is a defence contractor sales event and, as you said before, zero-day vulnerabilities absolutely fall within what they would consider their remit, building weapons. And yeah, this would be one of the places where it would be discussed.
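[Editor’s note: the indicator-matching that Beer describes, checking stored message links against known-bad infrastructure, can be sketched roughly as below. This is not Amnesty’s actual tooling; the domains, record layout, and function names are invented for illustration.]

```python
# Hypothetical sketch of an IOC-style forensic check: match links stored
# in message records against a list of known-bad indicator domains.
# The domains and message records below are invented examples.
from urllib.parse import urlparse

MALICIOUS_DOMAINS = {"example-bad-cdn.net", "free-news-updates.info"}  # invented IOCs

def extract_domain(url: str) -> str:
    """Return the hostname part of a URL, or '' if it has none."""
    return urlparse(url).hostname or ""

def scan_messages(messages):
    """Return (message id, link) pairs whose link hits a known-bad domain."""
    hits = []
    for msg in messages:
        for link in msg.get("links", []):
            if extract_domain(link) in MALICIOUS_DOMAINS:
                hits.append((msg["id"], link))
    return hits

messages = [
    {"id": 1, "links": ["https://example.org/article"]},
    {"id": 2, "links": ["https://free-news-updates.info/x?y=1"]},
]
print(scan_messages(messages))  # [(2, 'https://free-news-updates.info/x?y=1')]
```

Real forensic tools work on extracted device backups rather than dictionaries, but the matching step is conceptually this simple, which is why the hard part is getting access to the traces at all.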

Jeroen Lenaers (Chair): Doctor Smeets.

Dr. Max Smeets (Senior Researcher ETH Zurich Center for Security Studies): Two points for me, on the last one. And it goes also to the question that was previously asked: how do you meet, where do you make contact? Indeed, that fair may be one of the more public cases, where you see also sponsorship and so on that is then publicly showcased, and that gives you a sense that these companies are selling.

But again, I would like to point out that it happens on a real variety of fora. So it’s not just this one stop in Prague, or in a country in the Middle East or somewhere else, where they all come, everyone exchanges business cards and that’s where the deals are made. Much of it happens at smaller events. You will find certain people at the smaller hacker conferences that are organised in a place somewhere in Berlin or in Florida or you name it, where trust is built and sometimes where relationships are developed over the years. So it’s not the case, particularly in the exploit trade, that much of this happens at the highly public conferences, the ones people have heard of.

The colleague to my right, as you mentioned, is better positioned to answer the Spanish question.

And the reliability question was already answered as well. Of course, with that caveat still: don’t believe any company that says they have 100% proof that they are able to do so. It’s important to keep that in mind. And with devices, if you have a reliable access method, you can also just infect, leave and reinfect. You don’t need to linger on certain devices. If you truly believe that over a period you can have reliable access to a device, you infect, you leave, you reinfect, which makes it a lot harder for forensics to then come after you at a given point in time.

Jeroen Lenaers (Chair): Thank you, Mr. Schröder.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): Thanks very much. Technical evidence for access by the Spanish authorities, that was the first question. Attributing attacks is always very tricky. Quite often you cannot furnish 100% proof. There have been numerous pointers, but these are still a bit up in the air. I think there’s a report from Citizen Lab on this issue. Maybe you know about that incident, I don’t know.

Maybe my colleague to the left knows about the Spanish incident. As to whether there’s a reliable method to check whether any particular device has been subject to a successful incursion: that’s not 100% certain. Presently we can pinpoint attacks using present methods. The colleague next to me mentioned various methods of getting rid of evidence once you’ve infected a device, so you can in fact leave and reinfect without leaving traces. We don’t know what fresh incursion technology will be developed in the coming years, so we don’t yet know whether it will be feasible to automatically check devices or regularly scan them. Maybe the technical methods there are quite limited.

On the third point, where business is conducted, what’s the venue for this? There are a host of small IT conferences, very granular conferences catering to a limited audience. These conferences deal with reverse engineering and finding vulnerabilities. They are very small scale and dotted around the world. As Max already said, contact does occur there. You might make an appointment in a hotel and go out for food or whatever. That’s the general approach to forging contacts with people you do not yet know.

Jeroen Lenaers (Chair): Dr. Smeets, you had a small addition.

Dr. Max Smeets (Senior Researcher ETH Zurich Center for Security Studies): Just one add-on to the attribution question. The best way to conceptualise it is a little bit like a Sherlock Holmes type of process, right? In attribution, like in any Sherlock Holmes type of investigation, you will go and find the fingerprints that were on the knife, perhaps the motive as well. You follow the daily patterns of certain people, and if you collect a lot of information, you will ultimately come to a conclusion, with either high or low reliability and probability, as to who the suspected actor behind this was. When it comes to much of the research on spyware, a lot of puzzle pieces have been collected and actively tracked. And of course, we have to commend again the work of Citizen Lab in particular, who have made such a meticulous effort to collect as many puzzle pieces as possible, to give us the highest level of confidence to attribute many of these activities, in their case particularly around NSO Group’s use of Pegasus. That gives us often a higher level of confidence that this specific spyware was used by a given actor. So that’s a way to conceptualise attribution. And in many of the cases, again, we do have a relatively high level of confidence, because the puzzle pieces that were there have also been publicly showcased.
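[Editor’s note: the “puzzle piece” process Smeets describes can be caricatured as weighted indicator matching. The indicators, weights, and thresholds below are invented for illustration; real attribution is analyst judgment, not a formula.]

```python
# Toy sketch of puzzle-piece attribution: each matched indicator class
# adds weight, and the total maps to a confidence label.
# All indicator names and weights are invented examples.
INDICATORS = {
    "known_infrastructure": 3,  # C2 domains seen in earlier campaigns
    "code_reuse": 2,            # shared exploit or implant code
    "victimology": 1,           # targets fit the actor's known interests
    "operating_hours": 1,       # activity matches a time-zone pattern
}

def attribution_confidence(matched):
    """Map a set of matched indicator names to a confidence label."""
    score = sum(INDICATORS[name] for name in matched)
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(attribution_confidence({"known_infrastructure", "code_reuse"}))  # high
print(attribution_confidence({"victimology"}))                         # low
```

The point of the sketch is the shape of the reasoning: no single puzzle piece proves anything, but accumulated independent indicators push the conclusion towards high or low confidence.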

Jeroen Lenaers (Chair): Thank you and Mr Schröder has something to add to that.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): Thanks. Just briefly, I wanted to add something, because the colleague really hit the nail on the head describing this as a puzzle. Attribution is something which is highly individual. There are no particular tools you can use to discern whether a sample comes from a particular manufacturer or government. I can only confirm that from analysing the FinFisher FinSpy trojan. I worked together with the CCC and Linus Neumann. We carried out a granular analysis of dozens of samples and tried to tease out who was behind this, given that from a certain time export directives and laws applied in Germany and in the EU, and we had to establish whether FinFisher had continued to deliver exports to Turkey after the introduction of this directive. We tried to conduct this analysis, but it was really baffling, a very difficult puzzle. It’s hard to determine where something comes from by looking at the technology used in the various items of spyware and exploits. Sometimes you can come up with very detailed information, but nothing can be proven 100%, referring to the issue of the Spanish authorities.

Jeroen Lenaers (Chair): Saskia Bricmont.

Saskia Bricmont (Greens):
Thank you very much. And thank you also for allowing us to try to understand this completely unknown universe. I have the impression we just got a little piece of it today, which is already valuable. But it’s also, I think, difficult for us to find the right way to legislate on this. And listening to you, my question would be how to regulate. I’ve also heard something striking to me, which is, for instance, the pressure that researchers face when they try to reveal vulnerabilities to governments. Is this all also confirming the work we’ve done previously with other stakeholders, that we are in a system of surveillance where unlawful behaviours prevail?

And so I would like to know first, whether you have been a victim at your level of such pressure, or even of spying or attempts to be surveilled.

And secondly, on how to regulate, starting from your point on the shield: would an agency, a new agency on cybersecurity with a specialisation in zero-day vulnerabilities, help to remunerate the researchers, to have their work recognised and considered not as a commercial tool but as a service in the public interest? It could, I would say, ensure control, having a quick system of alerts if vulnerabilities are uncovered, and a quick reaction. And also, would it help to have sanctions to apply to producers, and an obligation for them to repair the vulnerabilities within certain time limits? Because otherwise I don’t know how we can come out of this complete black-market situation. So I don’t know if these ideas are interesting or not; I just wanted to dig into the possibilities that we have to regulate the sector. Sorry for my voice, I’m really trying to force it.

Mr. Smeets, you talked about an instrument to optimise protection. Which would be that instrument, or the instruments you are thinking of?

Mr. Beer, you seem more sceptical about the possibilities to really fix the vulnerabilities. What are your thoughts on this kind of centre I’ve just been talking about? And when you say the sector knows the vulnerabilities and all their variants, usually very clearly: can you explain a little bit further who the sector is? Because you said the sector should fix the vulnerabilities faster, but it remains unclear to me who the sector is and who can be addressed.

I would also like you to come back to the system used against the Uyghurs and compare it to Pegasus or equivalent spyware, to understand the differences. And to you or the other speakers: do you know what the volume of exploits being sold is, to get an idea of the scale? And amongst the actors buying on these markets, do you know how many states are clients, and which ones, are they known?

And finally, I have a general remark, because I’m relieved that many of you refer to or value the work of Citizen Lab. Throughout this enquiry work we noticed that some actors, state actors but others too, tried to discredit the work they’ve done, although it’s thanks to that work that this whole scandal has been uncovered in many countries. So I think it’s important to mention it, since next week we’ll hear a speaker that is mobilising against them and actually even saying this is all a lie and that it never existed. So if you have any reaction on that, it’s welcome. Thank you for your answers.

Jeroen Lenaers (Chair): Thank you very much. There were concrete questions to Mr. Smeets and Mr. Beer, so maybe we start with them, and then of course Mr. Schröder and Ms. Fidler can also add to that. Mr. Smeets.

Dr. Max Smeets (Senior Researcher ETH Zurich Center for Security Studies): Thank you. I’m not entirely sure what the question on an instrument of full protection referred to. Just to clarify, what I said is that we shouldn’t trust companies that may advertise that they have a perfectly reliable instrument of protection, or a way to say with 100% certainty whether your device is infected or not. That was my claim. On what should be done, and the need for a potential EU agency to investigate either zero-days specifically or broader spyware: well, you’ve mentioned it yourself. It has come up in this hearing a couple of times, and I’m sure many times in many hearings, and it was key to starting this investigation: Citizen Lab. Why are there not more Citizen Labs? Seriously, if you’re thinking about putting money in the right places, don’t put it at an EU institution. Either put it at a university, a university that is willing to actually also legally back you in case these organisations come after you and says, we will take on this legal case, or set it up as a separate institution, but make sure it also has the legal resources to defend itself. There should be more Citizen Labs. It’s something that Ron Deibert from Citizen Lab has said many times. And if I would make maybe one case, this is where I would put much of my money, and really think through the resources to do something like that, not just in Toronto but also in the EU, and perhaps not just at one place: let many flowers bloom and see how this then further develops. This is not to say that there is no terrific research done in a variety of institutes across Europe, whether in Prague or other places that do good technical analysis. But I do think we still see nothing of this kind in Europe today, and the EU can certainly help with funding to make some of this possible. So that would be a big one that I would focus on.
That then also drives a wide range of different aspects, whether in the more internal attribution sphere and education, but also in educating the public.

And of course there is what we’ve already referred to, the more public attribution elements that we keep coming back to, where it is important to do this reporting, not just to name and shame those who are behind it, but of course also to disrupt their activities by giving deep insights into their working methods, as people such as Ian have done as well. It also makes it harder to use those same methods in the future. And so what you want is this continuous cycle of disruption, even assuming that there is a constant incentive for this activity: a constant cycle of disruption that forces them to go back to the drawing board each and every time, increasing their cost, increasing their in-house development, increasing their pressure to go to the market, and making it harder as a result to conduct their activities.

The last questions you asked are perhaps the hardest ones. I haven’t seen any reliable numbers on the exact volume of zero-days that are being sold; I may have missed them. Some have estimated how many are bought by some entities, but it’s a really tricky one. An equally tricky one is which states are actually buying them. We’ve seen significant developments over the past decade. Clearly, some states have already been doing it for decades, right, the intelligence agencies, of which, because of leaks, we know in particular the budgets of some U.S. intel agencies. We haven’t seen similar leaks in Europe or in other parts of the world, which makes it hard to know their budgets. What we have seen is a development where, beyond law enforcement and the intel agencies, over the past decade an increasing number of countries have also been establishing military cyber commands. The public record, to my knowledge, is that at least 40 to 50 countries have publicly declared they have established a cyber command. However, it’s an open question to what degree this institutional development also leads to a further uptick in state actors now acting on this market. You could argue that many of these military entities, at least as I’ve argued as well, have been established but are not yet sufficiently operationalised and might not yet have the capacity or willingness to go onto this market. But as to the exact number of states, and which state entities are there, it’s hard to give any concrete figures; we can only guess from the institutionalisation that we have seen, and there we have to be very cautious.

Jeroen Lenaers (Chair): Thank you. Mr Beer.

Ian Beer (White hat hacker, part of Google’s Project Zero team): There were a lot of questions, but I will try to get to as many as I can. So, in terms of things that the EU can do: as Max just discussed, we’ve seen the incredible impact of the work Citizen Lab have done to bring light to this dark world. And on a certain level, perhaps many of you here see it as strange that they have to come to Google for lots of the deep technical assistance which then goes beyond what they are able to provide. So, absolutely, from our founding we have wanted to see more teams like ours, be they at government level or at other companies, trying to help shine some light on this world through the deep technical work that needs to be done. That’s not to say that most European governments don’t have nominally defensive security research organisations; they probably do. However, they are all fundamentally tied to that state’s offensive capabilities. We haven’t touched on the topic of vulnerabilities equities processes, but as soon as you get into this world, the dividing line is no longer there. Whereas you can come to Project Zero and we will dive in and publish what we find. So it would be incredible to see a similar technical group with centralised European funding. I think that would be brilliant.

And you asked who, or how can I better define, the sector? Here I’m specifically referring to the software vendors. All of the cases that we’ve been discussing so far boil down to vulnerabilities in software, mistakes in software, written either collaboratively in an open-source fashion or closed source within a company. It’s this group of software vendors that I would see as the sector. And what we try to advocate for would be codification of best practises. We, for example, pioneered this 90-day disclosure deadline. Whenever we report a vulnerability, we tell the affected vendor, the affected software author: we think this vulnerability is sufficiently important that users should be patched within 90 days. If it takes longer than that, then we see it as our responsibility to tell those users that it took longer than this, and you can make your own decision whether you want to put pressure on your vendor or avoid use of this software, and so on.
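[Editor’s note: the 90-day policy Beer describes amounts to simple date arithmetic, sketched below. The function names are invented; the real Project Zero policy has additional grace periods and exceptions not modelled here.]

```python
# Minimal sketch of a 90-day disclosure deadline: given the date a
# vulnerability was reported to the vendor, compute the date after
# which details are published if no patch has shipped.
from datetime import date, timedelta

DISCLOSURE_WINDOW = timedelta(days=90)

def disclosure_deadline(reported: date) -> date:
    """Date after which the reporter considers publication justified."""
    return reported + DISCLOSURE_WINDOW

def should_publish(reported: date, today: date, patched: bool) -> bool:
    """Publish only if unpatched and the 90-day window has elapsed."""
    return not patched and today > disclosure_deadline(reported)

reported = date(2022, 1, 10)
print(disclosure_deadline(reported))                      # 2022-04-10
print(should_publish(reported, date(2022, 5, 1), False))  # True
```

The deadline’s value lies less in the arithmetic than in its predictability: every vendor knows in advance exactly when silence stops being an option.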

You mentioned the number of states or governments around the world. Frankly, I would be shocked if you could find an example of a nation state that has never used this type of technology. I mean, we’ve discussed here the Hacking Team leaks; Hacking Team sold to the local police force where I live. This proliferates well beyond just top-level governments; it’s used very broadly across the whole world. Does Pegasus exist? Yes. I have a copy of the exploits on my laptop if anyone wants to debate me on the existence of that. Again, the volume of exploits sold: everything that we know about this market, certainly about the use of these exploits, is information gleaned from failure cases. It’s meant to be a secretive thing; you’re not supposed to know that you’re being targeted. This is why we try to collate all of these failure cases, to at least, as I mentioned earlier, give us a baseline level. Last year, we saw evidence of zero-days that were detected and then disclosed by vendors 58 times. Pushing vendors towards being more transparent about sharing this information when they know it, when they know, for example, that their software has been targeted, is something that we’ve also been advocating for, again trying to move the debate forward and bring some facts to the discussion. To give a feeling for just how widespread these things are: last week alone, our team was involved in the analysis of five new in-the-wild exploits. So this is an ongoing, ever-present threat.

You asked if I could say anything about the distinction in the level of sophistication between the attack we saw in 2019 by China against Uyghur and Tibetan minority populations versus Pegasus. Certainly, Pegasus is designed to be as stealthy as possible. It is never meant to be detected, and a lot of engineering effort has been put into that. This is why there are so few samples that you’re able to actually analyse. We have one that Citizen Lab was able to capture; there was another one in 2016; that’s it. Everything else is just these forensic traces which, as Max discussed earlier, you cannot rely on. You cannot definitively say: this is here, therefore they must have been infected, or they were never infected. The sophistication of the exploits we saw used in China was certainly lower, but this does not diminish that threat. What we saw, in fact, was an almost production-line ability: they were formed of chains of multiple exploits, and we were able to track the development of this chain over a period of two years. One vulnerability would get fixed, for example by colliding with the vulnerability research my team does. They were then able to find or acquire a new vulnerability and reinsert it. This was all done pretty quickly, and as the vulnerabilities got found, they just replaced them with another one. Yeah.

Also, you asked: have we ever been victims? And the answer is yes. TAG, Google’s Threat Analysis Group, published a blog post earlier this year alerting security researchers, vulnerability researchers like myself, that we are being targeted. They list in the blog post various fake Twitter accounts that were interacting with the security researcher community, including people legitimately participating in bug bounty programmes, to build up a rapport in order to then get them to visit a compromised website from which they would try to hack those researchers. And yes, I have been through the process of realising I interacted with this group, which later turned out to be a front for a group associated with North Korea. I mean, what do I do? I was in the fortunate position of being able to work with Google’s team to deal with this, and we tried to figure out: was I compromised or not? But certainly, it’s an experience to even go through that emotional process of: I’ve potentially just been opened up completely to the North Korean government. It’s not a nice place to be. Thanks.

Jeroen Lenaers (Chair): Thank you, Mr Schröder.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): On the question of whether we can regulate the zero-day market: well, the best way to regulate it would be if it were banned. To prevent the market from being used, it needs to be looked at very closely. This is why I have made the proposal for some kind of protective shield against vulnerabilities. There was a slight misunderstanding: I don’t mean a technical solution for this kind of protective shield, because that wouldn’t work. What I’m talking about is an organisational measure. And a significant part of that measure is that researchers would receive money through EU funds, and that money needs to be spent in a way that prevents this market from being used like that. A researcher may find a very valuable vulnerability, and if we were able to do this, it would prevent the researcher from then turning to a broker. If we were to implement such a protective shield, this would have long-lasting impacts on the market: prices would get more expensive for private firms. And if the prices do get higher, then this may mean that certain states, certain organisations, would not be able to afford it. Now, another issue: if these vulnerabilities are published, the pressure on researchers comes from the software manufacturers, not from the governments that want them. There are cases of pressure being exerted by governments, but most of the pressure on researchers comes from the software manufacturers. They want to protect their reputation, so they want to silence the researchers to prevent information on vulnerabilities being published. That is one particularly important reason: some software manufacturers simply have no idea how to deal with researchers that carry out what you could call quality assurance. In Germany, there is a federal office for information security.
It has trained software manufacturers in Germany: if something comes along, some vulnerability, then deal with it properly. I can say that my company has often fallen victim to such pressure, not from governments but from software manufacturers. Often we have communicated very important vulnerabilities to these people. Whether we’ve been victims of spyware, I can’t really say. I hope not. I think a successful attack would be quite difficult to notice, or perhaps noticed too late. Now, when it comes to this protective shield: this would be a message to researchers that you don’t need to fear legal issues and that you could receive money from the EU budget, so that they do not sell these vulnerabilities to the bad actors.

There’s also the question of whether it would be possible to impose sanctions on software manufacturers and put them under pressure to remedy certain security gaps within a certain period. I don’t think this is necessary, because generally manufacturers have an interest in remedying vulnerabilities. They do this sometimes quite swiftly, sometimes more slowly; it depends. There is Google Project Zero; they have a 90-day deadline before information is published. But I don’t think that we need to have sanctions on that. And when the information is published, this says everything about the software company. On Citizen Lab, I don’t want to add much, but people there work voluntarily. They invest a great deal of time and energy and they do very important work, not just at Google Project Zero. In the CCC, our analyses have often been based on the work of Citizen Lab. They’re doing a great job, as I say, the volunteers and the volunteer researchers. Thank you.

Jeroen Lenaers (Chair): Thank you. Miss Fidler.

Dr. Mailyn Fidler (University of Nebraska): Hello. Hi, I think I’m back on now. I’ll just add quickly one point to my colleagues’ remarks about whether we can regulate this. I would encourage folks to focus in on the question of what it is that we are regulating. There are two interrelated problems here. The first is EU actors engaging in this trade, so selling zero-day vulnerabilities, selling spyware; that’s one problem. The second problem is EU member states using these tools. I think there is more scope for the European Union to regulate the trade aspect of this than member state use, again because the latter runs up against so many issues of national security and intelligence organisations. Thank you.

Jeroen Lenaers (Chair): Thank you very much, Mr. Lebreton.

Gilles Lebreton (Identity and Democracy): Thank you very much, Chairman. I would like to thank all the speakers who have spoken today. This is a very worrying issue, talking about vulnerabilities. I have two questions.

First of all, for Mr. Beer: when it comes to zero-day vulnerabilities, you said something which surprised me. You said it’s a weapon that we cannot protect ourselves against. I’d like you to go into a bit more detail on that very pessimistic view; it really surprised me.

My second question is for Professor Fidler. You said that we could perhaps rely on the Budapest Convention to try to fight against the trade in zero-day vulnerabilities. This is a text which has not been adapted to this purpose, but there are several articles which could apply to this fight: Article 4, which is about data integrity, and Article 5, on threats to the integrity of the system. These could be focussed on the fight against these zero-day vulnerabilities. So I would like to have your view on that. Is this something that we need to look at, in your opinion?

Jeroen Lenaers (Chair): Thank you. So first, Mr. Beer.

Ian Beer (White hat hacker, part of Google’s Project Zero team):
So I can clarify a little my point that I quoted the blog post I wrote where I described Pegasus and indeed this whole class of zero click zero-day vulnerabilities as a weapon you cannot defend against. What I’m really trying to illustrate there is that in that particular moment, there is nothing you can do. Even individuals who take every precaution when they use their digital devices, that means things that is in there. For example, a recommendation in the past, like not clicking suspicious links, keeping your device up to date have no chance of defending against this. There is no visual indication that you’re being attacked in the moment. You do not know. And I would counter that. Admittedly pessimistic view with a longer term optimism, which is to say that it’s not that there is long term nothing that can be done against this. And the work I do on Project Zero is specifically focussed on what can we do long term to raise the cost beyond some certain threshold that we don’t see use of this technology proliferating to the extent that we do today. So there absolutely are steps we can take. The things that we’ve discussed earlier about codifying best practises, disclosure, patching, sharing these best practises across industry, across all the software vendors so that it becomes some. New event. Every in the wild exploit that gets discovered is then something brand new and unexpected versus the situation now where it’s very rare. When we dig in to the root cause of an in the wild exploit that we say that was something we could never have known before. It’s very often the case that we say, Oh, well, actually this is very the root cause. The software problem here is very similar to something that we fixed last year, but we just as an industry of software vendors, didn’t do a good enough job fixing it and ensuring that that fix applied across the whole ecosystem. And not just this one tiny area where this particular in the wild exploit no longer works. 
Like Max discussed earlier, an exploit might be sold to work against one particular version of a device or an operating system. It no longer works there, but with some changes it can be moved: not to the same vulnerability, but one or two lines away in the source code, where the same dangerous construct is still present. So that's how I would answer that question.

Jeroen Lenaers (Chair): Thank you. And Ms Fidler on the Budapest Convention.

Dr. Mailyn Fidler (University of Nebraska): Hello. Hi. Thank you for your question. There have been several questions on the Budapest Convention as an option for regulating this. At this point, I feel like I must have uttered some magic words, because everyone is so interested in the Budapest Convention. I should clarify: I do not think that this is the best option for regulating the topic that we're talking about, particularly because the Budapest Convention is not a tool for regulating government surveillance, national security issues, or governments using these tools in spying and international politics. That said, there is some scope for regulating the trade in, or use of, zero-day vulnerabilities through the Budapest Convention, because they are tools that are used in cybercrime, right? That said, the options that are available through the Budapest Convention are essentially the same things that have been tried through other EU regulations on dual-use exports, which at this point have essentially failed. Right. So I'm not sure why the Budapest Convention would be much more successful than that, given the lack of political will behind the EU dual-use export regulations at this point. That said, it sounds like there is political interest in this option, so perhaps there's something going on here that would allow the Budapest Convention to be more meaningful on this front.

Jeroen Lenaers (Chair): Thank you. Mr. Schröder, Mr Smeets, you have anything to add to these questions? Mr. Smeets.

Dr. Max Smeets (Senior Researcher ETH Zurich Center for Security Studies): Perhaps one brief clarification as to what we're talking about. Ultimately, we have here spyware, and it is malware: something that these organisations want to install on your devices, right. The key question for them is: how can we make sure that this piece of malware gets onto your device in the most reliable way possible, i.e. with the highest chance of success, and in a non-detectable manner, i.e. you do not find out that this is being done and installed. Right. And there are various ways in which you can potentially access that device. One less covert way, perhaps, would be to force a person to, say, please download X, or something like that, physically, in person. There could be other ways of social engineering to actually get this malware onto a device, and there are other ways in which you may exploit a vulnerability on a system. And often you have to exploit multiple vulnerabilities on a system to ultimately gain the access that then allows you to install it. This is where zero-days come in, because they exploit vulnerabilities that are unknown, at least to the vendor. And sometimes you need, again, multiple of them, in a chain, to ultimately get access.

And so there is also a broader question for the committee. Zero-days for installing this malware have been shown to be important, and there are many of them, many unknown vulnerabilities for which a method to exploit them can potentially be found. But there is a question here about what you want to regulate, right? You may want to regulate these methods of access, which have increasingly become integrated with the specific malware: you want to create malware that has integrated access methods. But equally, these are semi-separate issues to discuss. One is the spyware itself on a device, and the other is the access methods. And the question is to what degree you can regulate each, incentivise or disincentivise each, and disrupt each.
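The exploit chains described above can be illustrated with a toy reliability model: a zero-click chain only succeeds if every stage succeeds, so its end-to-end reliability is the product of the per-stage success rates. This is a purely illustrative sketch; the stage names and probabilities below are invented for the example and do not describe any real product.

```python
# Toy model of a multi-stage exploit chain: the whole chain succeeds
# only if every stage succeeds, so overall reliability is the product
# of the individual stage success probabilities.
from math import prod

# Hypothetical per-stage success probabilities (invented numbers).
chain = {
    "initial_memory_corruption": 0.90,
    "sandbox_escape": 0.80,
    "privilege_escalation": 0.85,
}

chain_reliability = prod(chain.values())
print(f"end-to-end success rate: {chain_reliability:.3f}")  # 0.612
```

Note that the whole chain is weaker than its weakest link, which is why patching any single stage, even one or two lines of code, breaks the entire chain and raises the attacker's costs.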

Jeroen Lenaers (Chair): Thank you very much, we move to Mr Puigdemont.

Carles Puigdemont i Casamajó (Non-attached): Thank you very much, Chairman. First of all, thank you very much to the speakers. Some of the questions I had have already been answered.

I'd like to ask Mr. Schröder first of all: would you be able to give us a specific example of the types of threats that researchers have received? You've already answered that question, but could you perhaps give us a little bit more information, a more precise example that has had a chilling effect on the community of researchers trying to deal with these issues?

Before you gave your response to Mr. Lebreton, Mr. Beer, I had a similar question, because I feel that we're almost entering the gates of hell. And citizens may feel hopeless, because we are facing a technological situation with global availability and more and more customers. So I get the impression that we are facing a zero-day pandemic: zero clicks, zero protection. I'd like to focus on that point, because the aim of this committee is to end up with an honest message for citizens, so that people can protect themselves, and so that the next time Mr. Beer comes here, he doesn't have to say there is nothing to be done; by that point, hopefully, we would have started to do things. Is there anything that we can do? We need to think about this, because Professor Fidler said that we are facing failure, a failure to regulate the situation. And there are governments who are benefiting from this software. Should they be the ones who need to reassure us? Will they change, and give up the benefits they get from this situation?

Jeroen Lenaers (Chair): Thank you very much. Then first Mr Schröder on the threats to researchers and then Mr Beer on the gates to hell.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): A specific example of threats. Obviously. Over the past few years, I have sent a few examples of vulnerabilities to software manufacturers, and in many cases my company, and myself as well, were threatened with legal action. We had to contact the legal department. We were threatened as a company, and I bear a certain degree of responsibility for that: if my company had to pay certain compensation, then I would perhaps need to close the company. So as a business, this would be very serious.

More specifically, in one case we were informed what would happen to our customer if legal action was taken against them, because perhaps they hadn't stuck to the conditions of use for a certain product. We are an IT security consultancy company. We communicate IT errors, and we are working in a big grey area, just like Google's Project Zero. We give our information, results and findings to our customers, not to the general public. It's about minimising risk, and about informing the software manufacturer that there are problems. So it is a grey area.

In some countries, it's not even permitted to engage in reverse engineering of software; this is banned in many countries. So it's a grey area. Legally, there are certain conditions under which it is allowed. But as security researchers, we are always under threat, if you like.

In Germany there is the so-called Hackerparagraph, a certain paragraph in the Criminal Code. I don't think it's ever been applied, as far as I know, but it does generate legal uncertainty for researchers. And when it comes to threats: often, copyright is allegedly infringed if we publish something in our report where it says, for example, here is code that we found in the software. Legal experts may then try to find infringements of copyright. Now to specific threats and intimidation.

During my career, I spent a small amount of time developing tools and spyware. Once I broke off contact, left that project and put that work behind me, there were numerous incidents that I can't tie 100% to that, but they definitely do fall into the category of specific threats. There were several strange events. I was contacted through social engineering. Several things happened that I interpret as threats; perhaps they were originally just intended as intimidation. These are specific examples that I can talk about. Other than that, there are many security researchers who openly report what happens to them when they communicate vulnerabilities. Some researchers don't like to communicate that they found even one vulnerability, because they don't want to see damage to their reputation.

So does this have a chilling effect on researchers? I don't think so, but the situation isn't great. Many software manufacturers have good processes for dealing with hackers and with reports of vulnerabilities. But some don't; time and time again, I hear about this. And some people don't want to launch a coordinated disclosure process with the manufacturer. Many companies do this: Google, my company. This is when the software manufacturer receives a security advisory saying that something is not working, so that they can look at the causes and at how it can be prevented in the future. The software manufacturer is then given a certain period of time, generally 90 days, for this to be practical. If the error has been remedied before then, the fix can be given to the public as a patch. And some manufacturers need more time; sometimes the period has been extended to one year, because perhaps it is a very serious vulnerability in a very broadly marketed product. Now, some may say: I will publish this without contacting the manufacturer. That is full disclosure, which existed before, especially in the nineties, when hackers didn't want to deal with the software manufacturers, and the software manufacturers at that time didn't have any experience of how to deal with hackers.
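The coordinated disclosure timeline described above can be sketched as a small calculation: the vendor gets a fixed window (commonly 90 days) from the date of the report, optionally extended for severe bugs in widely deployed products. The function name and defaults below are illustrative assumptions, not any particular organisation's policy.

```python
# Sketch of a coordinated-disclosure deadline: the vendor is given a
# fixed window (commonly 90 days) before the advisory is published,
# optionally extended for severe bugs in widely deployed products.
from datetime import date, timedelta

def disclosure_date(reported: date, window_days: int = 90,
                    extension_days: int = 0) -> date:
    """Earliest date on which the advisory may be published."""
    return reported + timedelta(days=window_days + extension_days)

# Example: a bug reported on 2022-11-24 with the standard 90-day window.
print(disclosure_date(date(2022, 11, 24)))  # 2023-02-22
```

The one-year extension Mr Schröder mentions would correspond to `extension_days=275` on top of the standard 90-day window, pushing publication out to a full year after the report.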

Now, when it comes to a message for citizens about what they can do to protect themselves: I don't think this falls under the scope of what we're doing here. There are organisations for this in Germany and in many other countries. In Germany, there is the Federal Office for Information Security, and its task is to ensure that the economy and citizens are protected when it comes to IT. The aim of the organisation is to raise awareness and to train people, and to ensure that researchers are able to turn to such agencies if they are afraid of legal action being taken against them. I think that the message needs to come from these kinds of organisations, and this is something they are doing relatively well in Germany.

Jeroen Lenaers (Chair): Thank you. Mr. Beer?

Ian Beer (White hat hacker, part of Google’s Project Zero team): Yeah. To the gates of hell. I think one of the goals of our team has been to raise awareness of this issue. So, I mean, on a certain level, I'm really happy to be having this discussion at all in an open forum. In terms of what can be done: we've thought deeply about this, and in terms of what is a short- to medium-term win in this situation, all we can really do is attempt to raise the costs. Whether that helps you as an individual: for everyone here, individuals of geopolitical significance, you might hope that the cost side of the cost-benefit analysis of targeting people like you is prohibitive, but we are still not at that level; we have plenty of evidence that that's not the case. Do I think we're at the level where every European citizen should be concerned going about their day-to-day business? I don't think that's the case. I think there is a bar, and we are trying to raise it in terms of the costs, but it's still far, far too low. I also discussed the costs and benefits, and this dichotomy, or this issue, that almost every state represented here is almost certainly using zero-day vulnerabilities for purposes that they would consider legitimate. And they would make their own cost-benefit analysis here and say: well, the failure cases that we're discussing are actually a very small proportion of the much larger wins that we're having over here; but we've got to keep this whole thing secret, we'll never tell you. So they say: you know, trust us. And so what I would like to see, and what we are seeing here today, is a more open discussion about that. I want society to consciously reach the decision of whether it is okay with that or not. And I would also say that plenty of people involved in this industry on both sides would accuse me of being very naive in my approach to these things, and I would throw that right back at them as well.
I've mentioned before that plenty of these organisations finding and selling this capability are legitimate businesses, registered in European countries and paying taxes in Europe, but they may well be naive in that they sell to someone they trust, or to an organisation that they trust, and then they lose complete control, and it's impossible to say what happens. So, concretely, I think what we can do is, first, all the work that Project Zero does, specifically trying to focus on bug collisions: something concrete where we are pretty confident that it has an impact, that it raises the cost, that it means these vendors creating these exploits have more work to do. Then, on the other side, also pushing for best practices on the software vendors themselves: at a minimum, ensuring that the new code, the new software that's developed, learns from all the mistakes that were known about in the past, so that the new stuff is better than the old. And frankly, we often see that that is not the case.

One thing that also came up here was about security researchers not feeling empowered to publish the details of what they found. And it's the publication and sharing of these details that allow software vendors and software developers to learn from the mistakes others made. But you will very often see large vendors, say, for example, a vendor who had a product that was vulnerable, a security breach: a researcher reached out to the vendor and told them, here's a problem, I've described it, and the vendor will say, okay, well, thank you, you may never tell anyone about this. That fixes that tiny issue for that individual vendor, but doesn't do much to really raise the bar. Bug bounties have also come up a little bit here, and they also have their role to play in sort of controlling the relationship between a security researcher and a vendor, where a security researcher may find themselves tied into some contractual obligation not to share that information, even though, in my opinion, sharing that information more broadly after the vulnerability has been fixed, within a reasonable timeframe, is really important for us all to be able to learn how to make software safer. Project Zero has this 90-day disclosure policy. We are able to enforce it effectively, but we are also part of a very large company, and I have lawyers who will help me when we get threatened not to disclose, which absolutely happens. We would love to see a world where every researcher is able to apply these very reasonable deadlines to vendors. And so this is perhaps an area where you can help.

Jeroen Lenaers (Chair): Thank you very much. I think maybe Mr Smeets and also Ms Fidler would like to add to that, but we have about 17 minutes left, and Katharina Barley also has some questions. So I will first give the floor to Katharina Barley, and then all panellists can still add to any questions that were raised. I'm really sorry, because it's a hugely interesting session, I think one of the most interesting we've had, and normally we don't mind taking extra time, but we have the plenary vote at 12:00. The result would be that you would have nobody here anymore and you'd still be speaking, and that is something I want to avoid. So first Katharina Barley, and then you all have the opportunity to respond to whatever you still feel like responding to.

Katharina Barley (Socialists and Democrats): Thank you very much, and apologies for running in and out; it's crazy days here, so maybe you have already answered these questions and I missed them. In that case, another round of apologies. It's actually that we've heard very much about those players who want to keep the vulnerable points open, but not so much about those, and there are few, who are interested in actually closing them. I was thinking, for example, of Apple. I mean, they are very proud of the security measures that they take, and when we were in Poland, Apple themselves actually informed people whose mobiles were infected by Pegasus. So how do they act? Do they employ people themselves to detect these vulnerabilities? Maybe you can elaborate on that.

And the second one was already answered: that the bad guys always pay more. I think Mr. Beer already elaborated on that.

And the third question was for Professor Fidler. You mentioned that there are attempts by some countries to regulate the whole problem. So is there something like best practices? And again, if it has been answered already, I'll get the information somehow.

Jeroen Lenaers (Chair): Thank you. Can I add a specific question for Mr. Beer, just because you've had your hands on Pegasus and the technology behind it? It's a more technical question, and maybe a very stupid one, but you said there is no defence. If you understand the way it works, can you then, afterwards, also design a defence? And is Pegasus based on one very good vulnerability that they found or developed, or is the technology of Pegasus so good that they can use it with whatever vulnerabilities they manage to find? Is it the vulnerability that has the high quality here, in development or detection, or is it the technology, which can make use of different vulnerabilities? So that we stay on time: if all speakers could take maybe about 4 minutes to answer. Very briefly, Ms Thun.

Róża Thun und Hohenstein (Renew): Just to be sure that I understood: you said that you had a copy of Pegasus. So what is a copy of Pegasus? You copied it? And can you now use it, in fact, against someone? I just wanted to know what it means. Thank you.

Jeroen Lenaers (Chair): Okay. Excellent. So you'll each have about 4 minutes to respond, and then everybody gets out in time for the votes. We start in the same order as before, so Mr Schröder first.

Thorsten Schröder (expert in information security and functioning of spyware, CCC (Chaos Computer Club)): Apple is doing a lot for the security of their products, but of course Apple can't cover all of the glitches in products worldwide. As to the question whether manufacturers like Apple can employ researchers to do this: software companies like Apple and Microsoft do employ researchers, as they can afford it. But it's a very pricey and time-intensive exercise, and there are too few specialists in this area, so it's quite a tricky exercise. Many companies do this, and those that don't are barking up the wrong tree. As a software manufacturer, of course, you can also get outside researchers coming to you, pointing out where the glitch is, and monetary incentives can be offered to such researchers.

Jeroen Lenaers (Chair): Thank you very much. Mr. Smeets.

Dr. Max Smeets (Senior Researcher ETH Zurich Center for Security Studies): Let me add to my remarks by providing perhaps a framework to think about regulation and incentives, and start very briefly with the ecosystem of exploits. There are three types. The first are the zero-days, which exploit unknown vulnerabilities. Here, when we think about regulation and incentives, we can think about making sure that these zero-days are not sold to spyware companies to help them install their spyware; that governments potentially set something up; or that developers are incentivised to provide this to the vendor and not go directly to brokers.

The second type: when an exploit is disclosed but not yet patched, we call these unpatched n-days. So now you have an exploit that has been shown to the vendor, but the vendor hasn't yet developed a patch to make sure that the system is no longer vulnerable to that exploit. Here we discussed a number of different incentives as to what should happen. One of them is the question of how much time these vendors should have to actually patch their systems, and whether there should be some enforcement to patch relatively quickly or not.

The third one we haven't discussed, but it's actually an important one too. Once an unpatched n-day is patched, we call it a patched n-day. And some systems, of course, are still vulnerable to what we call patched n-days: exploits for which a patch has already been developed, and if someone had installed it, they would no longer be vulnerable. But unfortunately, there are many systems still today that do not have the latest software updates, that do not have all the relevant patches installed, and as a result remain vulnerable as well. So there is a different incentive question, right? To what degree do you need to incentivise organisations to patch adequately? So this is just a three-pronged framework to think about the variety of exploits that exist and, as a result, the incentives and disincentives that can potentially be created to improve a market that today is quite unregulated and highly problematic. Thank you.
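The three-pronged framework above can be encoded as a tiny classification, driven by two facts about a vulnerability: is it known to the vendor, and is a patch available? The names below are ours, chosen to match the framework, not any formal taxonomy.

```python
# A minimal encoding of the three-part exploit framework: zero-days,
# unpatched n-days, and patched n-days.
from enum import Enum, auto

class ExploitState(Enum):
    ZERO_DAY = auto()         # vulnerability unknown to the vendor
    UNPATCHED_N_DAY = auto()  # disclosed to the vendor, no fix shipped yet
    PATCHED_N_DAY = auto()    # fix shipped; only unpatched systems at risk

def classify(known_to_vendor: bool, patch_available: bool) -> ExploitState:
    """Map the two defining facts onto one of the three states."""
    if not known_to_vendor:
        return ExploitState.ZERO_DAY
    if patch_available:
        return ExploitState.PATCHED_N_DAY
    return ExploitState.UNPATCHED_N_DAY

print(classify(known_to_vendor=False, patch_available=False))  # ExploitState.ZERO_DAY
```

Each state maps onto a different incentive question from the discussion: keeping zero-days out of the brokered market, enforcing timely fixes for unpatched n-days, and getting organisations to actually install patches for patched n-days.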

Jeroen Lenaers (Chair): Thank you very much.

Ian Beer (White hat hacker, part of Google’s Project Zero team): Yeah, I'll attend to some semantics. I was asked: what is a copy of Pegasus, or indeed, what does this word Pegasus even mean? I would see Pegasus as NSO's brand name for this entire suite of capabilities that they sell to target individual mobile devices: Android phones, iPhones, targeted mostly using phone numbers, email addresses, things like this. So Pegasus is that whole solution. You as a consumer of Pegasus, and I'm not pointing at anyone here, are not exposed to the zero-day vulnerabilities that underlie this capability that you're renting. I gave a talk before where I detailed exactly what you actually get from NSO when you pay them for your contract: they ship you an enormous rack of computers, they ship you modems that connect into the network, and a terminal, and you put in the phone number and you get back the intelligence. That's the level that the customer actually sees. The fact that there are zero-days underneath, and that those zero-days might get patched and others will replace them: they don't see that aspect, it's hidden from them. And I mentioned that I had a copy of Pegasus. What I really mean there is that there was the accusation made that none of this was real, so I can categorically state that at least the exploit that I analysed and published was definitely real, and that an individual or a group of individuals invested a lot of effort into building this thing. The copy aspect, I guess, just implies, as with all software, that it can be trivially copied. So I have a copy for analysis. Would it work against someone now? No, because part of the analysis we did was to figure out what the underlying zero-days were. We found the two zero-day vulnerabilities that we were able to find with the sample we had, and they got patched within seven days after we finished our analysis.
The point was also raised about victim notifications. I think these are something really positive to increase transparency, and really important to driving the debate forwards. If people or organisations know that they have been targeted, or that their users have been targeted, telling those users and telling the world helps us have a more open and frank discussion about this being a real issue. And we can point out that it doesn't affect just one person or just one government, but a remarkably broad number of people across a whole swathe of society. Thanks.

Jeroen Lenaers (Chair): Thank you. And then back to Ms Fidler. It should be around 5:00 now, I guess, in Nebraska, so once again, apologies, but thank you very much. You have the last opportunity of any of the speakers to speak to us. So thank you.

Dr. Mailyn Fidler (University of Nebraska): Thank you. I want to close by refocusing us on one aspect that I think we've neglected a little bit in our discussion. We've talked a lot about the trade in zero-day vulnerabilities, about regulating disclosure and research, and about the use of commercial spyware tools. But we can't solve this problem without addressing the root desire of governments to use zero-day vulnerabilities in intelligence, military, or other kinds of operations. That's really the central driver of this problem, and we can't solve it without looking at that piece of it. I know there's been interest in things like Citizen Lab as a solution. I think another great place to focus is funding civil society on issues of domestic intelligence reform; that's another good place to put resources. Again, it's not going to be as flashy, because it's a broader topic, but I think it could be really good. Another option is to model best practices in terms of what those domestic regulatory frameworks look like. Again, thank you so much for having me.

Jeroen Lenaers (Chair): No, thank you, and thank you to all the speakers. I think it was a very rich exchange of views. Again, as with everything in this committee, the more we learn, the more questions we have. I was about to reinvite Mr. Beer to come and give his talk on Pegasus once more for our committee, but it is recorded, as I have been given to understand, so if we get sent a link, we'll send it around so we can all watch it; I think that would be very helpful. Thank you to all the speakers, and Mr Smeets, you already did it a little bit, but I would also invite you, upon the request of the rapporteur, to have a close look at what has been written down in the report on your area of expertise, and feel free to give us any comments or assessments if you have the time, because I think your expert opinion is very important for the final report that we will deliver in the end. So thank you all very much. Thank you to the members for their attention; I think it has been an excellent session, and I wish you all the best with the votes, which will start any time now. The next meeting is Monday, the 28th of November, from 3:00 to 6:30 pm, by the way. Thank you very much.
