PEGA Committee of Inquiry: Microsoft, Google and Facebook criticize state trojans

On 14 June, representatives of three major tech companies appeared before the spyware committee of inquiry. We are publishing an unofficial transcript of the meeting.

A woman speaks on a panel, the flag of the European Union in the background.
Kaja Ciglic of Microsoft speaks before the PEGA committee of inquiry. – All rights reserved European Parliament

On 14 June, the spyware committee of inquiry heard, among others, representatives of Microsoft, Google and Meta. They all had to answer the parliamentarians’ questions. The focus was on the often complicated web of interests between large companies and EU institutions.

Sophie in ’t Veld:

If you expose that European governments are spying on their own citizens, then you also get into trouble with those same governments, which are legislators and regulators.

There is a video of the meeting, but no official transcript. We are therefore publishing an unofficial one.


  • Date: 2022-06-14
  • Institution: European Parliament
  • Committee: PEGA
  • Chair: Jeroen Lenaers
  • Experts:
    • Panel 1: Kaja Ciglic (Microsoft), Charley Snyder (Google), David Agranovich (Meta)
    • Panel 2: Professor Ross Anderson (Cambridge), Patricia Egger (Proton AG)

  • Links: Hearing, Video
  • Note: This transcript is automated and unofficial; it will contain errors.
  • Editor: Tim Wurster

Big tech and spyware


Panel 1

Jeroen Lenaers (Chair): Good morning, colleagues. Good to see all of you again. We’ve had an intensive schedule with meetings on Thursday, yesterday and today. But I think it also shows the great importance of the work we do and the amount of information that we need to gather in terms of doing our job. So, I’m very happy that we are all joined here together this morning again, both in the room and connected online. We have interpretation in the following languages, and quite a few, I might add: German, English, French, Italian, Dutch, Greek, Spanish, Hungarian, Polish, Slovak, Slovenian, Bulgarian and Romanian.

The first point on our agenda is the adoption of the agenda, but I don’t think there are any problems with that, so that is considered adopted. To not lose any more time, I will jump straight to the first panel. Today we are organizing a hearing on big tech and spyware, and we want to hear from tech companies on their reactions and the measures they have taken when faced with spyware such as Pegasus.

So, in the first panel, we will hear from Kaja Ciglic, if I pronounce that correctly. Kaja Ciglic is senior director on digital diplomacy at Microsoft. We’ll hear from Mr. Charley Snyder, who is the head of cybersecurity policy at Google; Mr. David Agranovich, who is a security policy director at Meta; and Mr. Ivan Krstić, who is the head of security, engineering, and architecture at Apple.

Now, of course, in the second panel we will then have two speakers from a different perspective, with Professor Ross Anderson from Cambridge and Ms. Patricia Egger, who is Security, Risk and Governance Manager at Proton AG. So, for the first panel, I will not take too much time and immediately pass the floor to Ms. Kaja Ciglic, senior director of digital diplomacy at Microsoft. You have the floor for about 10 minutes.

Kaja Ciglic (Microsoft): Thank you. And I’m glad I could speak Slovenian, although I won’t. First, thank you very much, thank you to the chair, and thank you for inviting Microsoft, as well as to the Parliament for taking on this important topic. We are particularly appreciative of the focus on the impact these technologies and tools have on individual rights online, but we also note with growing concern that unscrupulous use of these technologies can have a much broader and inadvertent effect, putting large parts of the ecosystem at risk. And if you permit me, I will spend a few minutes to just explain why. Microsoft has observed trends related to cyber conflict for many years, focusing both on the evolving threat landscape and the political processes designed to keep the grievous attacks in check.

It is clear that over time, what used to be tools in the hands of a small set of sophisticated state actors about a decade or so ago have significantly proliferated in numbers. Moreover, we have reason to believe that at least certain state actors, across democratic and also authoritarian regimes, increasingly outsource some of their activities to groups that can then act on their behalf. We refer to private sector actors operating in this grey market as cyber mercenaries. The types of activities that cyber mercenaries can engage in include breaking into systems or encryption on devices, on computers, on phones or on network infrastructure such as cloud computing, and then surveilling specific targets. Spyware, the main thrust of this inquiry, is therefore one of the tools and practices they engage in. The market cyber mercenaries operate in is opaque, and as a result their clients are difficult to identify, as are the practices they engage in.

Ironically, groups selling malicious tools are very particular about the confidentiality around their products, services, contracting and pricing associated with their offensive tools. Government entities acquiring these capabilities are likely to include intelligence services, police and military, ensuring that procurement contracts are subject to national security restrictions and therefore, by definition, not public. Most of the information we have about the actors involved therefore comes from painstaking and difficult research into these tools, as well as into understanding the victims, who they are and how they were targeted. Organisations such as Citizen Lab at the University of Toronto’s Munk School and the CyberPeace Institute work with victims of such attacks not only to identify the malware, but to help individuals secure their devices in the future. Through their work, we know that some of those targeted include human rights defenders, journalists, politicians and other private citizens, with negative impacts on their human rights.

We saw this in our own research with a group we call Sourgum, which we believe is an Israel-based cyber mercenary that Citizen Lab has identified as a company called Candiru. We believe Sourgum enabled the compromise of over 100 victims. Approximately half of them were found in the Palestinian Authority, with most of the remaining victims located in Israel, Iran, Lebanon, Yemen, Spain, the United Kingdom, Turkey, Armenia and Singapore. These attacks largely targeted our consumer rather than our enterprise services, indicating that Sourgum’s customers were pursuing individuals. Victims include human rights defenders, dissidents, journalists, activists and politicians. We initially started this stream of work after receiving a tip from Citizen Lab about the malware used by Sourgum. The Microsoft Threat Intelligence Centre and Microsoft’s Security Response Centre spent weeks examining the malware, documenting how it works and building protections that can detect and neutralise it. By examining how the group’s customers were delivering the malware to their victims, we saw they were doing this through a chain of exploits, including ones leveraging zero-days that impacted popular browsers as well as our Windows operating system.

Our analysis also established that the spyware could send messages from logged-in email and social media accounts directly on the victim’s computer. This could allow malicious links or messages to be sent directly from a compromised user’s computer, and proving that the compromised user did not send these messages could be quite challenging. Following the discovery, we built protections against the malware into our security products, and we’ve shared these protections with others in the security community so they can protect their customers in turn. We also released updates that, when installed, protect Windows customers from two key exploits. We also, obviously, talked publicly about this case to raise awareness of the security risks, as this case demonstrates very clearly the impact these types of companies’ tools have on the computing ecosystem: when cyber mercenaries exploit a vulnerability in a product or service, they put the entire computing ecosystem at risk. Rather than reporting a vulnerability they have found to the platform that could fix it, they leverage the vulnerability in attacks. When the vulnerabilities are identified publicly, companies are in a race against time to protect their customers before broad-based attacks ensue from cybercriminals or other malicious actors. This is a dangerous and very difficult cycle for both software providers, who need to quickly develop the patches, and consumers, who then need to apply them.

As a founding member of the Cybersecurity Tech Accord, a leading industry alliance bringing together over 150 companies, including Meta, Microsoft has committed to not engaging in offensive operations online. We stand by this commitment and by our human rights responsibility in this space and, beyond that, reaffirm our responsibility to act when we see others violate those rights. This has also led to our filing of an amicus brief in a legal case that WhatsApp brought against the NSO Group. We are committed to working with others in industry and civil society, and with forums like this, to help governments curb this dangerous market. This might include advocating for responsible government action, including around procurement; establishing greater transparency of cyber mercenary business practices; urging all private sector actors, including cyber mercenaries, to respect human rights; as well as determining accountability frameworks, establishing legal standards and pursuing multilateral consequences for particularly egregious acts. I will end here, thank you.

Jeroen Lenaers (Chair): Thank you. Thank you very much. That was very interesting, and I’m sure there will be some questions on what you have said, especially with regard to Sourgum and Candiru, which is also very relevant for the work of our committee. We move on to our next speaker, who is connected online, Mr. Charley Snyder, head of cybersecurity policy at Google. If we could connect him.

Charley Snyder (Google): Thank you, Mr. Chairman, and members of the committee, for addressing this important topic today. Hi, my name is Charley Snyder, and I’m responsible for security policy at Google. And I’m pleased to be with you today to discuss Google’s efforts to protect users from commercial spyware and raise awareness about the worrying trends we’re seeing.

The commercial spyware industry appears to be thriving and growing, and this should concern the European Parliament as well as all Internet users. Google has been tracking the activities of these vendors for years and taking steps to protect our users. Android was the first mobile platform to warn users about NSO’s Pegasus spyware. In 2017, our Android team released research about a newly discovered family of spyware related to Pegasus that was used in a targeted attack on a small number of Android devices. We observed fewer than three dozen installations of this spyware at the time; we remediated those compromises and implemented controls to protect all Android users. In 2021, our Threat Analysis Group discovered campaigns targeting Armenian users, which utilised zero-day vulnerabilities in Chrome and Internet Explorer. We assessed that these exploits were packaged and sold by the same surveillance vendor. Reporting by Citizen Lab linked this activity to Candiru, an Israeli spyware vendor. Other reporting from Microsoft has linked this software to the compromise of dozens of victims, including political dissidents, human rights activists, journalists and academics. Project Zero, our team of security researchers at Google who study zero-day vulnerabilities in the hardware and software systems that users around the world depend upon, has also tracked the use of commercial spyware. In December, they released research about novel techniques used by NSO Group to compromise iMessage users. iPhone users could be compromised by receiving a malicious iMessage text, without ever needing to click a malicious link. Short of not using a device, there is no way to prevent exploitation by a zero-click exploit in this scenario. It’s a weapon against which there is no defence.
Based on our research and findings, Project Zero assessed this to be one of the most technically sophisticated exploits they had ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states. Most recently, we reported in May on five zero-day vulnerabilities affecting Chrome and Android, which were used to compromise Android users. We assessed with high confidence that the commercial surveillance company Cytrox packaged these vulnerabilities and sold the hacking software to at least eight governments, among others. This spyware was used to compromise journalists and opposition politicians. Our reporting here is consistent with earlier analysis produced by Citizen Lab and Meta. Our findings underscore the extent to which commercial surveillance vendors have proliferated capabilities historically only used by governments with the technical expertise to develop and operationalise exploits. We believe its use is growing, fuelled by demand from governments. In fact, seven of the nine zero-day vulnerabilities our Threat Analysis Group discovered in 2021 fall into this category: these were zero-days developed by commercial providers and sold to and used by government-backed actors. So, just to sum up, out of all the zero-day vulnerabilities we are detecting across all activity, the substantial majority fall into this commercial spyware category. Our Threat Analysis Group is actively tracking more than 30 vendors, with varying levels of sophistication and public exposure, selling exploits or surveillance capabilities to government-backed actors. This industry appears to be thriving. In fact, today there’s a large industry conference taking place elsewhere in Europe, sponsored by many of the vendors I just mentioned. This trend should be concerning to the European Union and all citizens.
These vendors are enabling the proliferation of dangerous hacking tools, arming governments that would not be able to develop these capabilities in-house. While the use of surveillance technologies may be legal under national or international laws, they are often found to be used by governments for purposes antithetical to European values: targeting dissidents, journalists, human rights workers and opposition party politicians. Aside from these concerns, there are other reasons why this industry presents a risk to the Internet. While vulnerability research is an important contributor to online safety when that research is used to improve the security of products, vendors stockpiling zero-day vulnerabilities in secret can pose a severe risk to the Internet when the vendor itself gets compromised. This has happened to multiple spyware vendors over the past ten years, raising the spectre that their stockpiles can be released publicly without warning. The proliferation of commercial hacking tools is making the Internet less safe and threatening the trust on which a vibrant, inclusive digital society depends. This is why, when Google discovers these activities, we not only take steps to protect users but also disclose that information publicly to raise awareness and help the entire ecosystem, in line with our historical commitment to openness and democratic values. We think it is time for civil society, industry and government to come together to change the incentive structure which has allowed these technologies to spread in secret. We welcome steps taken by the U.S. government in applying sanctions to NSO Group and Candiru, and governments should consider expanding these restrictions, as well as initiatives to improve transparency and know-your-customer requirements. As I highlighted above, there are many other vendors engaging in practices like those of NSO Group and Candiru.
We are committed to working with partners in industry, government, and civil society to further our efforts to detect and disrupt these threats. Many of these vendors are very sophisticated and it will take concerted action to understand the full scope of these activities and to protect users. We thank you for the opportunity to be with you today, and I look forward to your questions.

Jeroen Lenaers (Chair): Thank you, Mr. Snyder, for being with us today. Again, that was highly interesting, and there will be many questions from our colleagues on some of the issues that you mentioned, especially your comment that this is a weapon against which there is no defence. That does give us some pessimism, I think, in this room. But we’ll continue with our third speaker, and I will also come back with questions to the first two speakers. So, I move to Mr. David Agranovich, who is a security policy director at Meta. You have the floor; he is also connected online.

David Agranovich (Meta): There you go. Sorry, it took a moment to get permissions to unmute my microphone. Apologies, everybody. First off, thank you so much for pulling together this important conversation. I appreciated the comments from my colleagues at Microsoft and Google, with whose teams we work very closely in countering these types of adversarial threats. From the Meta perspective, I also wanted to address some of the mercenary spyware activity, what we call surveillance-for-hire firm activity, that our teams have seen across the Internet, and then some trends that I think are particularly concerning, and some things that we can do as an industry. The first issue that we’ve generally confronted is that these types of surveillance capabilities have traditionally been the purview of governments: sophisticated surveillance access and capabilities into personal devices and accounts across the Internet that, in democratic governments, are generally subject to democratic oversight. The challenge of the surveillance-for-hire industry is that it makes this type of democratic oversight difficult to impossible, and it further democratises access to these sophisticated surveillance capabilities, to countries or private individuals who may not necessarily employ them in a manner that would be subject to democratic principles. Our teams aggressively pursue, identify and disrupt these types of surveillance-for-hire firms, alongside other types of espionage activity on our platforms, and have for years. In December of last year, we released a report focusing on seven different surveillance-for-hire companies, some of whom have been mentioned already in this conversation: not just NSO Group, but also Black Cube as well as Cytrox and a few others.
One of the important things we were hoping to do with this report was to raise awareness of the breadth of the surveillance-for-hire industry: this isn’t just one or two firms in a couple of countries, but a broadly geographically diverse set of companies offering a variety of different capabilities, oftentimes to anyone who’s willing to pay for their services. The second thing we were hoping to do with that report was to help build some taxonomy into how we talk about these threats. So, we laid out what we call the surveillance chain, essentially a three-step process that we see many of these surveillance firms fall into. The first is reconnaissance: efforts to provide clients with the ability to collect information on a target, whether from open sources on a social media platform or from information about them elsewhere on the Web. The second phase of those operations, what we would call engagement, involves social engineering: providing clients the ability to engage with their target directly to build rapport or a relationship, leading to the third phase of their operation, what we call exploitation, which is the point at which a surveillance-for-hire company offers the services to actually exploit a device, hack an account, or gain access to a particular target’s private communications, devices or networks. At Meta, we generally find that operations that attempt to leverage our platform fall into one of those first two buckets: reconnaissance or engagement. And this raises a particular challenge for us: it’s often difficult for us to confirm whether a target was successfully compromised by a surveillance actor, because the compromise occurs off our platform. The goal is to build a relationship, to social-engineer someone into clicking on a phishing link, for example, or downloading an application. But when they visit that website or download the application, we lose visibility into whether they were hacked.
All of that said, what we found in our reporting was a breadth of different companies that effectively were hiding the origin of their clients, though many of them would claim that their clients were governments fighting crime or fighting terrorism. And when we looked into the victims, the people the clients were trying to hack, we found more than 50,000 people all over the world, whom we sent notifications to. Those people tended to be anything from opposition politicians, democracy activists, dissident communities and ethnic minority groups to journalists, human rights activists and the like. I feel it’s really important to emphasise this, given how often companies like NSO tend to call out that they only work to help enable investigations into terrorism activity, for example. That’s just not what we saw in our investigations on our platform. And what was particularly problematic is the way these firms effectively hide the clients who are paying for their services, in an industry where there are very few requirements for these firms to do any sort of know-your-client type activity. That means that anyone willing to pay, whether that’s an authoritarian regime or a private individual engaged in litigation, as we saw in some of our cases, can simply hire these firms and then deploy very sophisticated capabilities against whoever they wish. So, what can we do? As pessimistic as this conversation may sound, there are opportunities for governments, platforms and civil society to take actions here that will meaningfully constrain the operations of these firms. My colleagues from Microsoft and Google alluded to some of these, and at Meta we would also encourage some of these potential levers on the government side.
There is tremendous space for governments to take regulatory action around surveillance-for-hire firms in particular, whether that be encouraging know-your-client requirements on these firms themselves, or some of the self-regulation governments can do to limit their own purchases, by their own domestic law enforcement agencies, of some of the capabilities offered by these firms. Within the platform and tech company space, as my colleagues at Google and at Microsoft both mentioned, our teams are actively looking for these operations, taking them down, investigating their activity, patching vulnerabilities where we find them and, critically, sharing information across the industry. When we see this type of activity occurring (though Facebook may only be able to see recon and social engineering activity), if we see efforts to exploit the Android ecosystem we will share that information, and our partners at Apple or Microsoft and their teams often will share that information with us. It makes the industry stronger, and it makes it harder for these firms to slip through the cracks between the different companies. We also believe there’s value in deterrence and disclosure. One of the reasons we published this report, and one of the reasons we already do quarterly threat reporting on a variety of different issues, is that we believe that for some of these companies, what we can do best is provide some sunlight where there has traditionally been very little visibility by the public into what they can do, who they’re targeting and how they’re doing it.
On the civil society side, we, like our colleagues from the other companies, work quite closely with groups like Citizen Lab or Amnesty Tech, who do tremendous work both investigating some of this activity in a cross-platform way and helping enable victim notifications, providing, for example, the Amnesty Tech helpline, so that journalists or activists who are concerned they may be targeted have a way to easily get help without having to rely on any particular platform. And I’ll leave you with maybe one small anecdote of how effective that can be. I noted there was some interest in Cytrox, which was one of the firms included in our December surveillance-for-hire threat reporting, and which several of the other companies have also looked into. In a few cases, we’ve seen the indicators of compromise that we published related to Cytrox back in December enable civil society members and journalists who were concerned they may have been targets to look for those indicators within their own devices, realise that they may have been targeted, and then work with some of our civil society partners to do the mobile forensics work to confirm that type of compromise. And we’ve seen that type of work, in a space where Meta would never have known or had any visibility that that targeting was occurring elsewhere, enable a broader understanding of this surveillance-for-hire targeting ecosystem. So, I think there is a lot of opportunity here. I look forward to your questions, and again, thank you for having this conversation.

Jeroen Lenaers (Chair): Thank you. Thank you for the contribution. I will move immediately to the Q&A section of this hearing. I propose that we start question by question, member by member, and then answers from the three guests. If we run out of time at some point, we might have to start grouping questions, but I hope we can allow members to ask individual questions. I will start with our rapporteur, Sophie in ’t Veld, please.

Sophie in ’t Veld (Renew): Yes, thank you, Chair. And thanks to the three speakers. I’m not really sure where to start; we’ve heard a lot of things, but I would like to hear a bit more detail about a number of things. First of all, the court case that has been launched by WhatsApp: I’d like to hear what the situation is and what you expect. I’d also like to hear a little bit more because you’ve been doing a lot of, let’s say, mapping of the situation. I heard Mr. Snyder speak about 30 vendors that you have identified. Is there a list that you can share with us? You have in different ways referred to the activities of Cytrox and the spyware they’ve been selling to governments. Can you be a bit more specific about the use of that spyware, or similar spyware, in Europe? Because that is the special focus of our committee. Can you say a bit more about the stockpiling of vulnerabilities? Because we heard about that yesterday as well, and it seems to be an activity which is very, very worrying, and where we could possibly regulate. And then specifically to Mr. Agranovich, I’d like to hear a little bit more about one of the, I think still, board members of Meta, soon to step down, Mr. Peter Thiel, who seems to be investing also in companies which are engaging in similar surveillance activity. And we hear rumours that he and his company Palantir might be interested in buying parts of NSO. We also know that Mr. Peter Thiel is one of the major donors to Mr. Trump, and he has a track record of trying to interfere in politics; he was also involved in the Cambridge Analytica case. So, can you say a little bit about that? And, yeah, the thing that interests me most is to hear a bit more from you, in particular Mr. Snyder, about what you discover about activity in Europe. Because we know, or we have, let’s say, well-founded suspicions about at least four governments: Poland, Hungary, Spain and Greece. We know that other governments, like Germany, the Netherlands and Belgium, are buying this kind of spyware to be used by the police. But can you say a little bit more about the surveillance vendors that you have identified and their activity in Europe and against Europeans? Thank you.

Jeroen Lenaers (Chair): Thank you, Sophie. I propose we then take the answers in the sequence of the questions that were asked. So maybe first Mr. Agranovich on the court case and on the board member, and then Mr. Snyder. So, Mr. Agranovich, you have the floor.

David Agranovich (Meta): All right. Thank you so much, and thank you for the questions. So, taking those in order and starting with the WhatsApp case: as you can imagine, I’m somewhat limited in how much I can share, given we have ongoing litigation in the WhatsApp case. What I can say, though, is that we believe it’s important that companies can take this type of legal action when we see our terms of service being violated by surveillance-for-hire companies. One of the levers that technology companies have in that deterrence framework I talked about in my opening remarks is holding surveillance-for-hire companies that attempt to target users by abusing their services responsible in the legal system. So, I’m hopeful, given that WhatsApp, I think, initially filed this lawsuit, but we then saw other tech companies take similar actions, and then follow-on action by the US government. I’m hopeful that this type of action will spur more efforts to hold these types of companies accountable when they are violating platform policies in such flagrant ways. On your second question: I can’t speak to Mr. Thiel’s personal actions, in part just because I don’t know very much about them. What I will say, though, is that whenever we see this type of surveillance-for-hire activity attempting to target people on our platforms, regardless of what company happens to be doing it, we will investigate it; we will take action on it. And when we see people being targeted on our platform, we will notify them of that targeting and give them steps to take to secure their accounts. One of the important aspects of that December report was that the seven companies represented a number of different countries of origin, and they appeared to be working with a wide variety of different types of clients.
And at Meta, we don’t really believe there is an easy way for a company to determine whether some type of surveillance-for-hire activity is benign or being conducted for a positive purpose. I know this was the topic of an earlier hearing. Rather, if we see these types of efforts to violate the privacy of people who are using our platform, or to try to trick them into downloading malware, we’re going to take action on that. It’s just unacceptable.

Jeroen Lenaers (Chair): Thank you. Mr. Snyder, on the list of vendors and the activities in Europe.

Charley Snyder (Google):

Jeroen Lenaers (Chair): Thank you. Ms. Ciglic.

Kaja Ciglic (Microsoft): Maybe I’ll start with the vulnerability stockpiling as well. Like my colleague from Google, I want to echo that this is an incredibly concerning area for us, and I think for all the vendors. A zero-day vulnerability out there is already a challenge, particularly if there is more than one group exploiting it. And as I mentioned earlier, very often it doesn’t stay just with the cyber mercenary or the one group looking at it. Once these things get leaked, they very quickly get reverse engineered, new exploits are developed on top of them, and they get used by criminal groups across the board. The WannaCry and NotPetya attacks that were just referenced are one example; we’ve seen others where bad actors have used more than one zero-day vulnerability as part of a single attack. And this puts a lot of pressure on the providers, as oftentimes the vulnerabilities take a long time to fix. It’s not as easy as just seeing there’s a vulnerability, finding a fix and applying it. Oftentimes there are implications down the line, depending on the customers that use the products, for the path to patching; the patches will have to be tested as well. We’ve also seen vulnerabilities that affect more than one private sector group, particularly the ones in hardware, where you then have a number of different organisations having to work together to find a creative solution to basically plug the hole, which can take months. And if these vulnerabilities are out in the wild, it frequently means that there’s continuous exploitation. What we would recommend is encouraging cybersecurity research, encouraging academics and researchers in general to look for vulnerabilities, and encouraging them to report to vendors in a coordinated, responsible manner.
So, it’s reported in secrecy, and there is time to work on a solution. And then on the other side, on the government side, look at developing processes like what in the US is called the vulnerability equities process, for when the government finds a vulnerability or has one reported to it in some way, rather than retaining it for its national security and intelligence purposes. Obviously, we would prefer the default to be to report to the vendor, but also, when governments look at a vulnerability, to balance the different equities. So, balance national security with human rights, and balance national security with the broader impact on the ecosystem: will it expose everybody who uses Windows, or will it expose one person? And balance the economic impact of the vulnerability being out there. So, I think putting those processes in place, basically as oversight of the intelligence services or the police, is something we strongly recommend doing. The only other point I would make is on the list of vendors. As was mentioned again and again, we all see a different aspect of the ecosystem. We see who’s on our systems, Meta sees who’s on their systems, same with Google, and we compare and contrast. But this is also where it’s great to have groups like Citizen Lab, but also, I think, the Atlantic Council in the US, that have started compiling across the platforms which vendors they have seen, heard of, or followed through some of the conferences that were mentioned earlier, including one today, where these groups go to sell their tools to governments and other clients. So, looking at some of these lists, and we can share some of the links afterwards, will also be helpful and provide a more comprehensive picture.

Jeroen Lenaers (Chair): Thank you. We move to Mr. Zoido on behalf of the EPP.

Juan Ignacio Zoido Álvarez (European People’s Party): Thank you very much, Chair. And thank you to all the speakers we’ve had today, both those present in the room and those connected remotely. I think you are key to us better understanding this situation with the spyware. Large tech companies such as Microsoft, Google or Meta have themselves been victims of attacks by spyware such as Pegasus. They are also key actors when it comes to implementing the digital and cybersecurity policies that we are promoting from the side of the European institutions. So, any further information you can provide to us today is particularly valuable. I will now go on to my questions, which are related to what colleagues have already asked, but I would like to get some specific information. We’ve talked about the management of these revelations and about Pegasus exploiting deficiencies in your platforms. Can you confirm that you have patched all of these vulnerabilities that have been detected? Secondly, in relation to this point: how much cooperation do you have with states when you find these vulnerabilities in your systems, also when it comes to cooperation with intelligence services from the EU? And with regards to the United States, how do you view the cooperation there? And then looking to the future: I know it’s very difficult to predict future vulnerabilities, but how are you trying to anticipate these threats to help protect the privacy of your users and prevent these threats in the future? I know that you have a privileged observer status. So, I would like to know: what new types of spyware have been detected lately aside from Pegasus? What is the provenance of this spyware, so, in particular, which countries are active in this area? Thank you very much.

Jeroen Lenaers (Chair): Thank you, Mr. Zoido. There were questions to all of the speakers, so I would say we do the reverse order and I start with Ms. Ciglic.

Kaja Ciglic (Microsoft): Thank you, and thank you for the questions. To start at the beginning, on the vulnerabilities being patched: we are effectively constantly in the process of identifying vulnerabilities and patching them. So, the ones that we have talked about publicly have been patched. There might be others that are in the process, as per usual, some of them exploited by the private sector, so cyber mercenaries, and some of them more general. As my colleague from Meta also said, when we find vulnerabilities, we patch them, and we work to get these actors out of our systems no matter where they come from. In terms of how much we coordinate with governments, in the EU and in the U.S., we coordinate strongly on increasing the security of the ecosystem. So, we work a lot with CERTs around the world when it comes to issuing patches, when it comes to talking about the protections that users, whether it’s enterprise customers or consumers, can apply, and we are working on identifying new ways and new opportunities to secure our systems more fully. And on the last point that you mentioned, I want to double down a little bit. Microsoft tries to tackle this area from several perspectives. One is at a level like here today: trying to identify opportunities to encourage greater regulation or other actions on specific groups, working with others in the industry as well as civil society on specific cases. But we also work to secure our products and services, investing in them continuously, whether it’s in new technologies or in innovation. Microsoft is investing over $20 billion in the security of our systems over the next five years. And that has been paying off already in terms of making sure that our systems and services are as strong as possible. 
We are also working with our consumers and our customers, but also more broadly, to raise awareness of what they can do to protect themselves. You’ll never be 100% secure, but simple things, from patching, to raising awareness around phishing and two-factor authentication, to turning off applications on your systems that you are not using and that could potentially be leaky, are things that we promote. Thank you.

Jeroen Lenaers (Chair): Thank you, Mr. Agranovich.

David Agranovich (Meta): Thank you for the questions, and I’ll take them in the same order. On the vulnerabilities piece: as my colleague from Microsoft mentioned, for the vulnerabilities we’ve talked about publicly, we’ve taken steps to patch them. Meta also finds itself in an interesting position, where what’s generally happening on the platform are efforts to either do that reconnaissance activity or to try and build relationships with people, so that attackers can then send them somewhere off the platform where they get compromised by malware. For those types of operations, what we’ll do is send notifications to the people being targeted through our own platform, notifying them that we believe they’re being targeted by a sophisticated state-linked attacker, if we think it’s a state-linked attacker, or by a sophisticated attacker if they’re not state-linked, and walking them through specific steps they can take to lock down their profile, to make it harder for an outside entity like one of these firms or their clients to do that type of reconnaissance or social engineering activity. So it’s an interesting facet of where we sit in the surveillance targeting chain: we probably see less of the actual malware and vulnerabilities related to the platform, but have to do more work to make sure people can protect themselves from the softer side of the industry, the attempts to actually build relationships. As far as how we work with governments, both in the EU and the US: when we see operations targeting particular governments or employees of those governments, we will absolutely work closely to make sure we’re sharing information with those governments. And then we’ll also routinely brief them. I was actually just in Europe for the last three weeks. 
I just got back to California. We routinely meet with and brief the CERT teams in various countries, similar to our colleagues at Microsoft, and also give broader briefings to folks in government so they understand the threat picture that we’re seeing. As far as being forward-looking, what are we seeing from other types of threat actors? The way that we tackle this problem is that we have an investigative team made up of folks who spent a lot of time either in the cybersecurity research space, including at some of our partner companies, or in the government and investigative space, looking for these types of threats all over the world, focussed both on known sophisticated advanced persistent threat, or APT, actors as well as on some of the companies that are starting to enter this space. In addition to those investigative teams, we’ve built a bunch of automated systems looking for this activity. And so, the seven companies that we talked about are by no means the only firms that we’ve tracked and investigated, and we expect to do more of the type of reporting that we did back in December in the coming months.

Jeroen Lenaers (Chair): Thank you. Mr. Snyder.

Charley Snyder (Google): Thank you very much for the questions. Like my Meta and Microsoft colleagues, it’s a similar story on the state of the vulnerabilities and whether we’ve patched them all. Of the public reporting on these groups and this type of malware, we have issued patches for all vulnerabilities mentioned in that reporting. There may be other reporting, or there may be other detections we’ve made, where we’re still working on patches. It’s also worth mentioning that on some platforms we’re able to issue a patch and apply it automatically; on others, we rely on user interaction to apply the patches over a longer period of time. On the questions about cooperation with various governments: Google is closely cooperating with governments in Europe and around the world to improve and enhance cybersecurity. Like the other participants, we tend to engage most with CERTs and the entities responsible for the defensive mission in countries, and in particular cases, when we detect threat activity either emanating from or affecting a particular country, we will reach out and engage bilaterally to share what we know and enhance both of our views of the threat picture. The last question was about what Google is doing to anticipate threats, and what we are seeing coming over the horizon. This is an area where we try to really focus: monitor the threat landscape, anticipate how threats are evolving, and then build our products, and in some cases fully re-architect our products, to eliminate entire classes of threats. And I think that’s an area where we have a pretty good track record. I’ll give you a couple of examples. One area we really focus on is outreach to and protection of high-risk users. And these are the same users, not incidentally, who are being targeted by the vendors mentioned in this hearing. 
So, journalists, human rights activists, opposition politicians: we have specialised tools that we offer to protect these users. We have a programme called the Advanced Protection Programme. That’s our strongest form of account security, and through the various controls that we’ve applied to that programme, since we started it more than several years ago there has never been a documented case of any of these users being successfully compromised through phishing. Another example: we produce an open-source operating system and laptops, Chromebooks, and there has never been an example of a Chromebook being compromised and made part of a ransomware botnet. So, we’re always trying to figure out what the serious threats are that our users face, and whether we can architect our products to prevent those threats. And we do that, as the other participants mentioned, through layered defences. So that’s building defences, controls and protections at the hardware and operating system level and the application level, combining that with various threat intelligence and analytic capabilities to detect and block threats, ideally automatically, so the users never know that they’ve been targeted until we notify them. And then extensive cooperation with civil society and industry, which, for anyone listening to this hearing, has been a constant theme: the interplay between industry, cybersecurity research firms, civil society organisations like Amnesty International, and academic research to build that full threat picture. And to come back to a point that the Chairman reacted to: a zero-click exploit, exploiting a zero-day vulnerability, is in the moment a weapon for which there is no defence. But I do not want to be overly cynical there. Those vulnerabilities have been patched. 
And that’s largely because of that interaction with industry as well as those layered defences that responded and reacted to protect users. Thank you.

Jeroen Lenaers (Chair): Thank you very much. Then, Mr. Kohut.

Łukasz Kohut (Socialists and Democrats): There are hundreds of millions of people all over the world who are putting their trust in you. They are transmitting their data, assuming that you will protect it. A lot of these people come from authoritarian countries, and they believe that these large Western companies will protect their data because of the values in place. From the holiday photos onwards, you are responsible for protecting all of this data. So, I would like to ask Mr. Agranovich from Meta, from Facebook: as a platform, as a company, what have you done to help protect consumers when it comes to surveillance, for example from Pegasus? That is information that was released in Poland just a few months ago, and I’d like to know more about this. What additional protections have you put in place, for example with WhatsApp? As I understand it, Pegasus could not just look at all the data on a person’s phone but could add things to a person’s phone. Do you have tools to tackle these kinds of threats, for example in Messenger, which is one of the most popular chat apps?

Jeroen Lenaers (Chair): Thank you. That was basically one particular question to Mr. Agranovich. So, I’ll give him the floor. Mr. Agranovich.

David Agranovich (Meta): Thank you for the question. I think it hits right at the core of the issue: as companies whose users are trusting us to keep them safe on our platform, we have a responsibility to make sure that we’re doing the work both to find and disrupt these types of operations and to create, as you noted, additional measures to protect our users on the platform and their data from this type of abuse. So maybe I’ll break my answer up into three different categories. One, what are we doing about the actual bad actors, the surveillance-for-hire firms and their clients that are attempting to target people? Two, what are we doing in terms of what I would call product interventions, that is, how are we changing the product itself to protect people? And third, what are we doing to enable the disruption of this type of bad behaviour off our platform, where we might have visibility into it because of our platform? So, on the first piece, what are we doing about the bad actors: our investigative teams look for and disrupt these types of activities on a constant basis, not just from surveillance-for-hire companies and their clients, but also from state-backed advanced persistent threat actors, your traditional sophisticated espionage operations. In addition to the December report I referenced, specifically about surveillance for hire, we also release quarterly reports, and previously released monthly reports, that uncover operations from a variety of different sophisticated threat actors, including those from Iran, from Russia and other countries. We think that’s really important: these operations try to remain clandestine, they try to stay below the radar, and so by exposing what they’re doing, we believe that we hopefully meaningfully deter them from engaging in that type of activity on our platform. 
And we’ve seen it lead to particular advanced persistent threat actors abandoning operations on the broader Meta and Facebook platforms because of the concern that they’re going to get caught and then exposed in that way. In addition to taking down and disclosing the operations, and this starts to bleed into the things that we put into the platform to protect people’s data, we think it’s really important that we notify victims. Other companies do this as well; in particular, I know our colleagues at Google do this when they see people being targeted through their platforms. Notifying victims does two things. First, it meaningfully increases the cost to the bad actors, who are often trying to collect information or intelligence on a target without them knowing. Telling that target that they’re being targeted, and giving them steps they can take to make targeting harder, meaningfully undermines the effectiveness of whatever that surveillance activity is supposed to be. And second, notifying victims provides a service to the victims themselves, who oftentimes wouldn’t have known that they were being targeted in the first place. On the product interventions: as you noted, in addition to patching the vulnerabilities where we’ve seen them, one of the challenges here is that oftentimes these operations aren’t just about a vulnerability in, let’s say, the WhatsApp platform, but about trying to get someone, if the vulnerability in WhatsApp has been patched, to go and download malware that might give the attacker access to the device itself. That device-level compromise may not be something that we, as an application-based company, are going to be able to fix. But we might be able to fix it by sharing information with the companies that control the underlying device ecosystem, for example the folks at Apple or the folks at Google. 
But before I get to that third piece, how we are enabling the broader ecosystem: we’ve also released some additional tools, Facebook Protect, for example, which is a product designed to create additional account compromise protections, particularly for accounts used by governments, government officials, politicians, campaigners, democracy activists, journalists and the like. And we’ve seen that be really effective in a variety of different places, including most recently in Ukraine, to help us tip and alert more quickly when we see potential compromise activity on the platforms. So there are a few different levers there: there is patching vulnerabilities, and there is creating additional tools that can help stop a compromise from happening earlier in the chain. The last piece of my response was about how we are enabling the ecosystem. In addition to the public reporting that we do, which includes indicators of compromise, so, for example, domains that we see these surveillance-for-hire actors using to host their malware, as well as, when we have reverse engineered the malware, what it can do and hashes that other people can use to detect that malware on their own systems, we’ll also share information with other companies’ threat intelligence and investigative teams where we see, for example, malware targeting a particular device ecosystem or operating system. In that way, we’re hoping to enable that company to then go and identify or patch a vulnerability that we ourselves wouldn’t be able to patch. So, it’s a multipronged effort: first, how can we make the bad actors have a harder time on our platforms? Second, how can we harden the platform to make exploitation more difficult? And third, how can we make sure that we’re enabling other companies to similarly do that type of work using the visibility that we have?

Jeroen Lenaers (Chair): Thank you very much. Then we move to the next speaker, Ms. Bricmont.

Saskia Bricmont (Greens): Thank you very much. Thank you also to our speakers today for their presentations and the very extensive answers you have provided to us. I have to say that I’m still, how do you say, shocked by what you’re saying, because you are actually deploying safeguards, safety measures, to prevent governments from acting illegally, and you are doing what governments should do: protect their citizens. And you were mentioning, and I think the three of you were on the same line, that we should work hand in hand to enhance, also from the legal perspective, the answers we provide to prevent such situations from happening. You mentioned that you have contacts with governments. Do you also have direct contact with, for instance, the secret services? How does it work concretely? Do you receive concrete answers from them, or do they usually take the information and then you have no further links? I was wondering, when you say we have to work together: how do you identify the current legal loopholes, and how could we, from a new perspective, address this market of security vulnerabilities to improve the situation, if it is possible at all from a legal perspective? Because, I think it was Mr. Agranovich who mentioned that sometimes it’s impossible to block. I would also like to ask a direct question to Ms. Ciglic: you mentioned Candiru. Could you develop that a little bit and explain a little more about it to us? I would also like to know, and this is probably a question to the three of you, whether you have contacts with companies such as NSO, and whether you know what kind of services they provide to their clients. So, once they sell the software, this one or another one, what comes with it? Are there any services provided, and for how long? If you have any information about that, I would be interested. And I have also a question to Mr. Agranovich. 
You talked about social engineering. Could you develop that a little bit and explain to us how it works concretely and what it enables? You also mentioned that some clients are not governments, but individuals. Do you mean by that that clients of NSO, for instance, are not only governments but could also be other clients? Could you please explain this? And a last question to all of you, if you have an answer to it: of the notifications sent to potential victims, what is the ratio of true positives to false positives? Is it usually proven that a victim has been targeted, or does it happen often that once it has been checked, there is no problem? And this was not my last question, excuse me, one very last one: could the three of you identify the amount and the resources you have to spend in order to react to these cybersecurity issues and to provide answers to protect your own clients? Thank you.

Jeroen Lenaers (Chair): Thank you, Ms. Bricmont. There were plenty of questions for all three speakers, so I will just take the original order. So, I start with you, Ms. Ciglic. You have the floor.

Kaja Ciglic (Microsoft): Thank you. And I am sure I will forget some, so maybe I’ll start at the end, with the amount that we spend. As I said earlier, Microsoft is investing over $20 billion over a period of five years, not just on the cyber mercenaries but overall, on increasing the security of our ecosystem. That includes over 6,000 people who work at Microsoft on security every day. But it’s hard to carve out just the part that goes to this. The protections in the systems are, I think, our best line of defence here, and we need to work on that. In terms of notifications, a little bit like our colleagues said: we do inform customers that have been targeted. That does not mean they have necessarily been exploited, but we have seen targeting of their systems. Microsoft, and this is again not just cyber mercenaries but state-based actions as a whole, has provided over 20,000 notifications over the last couple of years about state-based advanced persistent threats. And a little bit like my colleague said earlier, this area of states investing in operations where they target private sector companies on a daily basis to get into the systems, for whatever reason, is a growing one. Most states do it in their own capacity, through their intelligence services. Some of them use private sector entities like the cyber mercenaries, and some of them use loose groups, loosely associated with the state but not organised in a private sector capacity. And that, for us, is an important and very worrying challenge. And a little bit like our colleagues in the other companies, we have been and will continue to be very vocal about this, which hopefully deters more actions as well as encourages states, on the other hand, to take action. 
So, we can do the technical attributions, follow the technical data, and hopefully that will lead to states taking more concrete actions, whether it’s at the legal level, bringing court cases, or at the political level, attributing particular attacks to particular countries’ entities. We also publish, like our colleagues, our annual report, most recently last September, and a few weeks ago we also published a report that looks particularly at Ukraine and the situation going on there. Building on that, we hope that these types of things can encourage democratic countries in particular to put in place, first and foremost, human rights based frameworks and oversight, whether over the intelligence communities or the police that go down this route, and to ensure that these types of tools are used as rarely as possible. Obviously, there might be national security considerations, but still in a limited, precise manner with democratic oversight, and to encourage the democratic West, I guess, to take action. I think the second step is then the more long-term, global efforts, where we can do a lot, whether it’s through export controls, making sure the tools are not exported from markets that we can control, as we’ve seen in the U.S. with the NSO and Candiru cases, where they were, I guess in December, put on a blacklist, but in other areas as well. And I think the encouragement of responsible and coordinated vulnerability disclosure, like I talked about earlier, is another area to think about. On Candiru, and what we have seen: we talked about this for the first time last July. 
It was an attack that exploited a browser vulnerability, not just in our browser but in our browser as well as in Windows. There were zero-day vulnerabilities involved. It was a sophisticated attack, also involving phishing. It targeted, as I mentioned earlier, several civil rights and human rights politicians and activists, including in two European states, Spain and the UK, but the vast majority were in Palestine, or the Palestinian Authority. We have published the details around how the attacks occurred, as well as the hashes of the malware, which other companies can use to find it, if it is something in their systems, and then also help patch it. I think that was it. I feel there were more, but.

Jeroen Lenaers (Chair): I think there was many other questions and if there was a follow up, I’m sure most people will remind us. I first move to Mr. Snyder. Please. You have the floor.

Jeroen Lenaers (Chair): Thank you. Mr. Agranovich.

David Agranovich (Meta): Thank you for the questions. I’ll take them in order. On the first piece, just echoing my colleague from Google’s comments: like most tech companies, we also have channels for law enforcement organisations to request information pursuant to a lawfully predicated investigation. Perhaps one of the things that’s most frustrating about the proliferation of these services is that what they’re oftentimes offering is a way to get around that type of lawfully predicated process. So, if a law enforcement organisation is hiring one of these companies to try and gain access, for example, to a target’s Facebook account, they’re doing so even though there is a process to do that, one subject to democratic oversight and transparency. As far as the types of contacts we have with government: we do work quite closely with CERT teams, because they are oftentimes that first line of defence and are able to get integrated quite quickly into a response if a government target is targeted. We’ll also generally share broader threat information not just with the security side of governments, but also with the parts of government that think about how to legislate around some of these threats, in the same way that we’re having this conversation today. On NSO Group: to my knowledge, we don’t have direct contact with NSO. Our understanding of their activity is based both on the investigative work that our teams have done around their targeting of people on our platforms, including WhatsApp, and on our partnerships with groups like Citizen Lab, who’ve been doing some really impressive work digging into the capabilities of the products that NSO offers. On the social engineering point: what I mean there, and what this might look like in the wild, is reconnaissance activity. That first phase of the surveillance chain would be a client. 
For example, one buying a product that enables them to look up a bunch of a person’s information: it might help you pull up their Facebook page, their Twitter account, find out if they have a YouTube channel. It gives you the ability to start building a dossier on a person, so that you can later approach them in a way that is convincing. The second phase, social engineering, is approaching them in that convincing way. The idea there would be, perhaps using a fake account provided by one of these companies, to approach an individual and attempt to get them to click on a link, maybe sent to them over WhatsApp or something like that. Much like spam actors or scammers sending malicious links in an email, the idea is to build enough of a rapport with someone that they’ll trust you enough to click on that phishing lure, which then might take them to a website where they download a Trojan application, where they download some other form of malware, or where they enter their credentials for an account on any number of different internet services. That last bit, where they actually download the malware or enter their credentials, is the exploitation phase, and that middle piece, building rapport and trying to get them to click on that phishing lure, is what I would mean by social engineering. And you had asked the question around individuals who might be clients of these firms. I’ll preface this by saying that for the seven firms that we looked into for the surveillance-for-hire report, it was extremely difficult, to the point that we stayed away from trying to draw conclusions about particular clients, to walk back from the activity we saw on our platforms to a particular client. And I imagine that’s by design: one of the big things that these firms are actually selling isn’t just the surveillance capability, it’s essentially whitewashing your use of it. 
Because for many of the intermediaries, whether it’s the folks at Citizen Lab or us, what we see is activity that’s linked back to that company. And in doing so, it’s protecting whoever happens to be buying that service from oversight, transparency or potentially having their operation disclosed publicly. But when I say individuals: an example, actually, from some really impressive reporting done by a team of investigative reporters at Reuters into a firm called BellTroX, which is also one of the firms that was in our report, discussed how some of the targets of their activity appeared to be litigants in lawsuits, and their assessment, from some of the off-platform investigative reporting they had done, was that the people hiring or buying these services may have been the opposing party in the litigation itself. And so, one of the challenges here is that not only do you see this type of capability being used to target dissidents and human rights activists and opposition figures, but you also see it being used to potentially undermine other parts of our civil society system, including the legal system. On the notification fidelity, for us in particular: we’ll send notifications when we see a person who’s been targeted in any of the three stages of the surveillance chain, where we see someone being targeted for reconnaissance, being targeted for social engineering, or where we see them being compromised. But because we oftentimes can’t actually confirm whether someone has been compromised, since a compromise happens off platform, we’ll send notifications to people where we are very confident that they’re being targeted for something. But it’s hard to know whether at the end of that chain they actually ended up, you know, being compromised, downloading the malware, entering their credentials, just by nature of how our visibility into the threat space works. And then finally, the resources that we dedicate to the cybersecurity work. 
So, since 2016, we’ve dedicated at least $16 billion across our safety and security work for ensuring that the platform is safe. There’s a variety of different safety and security initiatives, but that would include the work here and across the company. We have about 40,000 people who work on safety and security; the kind of very sophisticated investigative teams that work on this type of adversarial threat actor number somewhere closer to about 200. And that’s drawn from across the entire company.

Jeroen Lenaers (Chair): Thank you very much. Mr. Georgiou.

Giorgos Georgiou (Left): I think everything that we have heard is quite shocking. I think it’s really quite sad when we hear that even democratic governments are using these types of means. We’re talking about democratic governments that should in fact be controlling or putting checks in place for Google, for Facebook, for Microsoft. It shouldn’t be the other way around. And personal data should be protected in a democracy. We understand that we are facing a huge contradiction here. This is a struggle that touches energy, that touches diplomacy, war, military means. Let me ask a question that perhaps could be considered naive. Have you received threats from companies or bodies to, um, to not publish certain threats because it would suit them? Thank you.

Jeroen Lenaers (Chair): Thank you, Mr. Georgiou. A very concrete question. Maybe first, Mr. Agranovich.

David Agranovich (Meta): Thanks for the question. So, we know, in particular when we do this type of work, that there are safety and security concerns we have to consider, in particular for our investigators and our investigative teams. Those teams are geographically distributed around the world. And whether it’s our work on surveillance for hire, this particular issue, or sophisticated espionage activity or influence operations, that’s always something at the back of our minds. We have certainly taken steps to protect our employees and the folks who work on these efforts, but we have not, on the surveillance-for-hire front, faced any specific threats to the safety of our investigators or our teams. That said, I do think there is something worth considering, in particular for a body like this: doing this type of work, and in particular publishing the activity of these types of adversarial threat actors, whether it is the surveillance-for-hire topic we’re talking about today or the influence operations that another of my teams works on, does carry risks with it. Oftentimes you are pointing fingers at powerful governments that may not feel very great about the fact that you’re uncovering their activity. I think there is space to create some protections, in particular for when governments start issuing those types of threats against the people who are doing that type of investigative work, to encourage companies to continue being transparent on these issues.

Jeroen Lenaers (Chair): Thank you. Mr. Snyder.

Charley Snyder (Google): […]

Jeroen Lenaers (Chair): Thank you. Ms. Ciglic.

Kaja Ciglic (Microsoft): Yeah, maybe just to echo, you know, in complete agreement with my colleagues from Meta and Google: we do take steps to protect the teams that work on these issues. But also, we do not let the actors know ahead of publishing the information. Like Google, we have received pushback after it has been published, without going into specifics in terms of, you know, what has been published. And the one angle that I would add on top of it is that the threats might not only be physical, might not only be, you know, to the safety of individuals. We often go and publish information about attacks conducted by actors associated with big states, and these are oftentimes our customers on the other side as well. So, it’s a challenging dynamic. And, you know, the repercussions might not necessarily be just to safety, but also in terms of the business opportunity. So, I think about the statements, the positions that companies are making in this space as being pulled in different directions a little bit, by business interests, by interests in and commitments to human rights. And this is why, as well, it’s really important that there is a civil society angle working with both the companies and the governments.

Jeroen Lenaers (Chair): Madam Novak.

Ljudmila Novak (European People’s Party): You have talked about working together with governments. What more can governments do? I imagine we will come to conclusions in this committee at the end of our work, and there will potentially be legislation that will follow. It may be that threats come from outside of the EU too. So, I wonder how EU legislation can provide protection from such threats, from external threats. And does this type of legislation even make sense? Can we protect ourselves?

Jeroen Lenaers (Chair): Thank you. May I offer you the chance to respond in your mother tongue?

Kaja Ciglic (Microsoft): I will not respond in my mother tongue, because I feel I don’t know the words for things like export controls in Slovenian. But no, thank you for the question. I think there is a very valid point: we live in a very connected environment. Outside the things that we could do to regulate this particular market, to encourage states, as we were just discussing, to not invest in it and not drive it, whether these companies exist within the EU or outside, and there definitely are some that exist within the EU as well, I think there are also things that the European Union in particular has already been doing, which is things like encouraging and promoting legislation that increases privacy protections and encourages the private sector industry across the board, not just, you know, the big players, but everybody, to respect and invest in their technology with those privacy protections in mind. And then similarly with cybersecurity protections, whether we look at the NIS2 cybersecurity directive or at the coming Cyber Resilience Act. I think raising the levels of what exists out there, for everything from critical infrastructure providers to online companies, will not keep us completely safe, but will sort of help raise the bar of access.

Jeroen Lenaers (Chair): Thank you. Mr. Snyder.

Charley Snyder (Google): […]

Jeroen Lenaers (Chair): Thank you. Mr. Agranovich.

David Agranovich (Meta): Thank you. I’ll probably be echoing what my colleagues from Microsoft and Google have said, but maybe with some concrete items as well that would be particularly effective. One of the most important, something that Ms. Ciglic mentioned: governments can take the concrete step of not investing in these companies, whether that is by investing in them directly or by buying their products. That would also help substantially, I think, when we’re doing the work to counter this activity, because it takes away the ability for companies like NSO Group to claim that their products are only being used to protect people from terrorists and criminals, if governments aren’t playing the same game with them. In addition to that, some things that were brought up throughout this conversation: one is requirements for these firms to do some of the same due diligence we expect of other industries. Banks, for example, have know-your-client requirements, and there are know-your-customer requirements, whereas many of these firms at this point will say that they don’t necessarily know who’s buying their product or how it’s being used. I think there is space for regulation to require that of some of these surveillance companies, and in doing so make them somewhat more liable for the activities that their clients get up to using the products they’ve been provided. Finally, I think there is also space to create some expectations around responsible disclosure of vulnerabilities. I know that Ms. Ciglic at Microsoft talked about this in detail, but creating both the norms and perhaps even the requirement to disclose some of these vulnerabilities, or at least weigh the risks and the equities around the various stakeholders of having these vulnerabilities out in the wild, would be really effective. 
And then the final piece that I would note, and this came up, I think, in the last question: creating a space in which it is safe for companies to do the investigative and disclosure work that our firms certainly do, but that other, smaller companies in the space also do, because in return for doing that work they often face reprisals in particular markets or from particular governments.

Jeroen Lenaers (Chair): Thank you. Diana Riba i Giner. You have the floor.

Diana Riba i Giner (Greens): Thank you very much to all the speakers who have come to give us all this information. So, a couple of questions. If I understood Mr. Snyder correctly, at the beginning of your comments you spoke about attacks that occur without anyone clicking on any kind of link whatsoever. And I believe he said there is no way to protect yourself against such an attack. So, I wonder if, as a company, you are studying those types of attacks. According to other experts we’ve had in the committee, there’s an increasing number of these types of attacks, these attacks without links. And so obviously that’s very concerning to us as users and to us as members of this committee. So, is there any work being done by companies such as yourselves to see how we can defend ourselves against these no-click attacks? Mr. Snyder also spoke about three countries in the EU, I believe Serbia, Greece and Spain, that used Cytrox. So, could you tell us a bit more about that, about which users were affected? If you have any more information on that, we’d be very interested to hear about these European countries. And I don’t know if the three of you can give us any figures with regard to the types of attacks you’re observing, monthly or annually. Any kind of figures, any further data? How many users are being affected by these types of attacks? Just to note the magnitude of what we’re dealing with. Thank you very much.

Jeroen Lenaers (Chair): Since there is one concrete question to Mr. Snyder, we’ll start with you on the use of Cytrox in European countries. You have the floor.

Charley Snyder (Google): […]

Jeroen Lenaers (Chair): Thank you. Ms. Ciglic.

Kaja Ciglic (Microsoft): Thank you. I think I will echo a lot of what has been said. Maybe to start at the end: a little bit like was just said, I think it’s difficult to put a concrete number on it. Some of it is that we all look at just what we see on our platforms. So, Microsoft looks at Windows, it looks at Azure cloud infrastructure, and that’s kind of the view we see. And that’s why the information sharing with colleagues and others in the industry is very helpful. In addition, we don’t necessarily always connect an attack to a specific threat actor, whether it’s a private sector one or a government one. There are, you know, actors on the platform whom we perhaps have not yet identified, but where we have identified the threat. And so, we mitigate the threat, but do not necessarily go back to figure out that it’s coming from a specific group. But a little bit like Mr. Snyder said, it is a daily thing rather than an annual or monthly thing. Similarly to Google, we send thousands of nation-state notifications out every year. It is an industry, for lack of a better word, and it’s a practice that is growing, both in terms of the numbers of different actors, the numbers of different governments engaging and investing in this area, the numbers of attacks, as well as the sophistication. In terms of what we are doing about the zero-click attacks: I think the thing that needs to be really underlined is that as an industry we have a strong responsibility to invest in secure design, secure architecture, and we are committed to doing that. But a little bit like Mr. Snyder said, I think we are not anywhere near the day where there will be zero vulnerabilities in the systems. 
But I think what is important is to also ensure that the lessons that some of the larger players have learnt over the years in improving our systems are shared with the wider technology community, because there is a lot of interplay between the different applications that might be on a device, whether it’s on this laptop or on the phone. And they are not necessarily always developed by companies who have the same resources or the same knowledge or the same history. Some of the bigger companies have invested millions, billions in some cases. And so, ensuring that we work with them as well, to ensure that the applications that are put on the systems are as secure as possible, I think is also something that will be important.

Jeroen Lenaers (Chair): Thank you. And Mr. Agranovich.

David Agranovich (Meta): Thank you for the questions. I’ll also be echoing a lot of, I think, what my colleagues from the industry were saying as far as the pace of operations that we see on the platform. It’s a constant back and forth: in our surveillance-for-hire report from December we talked about seven firms specifically, but the teams are constantly both investigating this activity from firms and entities, as well as taking actions to block new malicious domains that were created after we blocked older malicious domains, and deploying detection for malware on the platform. That said, maybe to give a sense of scope: in the December report, when we did the seven enforcements on those different firms, we notified 50,000 users around the world whom we believed were being targeted by any number of those seven different firms. And we consistently send a similar volume of notifications to potentially targeted users throughout our enforcement throughout the year; we don’t just wait for the disclosure announcements to do that. But I think the 50,000 number is helpful in painting kind of a picture of just how broad the targeting from seven different companies was.

Jeroen Lenaers (Chair): Thank you very much. Then I have one question of my own, and I’ll pass the floor for one final follow-up question to our rapporteur. But I’m struggling a little bit to get a sense of perspective here as well. All three of you have thrown around some very impressive numbers. Microsoft: 20 billion in five years, 6,000 people. Google: 10 billion in five years. Meta: 16 billion already since 2017, 40,000 people. And yet a company like NSO Group, which has 500 people, manages to find so many zero-day vulnerabilities that apparently your three companies cannot find before they find them. So, what is the perspective? Do they have better people? Do they invest more? Which I cannot imagine. And do you really feel that what you are doing, as impressive as it sounds, is adequate in order to prevent this kind of abuse of vulnerabilities? If I hear from Meta that that’s 40,000 people on security, but 200 that are sort of concerned with this type of activity, is that really sufficient? And I have heard some of the colleagues, some of the speakers, say we need to also invest in infrastructure, and we need also to have civil society and academics and researchers help us. But, I mean, you do not have insignificant budgets, especially compared to the companies that are using those vulnerabilities. So just one quick question: is what you’re doing really enough, or should you also be doing more yourself? I’ll pass the floor first to Ms. Ciglic.

Kaja Ciglic (Microsoft): Thank you. Difficult question. So, on the matter of perspective and sort of the juxtaposition, it’s important to remember that the only job some of these groups have, on behalf of their clients, is to break into the system to achieve X. So, you know, they need to be right one time, whereas the systems in reality have many billions of lines of code. They need to find just that one angle to get in, if we’re talking about zero-day vulnerabilities. Oftentimes, of course, there are also the questions of phishing and sort of user awareness, where again, and I agree from the company perspective that we need to do more, we as a society more generally need to increase awareness among all of us of how to behave securely online, to protect against some of these issues. In terms of the numbers of people, the other thing I would say, particularly about the numbers that Facebook mentioned, and particularly for those that deal with these highly sophisticated attackers: I would say there are just not that many experts out there at the moment. And I think that’s also a problem. Overall, there is a shortage; across every country, everywhere, there’s a shortage of cybersecurity-skilled professionals. For the people who deal with the most sophisticated actors, the shortage is even bigger; the numbers available are even smaller. And it’s a combination of people who have an understanding of the systems, awareness of the systems, but also, particularly as these are oftentimes actors associated with nation-state threats, we need to ensure that they’re vetted and are, you know, not associated with particular state actors as well. So that’s, I think, thinking about sort of the security measures being put in place too. And this is partly why these kinds of elite groups, I would say, are fairly small.

Jeroen Lenaers (Chair): Thank you. Mr. Snyder.

Charley Snyder (Google): Yes. Thank you for saving such a difficult question for near the end. It is a very good question. I would echo the same comment that my Microsoft colleague made, which is that, and this is somewhat of a trope or a stereotype in the industry, but, you know, attackers need to be right once; defenders need to be right every time. And in general, there is an assumption that in this world of cyber operations the rules of the game favour the attacker and not the defender. That said, in terms of what Google sees on its platforms and the breadth of activity that we facilitate every single day for users, these activities are very, very rare. And the number of users that are delivered, for instance, our government-backed attack warning is a minuscule percentage of the overall users who use, for instance, Gmail. I do think that over time security is improving. I do genuinely believe that. If you just look at Gmail, for example, it is a thing of the past that your inbox is filled with spam, or, you know, I’m old enough to remember when you’d get all these pop-ups on your computer such that you could barely use it. I think trends are moving in a positive direction, in terms of taking for granted that your inbox isn’t going to have, you know, every other email be a landmine. But we do need to continue driving up costs here for attackers. And one thing I would note on that is: while zero-day vulnerabilities are very scary and can be very sophisticated, and certainly, you know, the iMessage zero-click exploits are very sophisticated, it’s actually perhaps a sign that we are driving up costs for attackers and making it harder to compromise users that they are having to use these sophisticated exploits. 
It’s actually better to see that they’re using exploits which are costing attackers millions of dollars and are very, very rare than to see adversaries compromising devices with five-year-old vulnerabilities because those devices haven’t been patched, or users haven’t taken action to patch them. We want to make it harder and harder. We want to make it so that the only time we see compromise on our platform, it’s extremely sophisticated, because we’re clearing out all of the low-hanging fruit. And that’s a hard thing to quantify. Our Project Zero team released a report earlier this year documenting that we are seeing a rise in the zero-day vulnerabilities that we’ve observed. And, you know, that could be because there’s better disclosure and reporting around these things, but it could also be because we’re forcing attackers to work a little bit harder and to discover vulnerabilities, rather than just continuing to leverage old vulnerabilities that are still active and still work.

Jeroen Lenaers (Chair): Thank you. Mr. Agranovich.

David Agranovich (Meta): Thank you for the question. So maybe I’ll take those two pieces: one was the questions around the resourcing, and the second was, how are we really doing over time as defenders against a persistent attacker? On the resourcing piece, I would echo the points that Ms. Ciglic from Microsoft mentioned. It is difficult to find people to do these types of sophisticated investigative roles. And it’s not because the companies aren’t paying them enough; it’s because there just aren’t that many people in the entire global cybersecurity industry for each of these companies to have massive, massive teams doing this work. The companies that you have represented on this panel here probably each have amongst the largest teams in the industry doing this type of work, and even then, those teams still seem comparatively small. That said, part of the reason why civil society partners like Citizen Lab, for example, or Amnesty Tech are so important is that they can do two things that technology companies or platform companies often can’t, not through want of resources, but because they are probably not the right actors in our broader societal framework to do them. The first is that groups like Citizen Lab or Amnesty do a really impressive job of pulling the threads together, and I believe one of the other people in the room had mentioned the DFRLab as well. Groups like the Atlantic Council or Citizen Lab or Amnesty do an incredible job of pulling together the disparate threads of activity on each particular platform to create a more holistic understanding of what’s actually happening out there. To step out of the surveillance space for a moment: influence operations is another area that my team works on. 
And one of the reasons that we share our influence operations threat information with the Atlantic Council or a couple of other academic researchers around the world is because oftentimes these threats, whether it’s surveillance or espionage or something else, are not confined to one platform. But Meta is probably not the best organisation to be pulling together what is happening on Google, and Google probably isn’t the best organisation to pull together what’s happening on Facebook; some of these civil society organisations, though, can take information that they get from platform reporting and build a holistic picture of what’s happening. The second thing that these civil society groups can do is enable other parts of this civil society framework, particularly journalists and activist communities, who are either mistrustful of the platforms or don’t necessarily know where to start. They may be concerned that they’ve been compromised or are being targeted, but going platform by platform by platform is probably not going to be the most effective way for them to figure out what’s happening. And we’ve seen this work. Meta, and I’m sure the other folks on this panel, fund a lot of these organisations to enable them to do some of this work. The Amnesty example that I used earlier: after we did our surveillance-for-hire report in December, a bunch of people were calling the Amnesty Tech helpline to get assistance in figuring out what steps they could take to protect themselves and what steps they could take to discover whether they had, in fact, been compromised. And we’ve seen that then lead down the road to people deploying, for example, some of the toolkits that Amnesty has put together to identify compromise by, for example, Cytrox. So, I think that is really important. 
Civil society actors can play that particular role, where platform companies just probably aren’t the right ones to do it. Now, perhaps we can help to fund and build those communities, but that’s not a role we can play ourselves. The last piece is on the attacker-versus-defender advantage, and here I’m just echoing what my colleague from Google said: in the short term, attackers often have the advantage. They just have to break through once; we have to catch them every single time. They spend all their time doing this; our companies oftentimes have a lot of different things going on. But over time, I do believe that defenders can have an advantage if we are working together to meaningfully constrain the operating space for the attacker. What does that mean? It means companies like ours doing investigations and public disclosures, disrupting these operations, shipping patches for vulnerabilities and hashes for malware. But it also means some of the conversations like the one we’re having today: governments, one, choosing not to use these products, but two, constraining the impunity with which surveillance-for-hire companies have been able to operate over these past few years. Together, by raising some of these costs and lowering the benefits of the operations themselves and the products that they’re offering, I do believe that we can get ahead of this threat. And I think, as my colleague from Google mentioned, you’re seeing that trend over time be very positive, even if the individual stories can at times be rather frightening.

Jeroen Lenaers (Chair): Thank you. And then I pass the floor to our rapporteur Sophie in ’t Veld for a follow up question.

Sophie in ’t Veld (Renew): Uh, before I put my questions to the three speakers, I have a question for you, Chair, because we were also supposed to have Apple on the panel. They have declined, I understand. Can we agree that we put our questions to them in writing? Because I think they have a lot of important information to give us. Unless, of course, they are willing to come at another time.

Jeroen Lenaers (Chair): No, I can answer that immediately, because in the end it was more of a timing situation why they declined. So, they are willing to come, hopefully at a later moment, and discuss with the members of the committee. I should have announced that when I announced the speakers, but there was a scheduling issue.

Sophie in ’t Veld (Renew): Okay. All right. I’m reassured. Okay. I’d like to add a little dimension with my last question, because this whole discussion gives a little bit the impression that it’s very clear-cut who the good guys and the bad guys are. The trouble is that I think in many cases, in most cases, the good guys are at the same time also the bad guys. The governments who are supposed to protect us against this kind of intrusion are in several cases the same governments who are actually intruding. And you said something very interesting, Ms. Ciglic: you had a sentence where you said “the democratic West”, and that’s also, you know, telling, because we also know that governments have been spying on us the whole bloody time, even before spyware was invented. And that was also the West. The US government has been spying on us, and the Brits, and EU governments have been spying on their own citizens. Very often they spy on each other’s citizens because they have no right to spy on their own citizens; they just do each other’s dirty work and then exchange. So, whereas on the one hand they’re saying, you know, we want to protect our citizens, at the same time they’re making every effort to weaken the architecture and weaken the protections. And it’s very often, you know, justified by the fight against terrorism, the fight against child abuse, the fight against social security fraud. I mean, you all know the justifications. And this makes it very difficult. And this is why I’m coming back, at the risk of you believing that I’m obsessed, to the whole situation with Peter Thiel, Palantir and what have you. It’s just one example, I’m sure there are many more, but he was, or I believe still is, on the board of Facebook. He’s also been funding a company which is working together with Raytheon, which is working for the American government. Palantir is also working for the American NSA. 
And we hear rumours that he’s also interested in investing in or maybe even buying parts of NSO, in connection with the blacklisting. I mean, it’s all one big spaghetti of interests. And he’s also one of the biggest funders of Donald Trump. I mean, there you have all the connections with the American government and the military and Israel and all that. And Facebook. So, it’s not so straightforward. Because, for example, I would really like to understand: when you find traces of somebody having been targeted, or of an attempted hack, even that is politically relevant; whether the hack succeeded or not is not necessarily the most important fact. If governments, for example, are trying to hack the phone of a European Commissioner, it’s irrelevant whether it succeeded or not. The mere fact that the government has tried is the relevant and political fact. Now you say that sometimes you can trace it back to governments, maybe via a middleman who’s been acting on behalf of a government. But then, do you always do that? Do you record those findings? And how does that then relate to the fact that you also need to do business with those same governments, those same governments who may be regulating you and overseeing you? I find that very complicated, because everybody has different interests and different hats. If you expose European governments as spying on their own citizens, then you’ll also get in trouble with those same governments, who are legislators and supervisors. Then, on the European targets: we’ve heard a lot of figures that were very general, global figures. I would really like to get, in writing if you prefer, it doesn’t have to be here if you don’t have them available, all the figures, detailed figures, about this kind of activity detected by you in the European Union, targeting EU citizens or even involving EU governments. 
And one last question is again on the stockpiling, because there again I think there is a conflict of interest. You’ve given the example of governments, and I presume the US, being one of the main security forces in the world, is leading in this, stockpiling the vulnerabilities which can be exploited by others. So that means that, on the one hand, the US, for example, and we are talking about big American platforms here, okay? So, the US are stockpiling these vulnerabilities, thereby exposing all of us to a risk. And there again, the US is, let’s say, your home base. You have a lot of dealings with the US government. I mean, this gets very, very complicated. So, I’d like you to say a little bit about these three things: the conflicts of interest that you are also caught in, figures for Europe, and your efforts, or rather your findings, when you find traces of a government or government-sponsored actor targeting EU citizens or EU institutions. Thank you.

Jeroen Lenaers (Chair): Thank you, Sophie. We’ll keep the original order of the replies. So, Ms. Ciglic, you have the floor first.

Kaja Ciglic (Microsoft): Cool. Thank you. I think you’re exactly right; this is not an easy area, whether you look at, like I mentioned earlier, how the private sector needs to operate within these frameworks, or at governments. And I think all governments, not just the US, probably all governments that invest in offensive operations in cyberspace, whether directly or by acquiring through private sector actors, have these different equities they need to balance against each other, and they come from different parts of government. The intelligence communities have their own things they want to promote; the trade and economic departments have a different agenda; law enforcement, again, another one as well. And human rights hopefully play an important role. I think that is why it is important that we are as transparent as possible. And Microsoft does talk publicly about the things we see, because, as was mentioned earlier, a light needs to be shone on both the private sector actors and the government practices in this area to encourage democratic oversight. There are mechanisms in place in many places; they’re not necessarily optimal, and there are improvements to be made. At the same time, there are also lots of governments, including democratic ones, that don’t have the oversight mechanisms in place yet. And some of the reason is that this is such a complicated area, with lots of different interests and lots of different players. Microsoft has advocated on some of these issues at the UN as well, in the discussions that are taking place around, for lack of a better word, cyber warfare, around responsible state behaviour in cyberspace.
Again, lots of different equities in place, and lots of concern, I think, by many players about what can be referred to as unilateral disarmament in the online environment, for those reasons that you’ve just outlined. All states, whether it’s our allies, whether it’s EU countries, whether it’s the Chinese or the Russians, are playing in this area as an area of strategic competition. And the private sector companies are in a position where we are the providers of technology; we are the people who are responsible for securing our customers, whether it’s private citizens, enterprises, politicians or governments; and at the same time we are the platform, the field where the battle is taking place, and an area where states are actively undermining the ecosystem. So, it’s a little bit like I said earlier. This is also why I think it’s critical that civil society is there as well and pushes both governments and industry to act responsibly in this area, because we need a third-party advocate, I would say. On stockpiling vulnerabilities: as I said earlier, it’s a challenge. In the US government, the Shadow Brokers hack, the one that led to WannaCry and NotPetya, was a hack of a stockpile of their vulnerabilities that led to two large attacks on Microsoft systems by another government that used what they had. And this is why we encourage and call on states to act as responsibly as possible and to limit this type of activity.

Jeroen Lenaers (Chair): Thank you. And the question on the providing the numbers.

Kaja Ciglic (Microsoft): Yes. I will have to get back to you, but because I have to take a look.

Jeroen Lenaers (Chair): Thank you. Then we move to Mr. Snyder.

Charley Snyder (Google): Thank you. Thank you very much for the questions on the balancing act. It is indeed a tricky balancing act. We publicly disclose hacking activity emanating from countries that we also do business with. At Google, I think we try to take a very principled approach to this. We feel that hacking in general, and the use of commercial hacking tools specifically, as well as a broader range of hacking and malicious activities online, make the Internet less safe. And we think a safer, more secure Internet is in everyone’s interest, all governments of the world included. So, we take this principled approach to disclosing malicious cyber activity when we see it, regardless of where it emanates from. For instance, we have disclosed activities about Western counterterrorism campaigns, as has been reported. These can be uncomfortable conversations that we sometimes have to have when we go public with our detections. But our response is that we are trying to be a responsible player in this ecosystem and protect our users; this is what societies should expect of companies, that they call out this activity when they see it. And as for any particular government: us taking this principled approach now lets you know that in the future, if we detect hacking activity against your government, we will disclose that, too. So, sticking to a principled stance here, we think, makes the most sense for someone in our position, and we think it is what is most likely to improve online safety for everyone. In terms of global figures, like Ms. Ciglic, we’ll have to get back to you on that; I can’t quantify that off the top of my head. And in terms of stockpiling, it is very concerning to us.
I would note that, as I mentioned, the Shadow Brokers leak was one such example of one government’s stockpile being leaked and having serious impacts. Many governments, including European governments, also stockpile vulnerabilities. And several European governments are some of the most sophisticated offensive cyber operations players that we track. One thing I would note is that some governments have taken the step of increasing transparency over their decisions about when to keep a vulnerability for exploitation purposes versus disclose it to improve the security of the ecosystem. We by no means think that that is a satisfactory approach here, but it is a positive step. And we would like to see more governments take that step of having a defined process by which they determine and ensure that both the offensive side and the defensive side are having that equities conversation about whether to keep an exploit for use or to disclose it to improve the security of the ecosystem. But more broadly, we remain concerned by both companies and governments stockpiling these in secret, and we think there needs to be more transparency over these activities.

Jeroen Lenaers (Chair): Thank you. Mr. Agranovich.

David Agranovich (Meta): All right. Thank you. Starting, I think, with the first question. It’s interesting: I think that all of these companies have a responsibility to protect the people on our platforms. That is the first and foremost, most important thing for us as we’re thinking about enforcement here. We’ve created a platform for people to meet, connect and use the various services that we provide, and that comes with a responsibility for us to ensure that, as they’re doing that, we are doing everything we can to keep them safe from being targeted with malware, from being surveilled by either governments or private companies offering services to those governments. And so, at the end of the day, we have to do this work regardless of who is doing the targeting, regardless of who’s buying the surveillance-for-hire products. Our responsibility is to ensure that we’re doing everything we can to keep people on the platform safe. And to a point that I think came up earlier in this conversation: governments, democratic governments in particular, can lead by example here. Platform companies almost universally have lawful mechanisms to request information that are subject to transparency disclosures and subject to oversight, oftentimes by the court systems of the country in question. And law enforcement organisations can and should use those lawful mechanisms to collect information that they might need for a lawful, predicated investigation. That creates democratic oversight of that behaviour. The use of services designed to get around those processes, by enabling surreptitious access to a target without having to go through any sort of court system, poses a significant danger to the democratic process, and it enables the type of abuses I think that you in particular were citing, which is really concerning.
And so I think we would agree with the concern that governments that engage in this behaviour can also propagate the bad activity, but that, particularly in democracies, there are all these opportunities for us to create systems and oversight and transparency to enable necessary law enforcement activity. On the numbers of targeting in the EU in particular: I do believe it’s something we have, but we’ll follow up in writing, in particular around the notifications sent for surveillance-for-hire companies. We did observe targeting of individuals in the EU and in specific countries. And then finally, on how we would handle findings of government-linked actors: similar to our colleagues across the industry, when we find these operations, we have set ourselves quite strict attribution standards, so that we will only attribute something based on on-platform evidence at high confidence. We do that to make sure that we’re not speculating. But following that framework, we have taken down operations from all over the world, in many cases operations that we’ve attributed to governments specifically. And in the December report on surveillance for hire, we took down firms that were based in a number of countries, including in the United States, some of which claimed to sell their products to governments or organisations in the United States and in Europe. And so, we’ll take these actions when we find them, and we have tried to create a principled framework around doing that work, to ensure that we are putting the safety of our users first, understanding that it may lead to difficult conversations down the road.

Jeroen Lenaers (Chair): Thank you very much. And also thank you to all three of you for the commitment to come back in writing on the question regarding the numbers. I have one more question, from Mr. Lebreton. We have closed the speaking list, but I’d like to allow you the option to ask the question. We have another panel with very distinguished experts waiting for us, and we also need some time for them. So, if you could be brief, that would be very helpful. Thank you.
Yes, well.

Gilles Lebreton (Identity and Democracy): Thank you. I agree with the previous speaker that the good guys can sometimes also be the bad guys. It’s true that some countries are certainly not exemplary here, but I’m thinking now of the big technology companies. We’ve had representatives from these companies here with us today. They talk about how they are protecting users from external attacks, and I’m sure that you do; that is the good side. But there is also the less positive side that was shown to us by Michel Arditti yesterday. These big tech companies collect a lot of data. They keep it, they exploit it, they sometimes sell it on to other companies, who use it for targeted advertising, for example, and all of this without any controls in place. So, my fairly frank, possibly brutal question is: do you not think this type of use of our personal data, without our consent, is also a form of spying? Thank you very much.

Jeroen Lenaers (Chair): A very clear question. And to the panellists, the request to try and keep the answers brief, so we can also allocate enough time to the next panel. Let’s keep the original sequence, so Ms. Ciglic-

Kaja Ciglic (Microsoft): I’m sorry, I wasn’t here yesterday, so I actually don’t know what the discussion was. But I think the practices of companies that specifically go out there and try to undermine platforms and systems on behalf of clients are very different to what I think you were referring to. But again, I’m not entirely sure about the context, so I don’t want to go into too much detail.

Jeroen Lenaers (Chair): Thank you, Mr. Snyder.

Charley Snyder (Google): Likewise, I’m not entirely sure of all the context. I believe the question is whether there is any equivalency between commercial surveillance vendors and the practices of platforms. I want to be very clear: I believe there is no moral or technical equivalency between commercial surveillance vendors, which produce dangerous hacking tools to gain unauthorised access to user data and accounts, and which seem to be made available to the highest bidder with very little transparency, and companies producing consumer services that are designed to help citizens and that citizens want to use. A distinction I would also draw is that tech platforms operate with transparency. For many years we have tried to make thoughtful decisions about when, how and why data is used in our products and how users can control or remove it, and many of these conversations have been playing out in civil society. By contrast, commercial surveillance vendors operate in the shadows, and there is no transparency over their activities. From what we know, based on the research and hard work of the organisations on this panel as well as others, it seems that their activities are in furtherance of things that are antithetical to democratic values: surveilling dissidents, journalists, human rights groups, et cetera. On the other hand, companies like ours do a lot of work to help and protect these high-risk organisations, and I mentioned earlier the work we do to get our account security controls into the hands of human rights workers, to protect their accounts from exactly this type of organisation. So, I do think there is a very large difference between these two types of organisation.

Jeroen Lenaers (Chair): Thank you. And Mr. Agranovich.

David Agranovich (Meta): Thank you. And I would echo a lot of what Mr. Snyder and Ms. Ciglic mentioned. I also didn’t catch the conversation from yesterday, but I do think that there is a meaningful difference, both on a moral and a technical level, between the type of work that our teams are doing, providing a platform and a service that people use voluntarily, and that has been architected, over the course of many often difficult conversations and regulatory conversations, to provide more transparency about the type of information platforms are allowed to use, how long they’re allowed to retain it, and what types of protections those platforms need to create around an individual user’s data, and, on the other hand, companies that exist almost for the sole purpose of trying to undermine or evade the safeguards and protections that the platforms themselves, and regulatory organisations like this one, have put in place around big tech company platforms and how they’re architected. The fact that in many cases our platforms are resourcing and building sophisticated investigative teams to look for, disrupt and counter the activity of these types of surveillance-for-hire companies, that we’re in a constant back-and-forth battle to keep them from being able to access information about the people using our platforms, or from using our platforms to try and target people, is a testament to the fact that there’s a fundamental difference between our industries. We have put significant effort, time and expertise into detecting, and making harder, efforts to essentially get around end users, get inside and take data from them, and then oftentimes retain it in ways that probably don’t actually comport with law or regulation. Whereas these firms, not just operating in the shadows and operating at the behest of whoever is willing to pay them, have architected their entire product line around the invasion of an individual’s privacy without their consent.
So, I do think there is a fundamental difference, and I think it’s important, in the course of trying to address the surveillance-for-hire industry, that we make these differences clear, given that the defender community includes many of the companies that are represented here, as well as a number of others that aren’t represented here but that are dedicated to doing the hard work of trying to hold companies like NSO Group responsible.


Panel 2

Jeroen Lenaers (Chair): Thank you very much. That concludes our first panel today, with great thanks to the three panellists for taking the time to be with us. I think you shared a lot of valuable information, both about the work that you do yourselves in relation to this topic and about some potential directions for solutions, also at the European level. So, thank you very much. Feel free to stick around for the second panel, and we’ll be in touch on a number of the issues that we’ll come back to later, the numbers, but also the lists of vendors, et cetera, because I think that will be very valuable information for our committee. I would take more time to thank you, but I want to move straight to the second panel, because we have two very distinguished experts with us for the second part of today’s hearing: Professor Ross Anderson from Cambridge University and Ms. Patricia Egger, who is a security risk and governance manager at Proton AG. So, I would open the floor immediately and pass it to Professor Ross Anderson, who is connected online. Please, Professor Anderson; my apologies for the delay. I hope you were able to follow the discussions we already had this morning, so please also feel free to react and comment on those. I pass the floor to you for about 10 minutes.

Professor Ross Anderson (Cambridge): Great. Many thanks. Sure. I wonder if I could share a few slides. I’ve been changing these quite a bit over the last hour as many of the things that I planned to say have already been said.

Jeroen Lenaers (Chair): And have you sent the updated slides to our services?

Professor Ross Anderson (Cambridge): No, I haven’t sent them out. Can I show them from my software, or should I just speak?

Jeroen Lenaers (Chair): I’m looking at the technical team here. Well, if you could send them now, we can use them and operate them from the room here. I don’t think it’s possible to share the screen in this particular room.

Professor Ross Anderson (Cambridge): I’ll send them now; that will just take a second. I’m going to speak about the equities issue and whether regulation of spyware is feasible. So, I’ll just get on with my remarks, and I trust that the slides will catch up with us in a few seconds. Several of the speakers in the previous session talked about the equities issue, and this is something with which we’ve been engaged for over 30 years now in the information security world. Security, let’s remember, is not just the foundation for people’s privacy; it’s increasingly entangled with safety, now that we’ve got software in cars and medical devices and railway signals and children’s toys. And the broader picture is that governments need to mandate security for the same reasons as data protection and device safety: markets don’t provide the right amount, for the usual familiar reasons. Now, what makes it difficult is that some agencies of government, particularly law enforcement and intelligence agencies, want to break security from time to time. And we have seen this in the context of what’s called the crypto wars. If you’ve got my slides by now, this is slide number three. We have, over the past 30-odd years, seen repeated attempts by intelligence and law enforcement agencies, led by the US, to retain information dominance. Up until 1993, they obstructed civilian cryptography by controlling the export of crypto hardware, except for things like ATMs. From 1993 to 2000, the Clinton administration demanded access to all cryptographic keys, and the EU played a big role in ending that phase of the crypto war when Commissioner Bangemann put through the Electronic Signature Directive, which undermined it. From 2001 to 2015, the agencies demanded, and to a large extent obtained, access to information on servers; Snowden told us about the PRISM system.
Since 2015, the agencies have been demanding access to clients, and a number of themes have run through the whole story. Now, one of the things that Ed Snowden disclosed to us, if we go on to the next slide, is the Bullrun programme, as it’s called at the NSA, or Edgehill, as it’s called at GCHQ, which showed that the NSA was spending over $200 million a year to undermine civilian cryptography and had over 140 civilian staff engaged in this. If we move now to the next slide, we see the tasks of this organisation, including inserting vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communications devices used by targets. And on the next slide, we will see that this frequently involved messing around with standards. The NSA famously interfered with a standard for random number generators so that it generated weak random numbers that NSA cryptologists could predict. This got adopted by Juniper Networks in their routers, and it then got exploited by the Russians and others. So the idea that you can create vulnerabilities that only your side can use is a very dangerous one to engage in. And if we can move to the next slide: I want to just remark in passing that there has been substantial collateral damage from the crypto wars up until now, and the problem is getting worse, because since the Stuxnet attack on the Iranian centrifuges about 13 or 14 years ago, other governments have been wanting similar capabilities. Here in the European Union, in the UK and elsewhere, there are in fact millions of electronic door locks still using the Mifare Classic standard, which Philips in the Netherlands sold, and which export controls required to limit the key length to 48 bits. And we still have in many buildings, including some of the locks in the building where I work, locks that are easy for people to clone.
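To put the 48-bit key length Professor Anderson mentions in perspective, a quick back-of-the-envelope calculation shows why such keys are considered weak; the trial rate below is an illustrative assumption, not a figure from the hearing:

```python
# Rough brute-force cost of a 48-bit key (as in Mifare Classic)
# versus a modern 128-bit key. The keys-per-second rate is an
# illustrative assumption, not a measured figure.

KEYSPACE_48 = 2 ** 48    # all possible 48-bit keys
KEYSPACE_128 = 2 ** 128  # all possible 128-bit keys
RATE = 10 ** 9           # assume one billion key trials per second

seconds_48 = KEYSPACE_48 / RATE
print(f"48-bit keyspace: {KEYSPACE_48:,} keys")
print(f"Exhausted in about {seconds_48 / 86400:.1f} days at {RATE:,} keys/s")

years_128 = KEYSPACE_128 / RATE / (86400 * 365)
print(f"128-bit keyspace would take about {years_128:.2e} years")
```

At this assumed rate, the whole 48-bit keyspace falls in a few days on modest hardware, whereas a 128-bit keyspace remains far beyond reach; and the practical Mifare Classic attacks were faster still, exploiting cipher weaknesses rather than raw search.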
Car theft in the European Union has almost doubled in the past five years, helped by weak cryptography, much of which dates back to original designs in the 1990s, after remote keyless entry was mandated in European cars. And we find that Bluetooth is easy to hack, which exposes people and domestic devices in various ways. Now, if we move to the next slide: the US response to the Snowden disclosures was that President Obama set up the NSA review group of distinguished experts, who made a series of recommendations, and all of them were accepted by the Obama administration except the line on the equities issue. Their line was that defence should be prioritised over offence, and vulnerabilities should be kept only if there was a compelling reason to do so. But the US government declined to follow this. What we’ve also learnt in the last couple of years is that US agencies are by far the biggest customers of the cyber arms manufacturers. So, companies like Vupen, for example, started in France, but quickly found that they had to put their main office on the Beltway, because that’s where the customers were. The US also leads governments in stockpiling vulnerabilities rather than fixing them, and has also, as previous speakers have remarked, repeatedly lost tools that are based on them; WannaCry was just one of the obvious examples, which caused pain in Britain’s health service. Next slide, briefly: academic colleagues and I have done a number of studies of this. Our Keys Under Doormats paper in 2015 demonstrated that exceptional access to all data would cause greater damage still and would undermine and reverse modern security practices. Last year we came out with a paper, Bugs in Our Pockets, in response to Apple’s proposal to implement client-side scanning on its iPhones. So now let’s move on to the next slide: what should the European Union do now?
Well, Commissioner Johansson has recently proposed a regulation that would make chats and images on phones easier for law enforcement to access, and indeed to search remotely at scale. And I am pleased to see that this is now getting pushback from a number of member states. The alternative approach is what the European institutions adopted in Directive (EU) 2019/771 on sales of goods, which a number of us helped to push in various ways, and which set out to mandate patching. This directive set out to improve the safety and security of everything from cars, through domestic appliances, to children’s toys. Next slide, please. So, on the issue of regulating the cyber arms trade, which previous speakers have also brought up, some relevant experience we have goes back to the Arab Spring. Almost ten years ago, a number of UK NGOs, including one with which I work, tried to persuade the export control organisation to stop a UK company, Gamma, which was selling surveillance software to President Assad of Syria. He was using this to monitor people’s mobile phone calls and decide who he was going to arrest the following day. And this was sold by one of their German subsidiaries through a front company in Dubai, so it was quite complex. And what we discovered is that GCHQ resisted very, very fiercely. The reason is that they wanted their spyware, this UK-German spyware, on Assad’s mobile network, rather than competing products from Israel or Ukraine. And their motivation was perfectly simple: back then, in 2012 and 2013, there was some possibility that British troops might end up in operations in Syria, and they prioritised getting intelligence on what was going on. So, the intelligence mission took priority over human rights, as it does again and again and again in our country and in America, and indeed, as Sophie in ’t Veld pointed out, in other countries too.
So, here’s the problem: how can EU institutions navigate the equities process, particularly when member states grant the licences? In that particular case, the two member states who had some say over that licence were Britain and Germany; the EU wasn’t in the game. So, if we move on now to my final slide, to sum up. Next slide, please. Yes. Device hacking is something that exists. A number of member states use it in order to get targeted, warranted access to the phones and other devices of serious criminals. This is something that’s regulated by law, and it’s certainly less objectionable than bulk access at the server end, which can enable the authorities in countries like China to search through absolutely everybody’s email and chat all the time. Device hacking also has certain things to recommend it in terms of human rights law and in terms of operations. But you have to be very careful about trying to make it easier, because it’s mostly used by hostile state actors. Remember, the main customers of the cyber arms manufacturers are not the world’s police forces or the world’s intelligence agencies, and EU citizens are much more likely to end up being the victims of such action than its beneficiaries. So, what’s the best security strategy for phones? Well, I believe it’s the same as with cars. You incentivise attack detection, you incentivise device patching, and you work with industry best practice rather than trying to re-engineer what has been shown to work in terms of coordinated disclosure. What the European institutions might usefully do is to use and improve the export control regime, because there has been some progress on that over the past seven years or so. Let the victims sue the arms vendors for damages: this would certainly change the game if victims could get together and bring mass legal actions whereby they get millions of dollars of damages from the arms vendors.
And if the institutions aren’t in a position to do this, let the victims do it themselves. And above all, I would ask members of the European Parliament: whatever you do, don’t mandate spyware on citizens’ devices. That’s not going to make things better; that’s going to make things worse. The lesson from the USA is that once you start putting vulnerabilities in stuff, the vulnerabilities start being exploited by others. There is no such thing as a vulnerability that can be used by nobody but us. It always leaks, and it always ends up being abused. Security is the foundation for privacy and for safety and for much else, and it’s strongly in our interests to put that first, rather than offence. Those are my comments. Thank you.

Jeroen Lenaers (Chair): Thank you very much, Professor Anderson, and thank you for the updated slides. Thank you also to the technical team for managing to get them on the screens very quickly. We’ll now hear from Ms. Patricia Egger, who is a security risk and governance manager at Proton AG, and then we’ll open the floor to the members for questions. If any members would like to take the floor, please indicate so, so we can start immediately after Ms. Egger’s contribution. You also have about 10 minutes. Thank you for being here.

Patricia Egger (Proton AG): Yeah. So, thank you to the PEGA Committee for holding this hearing today to speak about Pegasus and spyware in general. It is an extremely important topic, and it ties into some other really fundamental questions about software, privacy and cybersecurity. I’m here today on behalf of Proton, a Europe-based company that aims to provide a privacy-by-default technology ecosystem. We’re firm believers in end-to-end encryption, which underpins our products. In fact, Proton Mail was designed to protect some of the very people that were specifically targeted by Pegasus, so we are very much on the same side here. And because we have seen the need for this type of technology, we are further trying to broaden our offerings and reach, and therefore protect more and more people. I’d like to start by pointing out that, as much as the revelations surrounding Pegasus seem to have shocked the world last summer, there was not much there that surprised the cybersecurity community. Pegasus wasn’t that big a story in this community. That’s not to say that Pegasus is not sophisticated and scary; it absolutely is. But its existence and use relied on concepts that are really nothing new, and that did not surprise many of us. It was just one instance of challenges that we are very familiar with. So, to elaborate on this, and so that we’re all on the same page, I wanted to take a step back and break down how this works. I apologise if you already know this, but, as was mentioned a few times, it all starts with a decent amount of pre-work, in which the spyware provider does extensive research into security vulnerabilities that may help them get their spyware installed on a target’s device. In our world, these are referred to as zero-days, meaning basically that the vendor is not aware of them yet, and therefore no fix is available.
Some of the installation vectors require the user of the device to do something, and others don’t. The ones that don’t require any interaction are known as zero-clicks. And these are particularly scary, as it’s basically impossible for the target to not become a victim. If the infection vector uses a common operating system application or outdated software, then there’s a good chance of infection. In any case, they can try as many times as is needed to achieve their goal. So, based on this pre-work, the spyware is installed on the target’s device. Once installed, the software has access to the deepest internal workings of the device, and this means that it can do pretty much whatever the legitimate user could do. The device then establishes a connection with a server controlled by the spy, and the spyware on the victim’s device sends the spy the data that it requests until such time as the connection is cut. So, what is different with Pegasus and other pieces of malware? Or why did I say that Pegasus wasn’t as big a story in the cybersecurity community as it was everywhere else? I think it comes down to these three points. Software, malware, spyware are all fundamentally the same thing. It’s all software. It’s the intent that is different. Are they being used for legitimate or illegitimate purposes? Is it spying, or is it providing a useful or desired service? This, of course, is subject to interpretation and very much depends on your context and point of view. From a technical perspective, though, if you want it to be possible to access people’s data for legitimate purposes, then it will be possible to access the same data for illegitimate purposes. If you want to protect the general population’s privacy, then the vast middle, the vast majority of whom are not criminals, then you will inevitably be protecting some of the bad people’s privacy as well. The second point is that the beauty, somehow, of software and hardware is that they do not discriminate.
If it works on my phone, then it probably works on yours. And similarly, if my phone is vulnerable, then it is likely that yours is as well. And why does this matter? I think it matters because it means that even if a piece of software is built for a very specific purpose, as is claimed with Pegasus being used for national security on very specific targets, it will work on a broader scope. And technical measures are not the solution to ensure that this broader scope is protected. That needs to come from somewhere else, for instance, through regulations. And the third point is that there may be more that software and security providers in general could do to protect us from malware. But it is generally accepted that perfect security is not and will never be possible. This is just the reality of life, and therefore technical controls will only be able to take us so far. The rest, again, probably needs to be controlled through other, non-technical means. Now, I may have painted somewhat of a bleak picture, but there are some positives that we can talk about, and that is the existence of security- and privacy-focussed technologies that are designed specifically to protect people. These will likely come up in your future discussions, and when they do, I’d suggest keeping this in mind. These technologies generally have what we call a defined threat model. Basically, what this means is that they function under the assumption of a given threat with its motives and capabilities. So, just to give you an example of what I mean by this: end-to-end encryption is one of these technologies, and it protects data in transit between the source and the destination. This means that it protects the user from someone intercepting the message in flight and being able to read it. If you want to protect yourself from a relatively weak attacker in this case, then relatively weak encryption is good enough.
If you’re concerned about a more advanced attacker, then you will need stronger cryptography. But what about the entity providing the end-to-end encrypted service itself? Well, if they have the keys, then they can decrypt the communication between the source and the destination. And if you don’t want that to be possible, then the source and the destination need to hold their own keys. This is what Protonmail does, for example, and we call this zero-access encryption. So, what you can see here is that even with technologies that may seem to be pretty much the same, at least at first glance, and sometimes even go by the same name, the threat models are very different and they protect users from different threats, from different actors. And this is a fundamental difference in the business models, as in the second case the user’s data cannot be monetised. So really, if we want to have a discussion on providing real security and privacy to the general population, then we probably need to have a conversation about business models and their impact on all of this. But this may be a discussion for another day. A VPN also has a different threat model. So, very quickly: with a VPN, the provider basically masks who you are to the websites that you’re visiting. So, the website doesn’t know who you are, perhaps, but the VPN provider does. So, in this threat model, the service provider is trusted. And the reason I’m emphasising this is that it’s important to have an idea of what the threat model for the technology you’re using is, to understand whether it can help protect you from whatever it is that you’re interested in. In this case, perhaps Pegasus. If Pegasus is on your phone, then essentially an end-to-end encrypted message may not be protected. That’s because in that threat model the source can read the message, and Pegasus is now at the source. So, to conclude, when you think about how to protect from Pegasus, I’d recommend you remember this.
Although there is no silver bullet, and malware such as Pegasus can be devastating once installed on your device, there are technologies that do provide protection from many other threats, such as end-to-end encryption or, even better, zero-access encryption. If one person is vulnerable to some attack or piece of malware, then many others will also be: by intentionally weakening security for some of us, all of us are put at risk. And finally, access to malware is already very much democratised. And I think it’s a mistake to not also democratise the technology that protects us as much as is possible. Thank you for your time.
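The threat-model distinction Ms. Egger draws, that in an end-to-end (and zero-access) model only the endpoints hold the keys while the relaying service sees only ciphertext, can be illustrated with a toy sketch. This is a hypothetical illustration only, not real cryptography and not Proton's implementation; all names and the hash-based stream cipher are invented for the example.

```python
import hashlib
import secrets

# Toy illustration (NOT real cryptography): the two endpoints share a
# secret key; the service in the middle only ever relays ciphertext.
def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key + nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes):
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    # XOR stream ciphers are symmetric: decryption repeats the XOR.
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# Alice and Bob hold the key; the "server" stores only (nonce, ciphertext).
shared_key = secrets.token_bytes(32)
nonce, ct = encrypt(shared_key, b"meet at noon")
relayed = (nonce, ct)                     # all the provider ever sees
assert ct != b"meet at noon"              # no plaintext at the provider
assert decrypt(shared_key, *relayed) == b"meet at noon"
```

The point of the sketch is the trust boundary, not the cipher: whoever holds the key can read the message, which is also why malware on the endpoint itself, like Pegasus, sits inside the protected perimeter.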

Jeroen Lenaers (Chair): Thank you, Ms. Egger, for a very clear contribution. I immediately open up the floor. We have half an hour left in our panel. So, I will start with our rapporteur, Sophie in ’t Veld. And please, colleagues, if there are others who wish to take the floor, please indicate so, so we can add you to the speaker’s list.

Sophie in ’t Veld (Renew): Thank you, Chair. I’ll try and be brief. First, a few questions to Professor Anderson. You say that device hacking is still the best option for law enforcement. And I’m just wondering, because I have no idea of, let’s say, the cost and the effort going into that. But I’m just wondering: buying this kind of spyware that we’re looking at, I’ve been told, is incredibly expensive. We’re talking millions, tens of millions. And surely hiring and training people who can hack a specific device would cost less, would be less of an investment. So, what would the reason be for a government to invest in this kind of spyware rather than in the ability to hack a device? And then secondly, maybe playing devil’s advocate: there was one case presented to us, and I would just like to hear your views. There is one of the biggest criminals in the Netherlands, really a murderer, running drugs syndicates and what have you. A really nasty, nasty character. He was hiding in Dubai, and Dubai is generally not cooperating when it comes to extraditing criminals or suspects. And the police asked the intelligence services to install spyware on this guy’s phone so that they knew where he was. And eventually this helped catch the guy. Could this have been done by just hacking his device, let’s say, if the services had done it themselves rather than buying the spyware? And then my question to Ms. Egger: you said if spyware is being democratised, then so should the protection against spyware. But how do you see that? Because, quite frankly, like most people, I’m a very average user of phones and equipment. I’m just not smart enough to use all sorts of sophisticated protection. I just get by installing passwords and stuff like that, but that’s about it.
And if you say democratise, it means that normal people, normal users, have to be able to use it. So how is that going to happen? Because if people are not aware, if they don’t really feel that they could be a target, then there’s not really going to be much demand. So how do you see this developing?

Jeroen Lenaers (Chair): Thank you. We’ll take the answers in the same sequence, so I’ll first pass the floor to Professor Anderson.

Professor Ross Anderson (Cambridge): Thank you. There are lots of questions about the kind of digital investigative techniques that police forces use, and the great majority of them don’t involve the use of really sophisticated, expensive tools. At the low end, local police forces, for example, entrap gang members by pretending to be 16-year-old girls on social media and saying, I really like men with guns, show me a picture of your gun. Right. And they get the picture of the gun, and the guy is arrested. At the next level up, forensic tools like Cellebrite are used to get information out of the phones of people who are arrested, and these are analysed to figure out the communication patterns with people who haven’t been arrested yet, so that other criminals can be tracked down. At the very top of the tree, of course, you have people who use dedicated criminal means of secure communications, and then you may indeed have high-powered government agencies involved in breaking into those. For example, in cases like EncroChat, which happened a couple of years ago; there’s one of these busts roughly every year. So, there’s a whole ecosystem of things that law enforcement can do, and only in the hard cases do you have to use the high-powered tools. And the regulation of this should be seen in terms of regulation of the whole criminal justice ecosystem.

Jeroen Lenaers (Chair): Thank you, Ms. Egger.

Patricia Egger (Proton AG): Yes, thank you for the question. I think it is a very, very good question. I don’t mean that by democratising the protection we need to push the effort or the responsibility onto everyday users. The point here is, on the contrary, to support other ways of doing things, to support the companies that are trying to do things in a different way, in a way where they have less access to your data by default, so that you have privacy by default. And so, not by creating impediments to these companies, which are generally smaller than the big techs, but by supporting them in trying to achieve what they’re trying to go for, and not restricting them.

Jeroen Lenaers (Chair): Thank you. And we pass the floor to Mr. Heide.

Hannes Heide (Socialists and Democrats): Thank you. This is for Ms. Egger. You were saying that Pegasus didn’t come as a surprise and the concept wasn’t new, that there are technologies to protect private citizens, with no access to keys, et cetera, and that these are models that the population can use too. What about the costs of such a system? And do they play a part in the conflict between the need for surveillance and cyber criminality? How is the balance reached with regard to the costs? Thank you.

Jeroen Lenaers (Chair): Yes, please.

Patricia Egger (Proton AG): So, I’m not sure I know what you mean by costs. Are you talking about the cost to the user, or a different cost? The cost to the user, okay. So, as I briefly alluded to, this is just a completely different business model, this way of doing things. Because, for instance, Proton does not have access to its users’ data; it cannot and does not monetise this data. With Proton, people pay for the service. So yes, there is a cost to the user that doesn’t exist with some of the other big-tech free alternatives. But I think our existence has proven that people are willing to do that, that the cost is reasonable, at least to the millions of users that we provide services to. The conflict with the need for surveillance is indeed something that I think needs to be discussed in detail. I mean, we are based in Switzerland, so we are subject to Swiss laws, and we respect those, whatever those may be. But that is, I think, more of a legal question than a technical one.

Jeroen Lenaers (Chair): Thank you. There was not a specific question to Professor Anderson, but if you would like to add something, Professor Anderson? If not, we’ll continue.

Professor Ross Anderson (Cambridge): Well, the security research community is pretty much united on the view that you should not ask ordinary members of the public to do extreme security things to protect themselves. Devices should be secure by default, and ensuring that is certainly a role for the legislature.

Jeroen Lenaers (Chair): Thank you. I have a couple of questions of my own, but I’m looking at the list, yes, Saskia. I’ll first pass the floor to Saskia Bricmont.

Saskia Bricmont (Greens): Thank you, and thank you to both of you. I have a question to Professor Anderson. Can we prevent efficiently, because you put the accent on prevention? Do you think that companies like Apple and Google, whom we just heard, do enough to ensure that security vulnerabilities are closed? I heard from both of you that security will never be perfect. So, what’s the solution then? I really want to hear a little bit more from you on that. And you mentioned, Professor, that we shouldn’t mandate spyware on EU citizens. I fully agree with that, but what do you mean concretely by that? Do you mean that we should forbid governments to do so, that we should sanction, from an EU perspective? Could you also be more precise on this? And then to Ms. Egger: the current evolution is contrary to what you’re recommending, namely to keep end-to-end encryption strong in order to also prevent security issues. Do you have a view on this? Because I also think, and it’s related to what the professor said, that for security reasons there is more and more pressure to allow the breaking of end-to-end encryption in messaging. But you also mentioned at the same time that it is a possibility on the technology side, but it’s not enough to prevent attacks such as Pegasus attacks. You also mentioned that we should democratise security software, but is there any security software that could prevent attacks from spyware such as Pegasus? Because I heard several times that on the technology level there’s no answer yet to such attacks that need no click on anything and so on. So, on the technological level, is there a solution against this kind of spyware? Thank you.

Jeroen Lenaers (Chair): Thank you, Ms. Bricmont. We will take the answers in the sequence the questions were asked. First, Professor Anderson.

Professor Ross Anderson (Cambridge): Great. Thank you, very good question, Saskia. Do Apple and Google do enough? Well, they do a huge amount, but the answer to that is complex, because with Android phones the main problem is that many of the Android OEMs don’t ship patches, and the majority of Android phones out there at any one time are insecure. The reason for this is that the typical Android phone vendor only keeps on updating Android for as long as that particular model of phone is on sale, and as soon as they’re selling a newer one, they stop shipping updates. Now, Google has done all sorts of stuff to try and fix this, such as moving lots of the functionality into the browser, which can be updated automatically from the Play Store. But for years, when I used Android phones, I would only ever buy the Google-owned brand of Android phones, because they would be updated. The underlying issue here is: how long is a particular product going to be patched? And it’s the same as the issue that the European Parliament was wrestling with three years ago over patching things like cars and washing machines. The way to fix this, well, the way that many governments are now working on fixing this (the British government, for example) is mandating death dates on devices, so that if you sell devices with software, you’ll have to put something on the label saying that this device will have its software updated until such-and-such a date. And UK security requirements for companies that sell to the government have got this as a requirement, which, curiously enough, clashes with how some companies do things. And as for whether we should mandate spyware, I’m particularly referring to the proposal to do client-side scanning of both videos and text, with a view to finding initially child sex abuse material and then later, according to the documents, terrorism.
Now, this is not going to work at all in messages, because if you try to pick out dubious text in messages, whether it’s terrorist conspiracies or grooming of persons under 18, that’s a very, very difficult natural language processing task, and your error rate is going to be several percent. Now, if you are going to be processing a billion text messages a day within the EU, or maybe a bit over 100 million in the UK, then your false positive rate is going to be such that it will completely overwhelm the authorities, and the technology is completely non-functional. So, if you build something like that in, if you mandate companies like Meta to build into WhatsApp a mechanism whereby suspicious text is reported to a law enforcement service, you are just building a surveillance mechanism which will of course be taken over and abused by others somehow, someday.
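The scale argument Professor Anderson makes is simple base-rate arithmetic, and can be checked with a short sketch. The figures below are illustrative only: the billion messages per day is the EU-wide volume he mentions, 1% is an optimistic reading of his "several percent" error rate, and the number of true hits per day is an invented assumption.

```python
# Illustrative base-rate arithmetic for client-side text scanning.
messages_per_day = 1_000_000_000    # rough EU-wide volume cited in the hearing
false_positive_rate = 0.01          # optimistic end of "several percent"
true_hits_per_day = 1_000           # hypothetical number of real cases

false_alerts = int(messages_per_day * false_positive_rate)
precision = true_hits_per_day / (true_hits_per_day + false_alerts)

print(f"False alerts per day: {false_alerts:,}")
print(f"Share of flagged messages that are genuine: {precision:.4%}")
```

Even with these generous assumptions, the scanner produces ten million false alerts per day and roughly one genuine case per ten thousand flags, which is the sense in which the reporting pipeline would be overwhelmed.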

Jeroen Lenaers (Chair): Thank you.

Patricia Egger (Proton AG): Yes. So, on your question of end-to-end encryption and the pressure to end it: indeed, it’s something that’s happening, and it’s something that I think many are pushing back against, because we see the issues, although I think we understand where the request comes from. It is something that we don’t believe is in anybody’s, or in very few people’s, best interest. And about Pegasus, and whether anything could actually be done: maybe just to mention that I’m not an expert on Pegasus, I don’t have first-hand experience with it, but it is one of the more advanced pieces of malware that exist out there. And to the extent that we understand what it was able to do, perhaps end-to-end encryption there would not be the solution. But that doesn’t mean that it wouldn’t be valuable, because for many other types of attacks, for many other types of malware, it does protect people. And so, we shouldn’t say, well, it’s not able to protect you against everything and so it’s not useful. On the contrary, it’s useful in a huge number of situations. And that needs to be, I think, front and centre, particularly when we’re getting pressure to not be able to provide it: encryption.

Jeroen Lenaers (Chair): Thank you. Thank you very much. I have a couple of questions myself. First, to Professor Anderson: you posed an interesting question in one of your slides, saying how can you navigate the equities issues when the permits or the authorisations are handed out by the Member States’ governments? I think it’s an interesting question, and I would also like to invite you to maybe come up with an answer to it, because I think it could be very helpful in our work here. You hinted at issues like export control regimes. Is that the most useful, most efficient way of doing it? Or are there other ways that we could solve these equities issues at the European level? Another interesting thing you mentioned is: let’s let victims sue the vendors. Now, of course, the issue is often that the vendors are quite well-resourced companies with a lot of experience in legal issues, while victims are often not. So, is there a way that we should help them? And how do you see the difference with the US, where there is more of a culture of suing for significant damages, something that we don’t necessarily have in the EU? And is it up to the victims to sue these vendors, or is, for instance, a lawsuit like the one WhatsApp undertook against the NSO Group, which we mentioned before, more effective, because they also have the resources to do so? And then, very interesting what you said about the Android issue, because we’ve previously also had speakers here who said that Android is a bit more difficult for companies like NSO Group to target, at least to provide spyware for, because there is such a great diversity of Android operating systems on different phones, and it makes it less and less worthwhile to develop one sort of solution for all operating systems on all phones. So how would you look at that?
And then one question which is not really the topic of today’s conversation, of course, but the proposal on child sexual abuse from Commissioner Johansson: does your opinion apply to all of it? Because it basically goes about scanning for known material, scanning for new material, and doing textual analysis to prevent grooming. Does your concern relate to all three of these issues, or would there be a difference for, for instance, scanning for known material that already exists in the datasets of Europol? Or does that make no difference at all? And to Ms. Egger: maybe I misunderstood this in the beginning, but you said something about Protonmail also being a protection against Pegasus in a way. If so, could you maybe elaborate a little bit more on how that works? And another question: we’ve heard from three of the biggest companies in the world, Meta, Google and Microsoft. They basically say that zero-day vulnerabilities are endemic in software production. So, would that mean that the products that you offer would implicitly always have such zero-day vulnerabilities as well? And what do you do in order to prevent that? How much of your resources do you dedicate to detecting and patching these kinds of vulnerabilities? So, maybe first the floor to Professor Anderson.

Professor Ross Anderson (Cambridge): Great. Thank you, Chairman. Let me just walk back through that list in order. On the CSA proposal, known material versus new material versus grooming: known material you can target using something like PhotoDNA. But the problem is that PhotoDNA records basically consist of 26-by-26 greyscale thumbnails of the abuse images. So, you can’t go and put these on people’s phones, because they identify some victims, and owning them is essentially a criminal offence. So, unless you try to do stuff like upload filtering, you’re eventually driven back to referring things to a server for checking. New material and grooming are much harder, because that means you’re using machine learning models to do the analysis, which have very high error rates, and the most that you can reasonably do there is something that acts a little bit like the wake-word detector in your smart speaker. So, just as you say, OK Google, and then Google’s services start listening, something in your phone might say: this looks a little bit like a porn image, there’s too much pink skin, so I’m going to refer it back to a police server for inspection. And of course, once it goes back to a police server for inspection, the police can change their software as they wish. So, the CSA proposal, I think, is something that is really, really misguided, and we all know that this is driven by the agencies anyway. If your concern is for child welfare per se, there are other things that you would do, but that’s a separate conversation. Next point: is Android harder to hack than iPhones? I don’t believe that at all. When I wrote the most recent edition of my security engineering book, I looked closely at both Android and iPhone, and I actually switched from Android to iPhone.
The fact is that Android is based on Linux, which is relatively well known, and although it’s hardened in one or two ways, there’s always a steady stream of zero-day vulnerabilities arising for Android phones, which will get patched if you’re using a Google phone, but if you’re using a phone from another OEM, will eventually not be patched. And this means that Android phones become relatively easy to hack, and most of the Android phones in use at any one time are completely vulnerable. On suing the vendors: it’s great if WhatsApp will sue the cyber-arms vendors, but I observe that there appears to be only one such lawsuit going on, and if victims found it easier to sue the vendors, then there would be dozens of such lawsuits. Whether they would work would depend, as a practical matter, on what the rules on legal costs were in your particular country. In Britain, it’s difficult, because if you sue a rich company and you lose, you will typically be ordered to pay all their costs and you’ll be bankrupted. But I understand that in some EU Member States the amount of costs that you would have to pay in such a circumstance is very much less, and it would allow people to sue. Still, it would create a climate of greater uncertainty for the vendors, and it may perhaps lead some of them to say that their products cannot be used against targets in the EU, just as NSO, for example, was attempting in the last year or so to stop its products being used against targets in the USA and in the UK. Finally, on the issue of export control: I’m no longer entirely up to speed with what’s been going on there, other NGOs have been lobbying harder on that, but I believe there are things that the EU could do, because after all, it’s the dual-use regulation that is the main thing that is used for export control purposes.
So European institutions might, for example, insist on much tighter reporting practises, and could insist on publishing details of all the export licences that were given for particular types of dual-use goods. And in the case of spyware, for example, there could be a rule that all licences had to be published. You might understand that in the case of exports of some military goods, Member States would want these exports to remain secret. But for spyware, surely you could say: we need transparency here, and we won’t tolerate secret exports of spyware by EU companies to repressive regimes.
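The known-material matching Professor Anderson describes can be sketched as a toy "average hash": downscale an image to a tiny greyscale grid, threshold each cell against the mean, and compare hashes by Hamming distance. This illustrates only the matching principle; PhotoDNA itself is considerably more elaborate, and the tiny 2-by-2 grids below are invented for the example.

```python
# Toy perceptual hash in the spirit of known-material matching.
# Small edits (recompression, slight brightness changes) leave the
# hash unchanged, so altered copies of a known image still match.
def ahash(pixels):
    """pixels: 2D list of greyscale values; returns a tuple of bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [220, 30]]
recompressed = [[12, 198], [215, 35]]   # slightly altered copy
unrelated = [[200, 10], [30, 220]]      # different picture

assert hamming(ahash(original), ahash(recompressed)) == 0  # still matches
assert hamming(ahash(original), ahash(unrelated)) == 4     # no match
```

Note how this also illustrates his objection: the hash database itself encodes low-resolution thumbnails of the source images, which is why it cannot simply be shipped to every phone.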

Jeroen Lenaers (Chair): Thank you. And then Ms. Egger.

Patricia Egger (Proton AG): So, the question about Protonmail and its protection against Pegasus. Again, I’m not a technical expert on Pegasus, so I just want to make sure that I’m not overreaching. But what I can say is that the difference, for instance, between Protonmail and some other email providers is that the company doesn’t have access to anyone’s keys or anyone’s messages, and the messages will be encrypted at rest on your device if you’re using it. So typically, if your device is powered down, or if you’re not logged in, then it may be a protection against some malware like Pegasus, because they would still need to get authenticated and open the application in order to decrypt the messages, in order to be able to read them, for instance. However, once you are using your phone, if you’re logged into your application, if you are yourself reading your messages, then it’s a bit of a different story. So, there is still a certain level of protection that you do get that you wouldn’t have with a standard email provider. I hope that answers the question. And then the other one was: our software is software as well, is that not also vulnerable to zero days, and what do we do about that? So, of course, our software is also software, it’s also created by human beings, so of course there is potential for vulnerabilities and zero days and whatnot. But the fact remains that we still do not have access to our users’ data, and that is something that still gives our users a decent level of protection. We also have a product security function within Proton that aims to address all of these questions. So, we have groups of what we call security champions.
So, developers within all of the teams that develop the products, who exchange on best practises, on their experiences, on things like that, in order to continuously improve their development and our products. We organise pen tests, penetration tests, sorry, where we have independent people come in and try to see if they can do damage. We have a bug bounty programme as well, which is open to the public. So, if anyone wants to try to attack our products, they’re most welcome to do that, and if they find something, we will reward them. So, there are many things that we do to try to find things before anyone else does. We actually have our product security manager in the room today, if you have more specific questions, then he might be able to answer. But yeah, there are many, many different things that we try to do in different areas in order to have the most secure products that we can have.

Jeroen Lenaers (Chair): Thank you. Thank you very much, both to you, Ms. Egger, and to Professor Anderson, for your time with us today. I think it was very valuable, not only on the topic that this committee of enquiry is set up for, but also on other related European legislative procedures. So, I would just kindly invite you to keep an eye on the work that we’re doing and to keep sharing your expertise and your experience with us as we navigate our own ways to solve these issues, including the equities issues. So, thank you all very much. Thank you to all the members who participated. We have managed to conclude exactly on time, so thank you all for your participation. Thanks once again to all the speakers of today. Our next meeting will be on Tuesday, the 21st of June, at 3:00. And for the coordinators, we’ll see each other this afternoon. Thank you all very much.
