2023 Security Predictions & Trends for DevOps

Earlier this year, JFrog’s Security Research Team performed an in-depth analysis of the top 10 most prevalent vulnerabilities of 2022 and found that the severity ratings of most CVEs were surprisingly OVERRATED. Why, you ask? We go deep into the data to show you why. Here’s a teaser of what you might learn:

  • Why 6 of the top 10 CVEs had a high CVSS score, but were LOW impact
  • Of the top 50 CVEs, 64% were overrated
  • Of the top 50 CVEs, 10% were underrated

Vulnerabilities should be assessed by their real-world impact and by how exploitable they are in your local environment, not just by their severity score (important as that is). If it takes 246 hours to remediate a security issue, companies can’t afford to just “fix everything.”

In this webinar, we discuss how organizations can make better decisions, build better processes, and use better tools for their DevOps security initiatives in 2023.

Transcript:

Speaker 1:

First of all, thank you everyone for joining our webinar today on Lessons Learned From 2022’s Most Prolific Vulnerabilities. I want to introduce our two speakers, Nati Davidi, and Shachar Menashe.

Shachar will start off by telling us a little bit more about what we learned from the vulnerabilities in 2022. And then Nati is going to build on that and tell us a little bit more about what to think about in 2023, and how to look at security. With that, I’m going to pass it off to you guys.

Nati Davidi:

Thanks, Aviaj, and thank you everyone for joining us today. Before jumping into the main topic of this gathering, the high-profile CVEs of 2022 and what we have learned from them, I’d like to give a bit of context on why we are doing this and what we’ve been doing over the last year, 2022 specifically, to help the community.

As you can see here, we at JFrog are highly focused on software supply chain management and security. When you look at the software flow here, from curation through creation, packaging, promotion, distribution, deployment, and production, the attack surface is, I wouldn’t say endless, but relatively big. And when you look from the attacker’s point of view, the threat actor’s point of view, at what can be done in each of these steps, even in a naive way, you can see that there are so many entry points, and so many attack vectors that abuse the CI/CD pipeline and the journey of the software in ways that will later be exploited in production.

It starts with malicious packages, then moves through the exploitation of 0-days, known CVEs, secrets, misconfigurations, and other ways to tamper with binaries. All of these processes allow the attacker to gather information that is eventually used against production. By the way, just yesterday we published yet another piece of research on a new approach by attackers of injecting malicious code through NuGet packages, which I believe is the first time a research arm officially found this, and it happens on a daily basis. Each of these attack vectors is a world of its own, but the point is that it becomes overwhelming to deal with all those findings. We have so many tools, so many approaches, and so many different stages where you scan for them, prioritize them, and deal with them.

That, of course, creates frustration. There are so many security tools, and it takes time to integrate them. In the good case, it’s just about missing blind spots, things that cannot be covered by typical source code analysis. In other cases, many things simply will not be taken care of because developers and DevOps engineers are flooded with so many security issues. So eventually, security is not good enough, development velocity is heavily impacted, and there is a lot of frustration. At the end of the day, we are missing many important things because of the wrong approach.

But we also invest in many wrong things that don’t necessarily need to be fixed. And this is exactly what we’re going to talk about today. Shachar is going to cover how we suggest analyzing CVEs in the context of real-world impact, and doing it in an automated manner when possible. This is, of course, not an obvious thing to do. It’s part of what we are trying to do at JFrog these days. And with that, I will hand over to Shachar to start the overview of the 2022 findings.

Shachar Menashe:

Yeah, thank you very much Nati, and thanks everybody for joining us. So like Nati said, what we are basically seeing is that there’s just a lot of noise, a lot of focus on things that shouldn’t actually have the focus. And we wanted to understand, from 2022’s vulnerabilities, what the common trends were: were the most common vulnerabilities really severe, in how many cases were they really exploitable, and what are the trends for next year’s vulnerabilities? So the first and foremost research goal was discovering the most common vulnerabilities of 2022, the vulnerabilities that occurred in as many artifacts as possible.

Then, mapping the trends for these vulnerabilities and drawing key findings from them. And we’ll share these key findings here, along with some actionable steps for working better with vulnerabilities in 2023. And a forecast for 2023: which components are going to be the most vulnerable, and which components are actually going to be less vulnerable? These were our research goals when we came to write this report, the most prolific vulnerabilities of 2022.

Basically, we’re going to go over the report at a high level and share the key findings, and also show later where you can find the entire report, which is much more detailed. The nice thing about this report, and our methodology in general, is that we have a very good and unique source of data: JFrog Artifactory and JFrog Xray. We’re not just downloading random open source images, let’s say, and checking them for vulnerabilities. What we’re doing is using the anonymous usage data of Artifactory. Because a lot of our customers are Fortune 100 companies and very prolific enterprises, this usage data presents a mini universe of which vulnerabilities are actually prevalent in enterprise companies.

So we can use this unique data to get our own unique count of vulnerabilities. Basically, the methodology was to take all the artifacts that, again, were analyzed in an anonymous way, determine the number of CVEs in each of these artifacts and which CVEs we found, then total them up, count them, and do deep analysis on the top 10. We limited it to the top 10, and only to vulnerabilities that were discovered in 2022, because we wanted to see recent trends, not just all vulnerabilities. And it’s across all industries; we didn’t segregate by industry.

Okay. So first, in the full report you can see what all of the top 10 vulnerabilities were, and a deep dive into our analysis of each and every one of them. But that’s a multi-dozen-page report, so what we wanted to do today is go over the key findings and the most interesting examples of each finding. Again, these are the key findings from analyzing those top 10. The first key finding was actually very surprising to us. The CVEs that appear the most within enterprises are actually not high-severity issues, or even issues in very common components. They’re actually low-severity issues, but because they’re low severity, they were never fixed. And because they’re never fixed, each new version adds to the vulnerability count, which keeps rising and rising; the count for these vulnerabilities never decreases.

So how does this actually happen? We’ll see an example. As you know, the maintainers of large projects like Red Hat, Debian, and Ubuntu perform their own CVE analysis; they don’t rely on NVD’s analysis. A lot of the time, these maintainers understand that a specific CVE is just not relevant for their project: there could be a general CVE that is relevant, for example, for Apache, but not for the version of Apache on Red Hat. And in some cases it could be relevant, but they deem it a minor issue and never fix it. Because it’s never fixed, the count of these CVEs never goes down. Each new artifact that is uploaded, or each new image that you scan, will still have this CVE because it’s never fixed.

And what we discovered is that the threat of these CVEs can be misleading, because a lot of the time they get a high CVSS rating, but in reality the impact is low, which is exactly why they were never fixed. For example, we can look at CVE-2022-29458. It’s a denial-of-service CVE in ncurses, a very popular library. And you can see from NVD that it appears to be substantial: it got a high severity rating. This was the second most common CVE that we saw throughout all of 2022, meaning it affected the second-largest number of software images and artifacts.

So even though it got a high severity rating, in reality the CVE was so minor that even though Debian, for example, was vulnerable to it, they just chose not to fix it. They said, “Ah, okay, it’s a minor issue, we’re not going to fix it ever.” And this creates a lot of noise, because it’s a default component that comes with Debian, so a very naive security scan will tell you that Debian is vulnerable by default. But actually the issue is so minor that they decided not to fix it, and that’s why it stays vulnerable: you can’t upgrade to any fixed version because there is no fixed version. It just stays there and creates noise. We’ll see a bit later how to deal with these. If you were only looking at the NVD and the CVSS score, it would be very misleading.
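This “open forever at low urgency” pattern can be spotted mechanically. As a rough sketch, assuming data shaped like the Debian security tracker’s public JSON export (the field names are modeled on that format and the record below is illustrative, not a live query), one could flag CVEs that a distribution has left open at “unimportant” urgency:

```python
# Sketch: find CVEs a distro has triaged as too minor to ever fix.
# The data shape loosely follows the Debian security tracker JSON export
# (package -> CVE id -> per-release status/urgency); it is an assumption.

def unfixed_low_priority(tracker: dict, release: str) -> list:
    """Return 'package/CVE' entries that are open but marked unimportant."""
    hits = []
    for pkg, cves in tracker.items():
        for cve_id, info in cves.items():
            rel = info.get("releases", {}).get(release, {})
            if rel.get("status") == "open" and rel.get("urgency") == "unimportant":
                hits.append(f"{pkg}/{cve_id}")
    return hits

# Illustrative record mirroring the ncurses example discussed above.
sample = {
    "ncurses": {
        "CVE-2022-29458": {
            "releases": {"bullseye": {"status": "open", "urgency": "unimportant"}}
        }
    }
}
print(unfixed_low_priority(sample, "bullseye"))  # ['ncurses/CVE-2022-29458']
```

Any CVE surfacing in this list is exactly the kind of permanent, low-impact noise described above: a naive scanner will keep reporting it, but no fixed version will ever exist.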

The second key finding, which is related but deserves a point of its own, is that the public severity ratings are extremely overinflated, since they ignore the real-world impact of the vulnerability. We can see this in a lot of ways, and we’ll look at some right now. We know this because, as I said, our security research team does its own research on the vulnerabilities and understands their real impact. So let’s see some examples. You might be familiar with the CVSS impact metrics: a CVSS score says whether the vulnerability causes confidentiality loss, for example a data leak; availability loss, for example a denial of service; and integrity loss, which usually just relates to remote code execution, basically.

So for example, a remote code execution vulnerability would be rated with high loss of integrity, high loss of confidentiality, and high loss of availability. But what we discovered is that these metrics are very much assigned at face value by NVD, and in most cases there’s no deep research done. For example, if we have a DoS vulnerability that crashes an important daemon, we would say, “Okay, that probably deserves a high availability rating because it’s actually severe,” and that’s fine, that happens. But the problem is when there’s a DoS vulnerability that actually crashes a forked process. Say I’m opening a text editor: it’s a completely separate client process, and there’s a vulnerability that crashes it. That’s not actually so bad, because it crashes a client on my own PC, it’s not even a network client, and since it’s a forked process it’s not going to crash any other process. But that will also get a high rating, no questions asked.
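The three impact metrics discussed above are encoded directly in a CVSS v3.1 vector string, so they can be read mechanically. A minimal sketch, assuming the standard vector format from the CVSS v3.1 specification:

```python
# Extract the Confidentiality/Integrity/Availability impact metrics
# from a CVSS v3.1 vector string (format per the CVSS v3.1 spec).

IMPACT_LABELS = {"N": "None", "L": "Low", "H": "High"}

def impact_metrics(vector: str) -> dict:
    """Map the C/I/A components of a CVSS v3.1 vector to readable labels."""
    parts = dict(p.split(":") for p in vector.split("/") if ":" in p)
    return {name: IMPACT_LABELS[parts[key]]
            for key, name in (("C", "confidentiality"),
                              ("I", "integrity"),
                              ("A", "availability"))}

# A typical remote-code-execution style vector: everything rated High.
rce = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
print(impact_metrics(rce))
# A pure denial-of-service vector: only availability is affected.
dos = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"
print(impact_metrics(dos))
```

The point made in the talk is that these fields are often filled in reflexively (every buffer overflow gets H/H/H), so reading them out is easy; judging whether they reflect reality still takes research.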

There is no CVE that we saw in 2022, or even earlier, that got a low availability loss rating for that. So that’s a bit of a dissonance from the real-world impact. A second example: if there’s a buffer overflow vulnerability, it will immediately get the highest rating for every parameter: confidentiality, integrity, and availability. It’s very rare that it won’t. But it could be a buffer overflow that doesn’t allow the attacker to overwrite any meaningful variable. Sometimes it’s a limited buffer overflow of one or two bytes, where the attacker can only write right past the end of a buffer and cannot reach any variable. It may not even lead to availability loss, but as long as it’s a buffer overflow, it will get a high rating on each and every one of these metrics.

Because we do our own research, we look out for these things, and we started to map our own perceived severity against the NVD severity. What we saw is that 64% of the top 50 CVEs of 2022, and by top 50 I mean the 50 most common across all the artifacts we saw, received a lower severity rating from us. So more than half of those vulnerabilities were downgraded, either from critical severity to high, or from high to medium, and a lot of the time even lower. These are all mapped here, and all of this data is available in the report. The important thing is that this shows it’s not a very specific problem; across basically all the CVEs, the CVSS is getting very overrated.

A very recent example of this lack of research, of just blindly assigning a score, is one that most of you, as security professionals, are probably familiar with: CVE-2022-23529. It’s a CVE in a Node.js package, an npm package called node-jsonwebtoken. It’s the most popular parser for JSON Web Tokens, so as you would imagine it is extremely popular: it gets downloaded 12 million times a week. And this vulnerability received a CVSS of 9.8 and was marked as a remote code execution vulnerability. So this seems very, very bad, right? There’s a CVE with an almost perfect CVSS in a very popular component. You might think this is something you need to fix immediately. But we were actually looking at this vulnerability, and we think that in reality, this issue probably didn’t affect even a single production system.

This is what we wrote on Twitter the day it came out. What we realized is that to actually exploit the vulnerability, and this is something that isn’t taken into account in the CVSS, the attacker needs to control a function argument that they would never control. Specifically, the attacker would have to control the secret key that is used for verification. But obviously they won’t be able to control that, because if they did, the verification would be useless; it’s not a realistic scenario. It’s something that will never happen in production. And this got so much feedback and pushback that the CVE was actually taken down about a week later. This is a rare case: in most cases a CVE just remains critical and is never taken down or discussed. It only happened here because this package was really that popular, so the pushback was very large.
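To see why an attacker-controlled verification key makes such a finding meaningless, here is a toy HMAC-based sketch (deliberately simplified, not the node-jsonwebtoken code): anyone who chooses the key can trivially forge a token that passes verification, so a “bypass” that requires controlling the key is no bypass at all.

```python
# Toy token scheme: token = payload + "." + base64(HMAC-SHA256(key, payload)).
# If the attacker picks `key`, "verification" proves nothing by construction.
import base64
import hashlib
import hmac

def sign(payload: bytes, key: bytes) -> bytes:
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + b"." + base64.urlsafe_b64encode(tag)

def verify(token: bytes, key: bytes) -> bool:
    payload, _, tag = token.rpartition(b".")
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(base64.urlsafe_b64encode(expected), tag)

# An attacker who controls the verification key can forge anything:
attacker_key = b"attacker-chosen"
forged = sign(b'{"admin": true}', attacker_key)
print(verify(forged, attacker_key))      # True: check defeated by construction
print(verify(forged, b"real-server-key"))  # False: a real key stops the forgery
```

This is exactly the argument from the talk: the CVSS machinery has no field for “the precondition hands the attacker the whole system anyway,” which is why human research downgraded a 9.8 to a non-issue.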

So up until now I’ve talked about problems, but I do want to talk about some actionable items that will let you, as security professionals or DevOps engineers, actually fight back against both of these phenomena. This is also something we outlined in the report in more detail, basically to help the community. And these are the methodologies that we use as well, so we can vouch for them.

So, how to deal with overinflated CVSS scores. One of the ways, and this is what we’re going to focus on now, but the report has more, is to look for alternative severity ratings. Don’t take the NVD rating at face value; there are other places you can look. First of all, check NVD itself, specifically the NVD site and not [inaudible 00:18:55] or other sites that just copy the CVSS. All of the external sites display the [inaudible 00:19:04], sorry, the NVD CVSS, the score given by NVD.

What we usually saw is that the score given by the original CNA, the CVE Numbering Authority, basically the vendor involved with the vulnerability, is usually more realistic, because they’re the ones who usually did the research. They have a very thorough understanding of what the vulnerability is. So if you go to NVD and the CVE was reported by an external CNA, you can see both scores, and what we’re saying is that you should probably rely more on the CNA score than on the NVD score.
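Separating the two scores can be automated. A sketch, assuming the response shape of the NVD CVE API 2.0, where NVD’s own analysis is tagged "Primary" and the CNA’s is "Secondary" (the sample record below is made up for illustration, not a real CVE):

```python
# Split the NVD-assigned CVSS base score from the CNA-assigned one.
# Field names are assumptions based on the NVD CVE API 2.0 metrics format:
# each entry carries "type" ("Primary" = NVD, "Secondary" = CNA) and cvssData.

def scores_by_assessor(metrics: list) -> dict:
    """Return {'NVD': score, 'CNA': score} from a cvssMetricV31-style list."""
    out = {}
    for m in metrics:
        who = "NVD" if m["type"] == "Primary" else "CNA"
        out[who] = m["cvssData"]["baseScore"]
    return out

sample = [  # illustrative only
    {"source": "nvd@nist.gov", "type": "Primary",
     "cvssData": {"baseScore": 9.8}},
    {"source": "vendor@example.com", "type": "Secondary",
     "cvssData": {"baseScore": 5.3}},
]
print(scores_by_assessor(sample))  # {'NVD': 9.8, 'CNA': 5.3}
```

A large gap between the two numbers, as in this made-up record, is the signal the talk describes: the party that actually researched the bug disagrees with the face-value rating.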

The other thing, and in this case it can make a huge difference: if you’re working on a specific distribution, for example in Linux on Ubuntu, Red Hat, SUSE, etc., it is much better to look at the analysis done by that distribution’s security tracker than to rely at face value on NVD’s severity. For example, here we can see the same vulnerability, but NVD gave it a high rating and Red Hat gave it a low rating. And in this case it’s not even because the CVE isn’t relevant for Red Hat; it is relevant. It’s just that their security team understood it’s much less impactful than NVD said. Again, that’s because they have dedicated security teams that analyze these things. They handle fewer CVEs than NVD and they have a dedicated team, so we trust them more. And across all our years of research, we saw that they are more on point than the NVD severity.

And the last score you can take into account, which is maybe a bit less known: the biggest open source software projects usually maintain their own database of vulnerabilities affecting their own projects. For example, here we can see an example from cURL, a very popular network client. There’s a vulnerability that they rated as low but NVD rated as high, so that’s a very big difference. Again, we recommend trusting the project’s severity score more, because the project developers are the ones who fixed the vulnerabilities. They did all the due diligence, they did the triaging, they really know what’s going on there, and they’re not afraid to give critical ratings either; there was an OpenSSL one recently. Their assessments are simply much more thoroughly researched by the project maintainers.

So again, since we wanted to aggregate things and be helpful to the community, we built a list of all the biggest software projects that have their own severity ratings, and where you can find those ratings. I’m showing it here, but the easiest place to find it is actually the security report. The full report is very easily accessible: just search Google for “JFrog security research report 2023” and it will be the first result. There you can see even more actionable ways to decide which CVEs to focus on and which not to, even more than what we shared here. And now Nati will talk a bit about how we can solve this in other ways and save you a lot of trouble.

Nati Davidi:

Thanks, Shachar, for reviewing the findings. As Shachar mentioned, you’re all welcome to visit our research.jfrog.com website or search for the research. Just to reiterate quickly: we see here two examples, out of many, of cases where there are allegedly very severe CVEs that cannot really be exploited in the real world, or, vice versa, cases where low-ranked CVEs might be exploited and will never be fixed. This goes back to the point about frustration; it creates tons of workload and overhead. And this is only under the CVE topic. Going back to the software supply chain flow, we have many other risks. What we are trying to do in our security research arm is to continuously and quickly research any newly introduced CVE, chase malicious packages ourselves, learn about package configurations, aggregate all of this data in a proper manner, and make it available to the community, and of course to our customers, through the products.

So when you look again at the entry points here on the screen, each and every one of these security buckets gets its own attention from our security research arm, so we can provide this information properly. Beyond that, at JFrog, what we’re trying to do is cover software supply chain security in the widest possible manner by introducing scanners and analyzers for each and every one of the steps on this screen. And the methodology actually works on three levels: one, analyzing in every phase; two, validating in every subsequent phase to make sure that what you already analyzed is still the same; and three, recording, tagging, and signing things to make sure they are not manipulated. This entire approach can be achieved by using the JFrog platform. We won’t get into the full details right now, but I do want to introduce the security part of it.

The first one, the basic security package, which can easily be used, tested, or demonstrated by accessing our website, is what we call Xray. Xray today is an enhanced software composition analysis solution that allows identifying CVEs, malicious packages, operational risk issues, and of course license compliance issues relatively easily, on both source code and binaries. We do both, and we encourage everyone to do both, simply because source is not binary. A binary contains much more than source: things like configuration, even packages that are not installed in the artifact in the regular way, which you miss because you look only at their metadata. So what we offer is a way to look at both the source and the binary. And specifically for CVEs, in the basic package we provide everything that Shachar just shared in a very easy-to-consume manner.

So when you look at a CVE, we always offer the same structure: the title and the CVE number, what it is, the regular CVSS scores, then the JFrog security score with all the reasoning behind it, why we think it’s exploitable or not exploitable, and what prerequisites need to be fulfilled for the CVE to be exploited successfully. And even more important, how to mitigate it, how to deal with it. Not just in the regular manner of “upgrade your package” or “patch it,” but in some cases “use this function instead of the other one” or “change this specific configuration” and render the CVE [inaudible 00:27:31]. This structure is offered for all of the high-profile CVEs through Xray, and this is part of our basic package.

In the advanced package, we actually take the approach that Shachar just introduced, understanding the real-world impact, and run what we call applicability scanners. The applicability scanners take the CVE and check whether the environmental conditions that allow exploitability are met. So if there is a specific configuration in the package that will or will not allow exploitation, the applicability scanners will automatically identify it. For example, if you have a container with first-party code, configuration, and third-party packages, the analyzer will scan all of it together, first party, third party, and configuration, in order to understand the context. It will tell you whether the configuration allows exploitability, and whether the first-party code calls the actual vulnerable function in the vulnerable library in a way that allows exploiting the CVE or not.
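As a heavily simplified illustration of the “does first-party code actually call the vulnerable function” check, here is a sketch for Python source using the standard ast module. Real contextual analysis, as described above, also covers binaries and configuration; the library and function names here are hypothetical:

```python
# Minimal "vulnerable function reachability" sketch: statically check whether
# first-party Python source contains a call like vulnlib.parse_header(...).
# Real applicability scanners work on binaries and config too; this is a toy.
import ast

def calls_function(source: str, module: str, func: str) -> bool:
    """True if `source` contains a call of the form module.func(...)."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == func
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == module):
            return True
    return False

# Hypothetical vulnerable library/function names, for illustration only.
first_party = "import vulnlib\nvulnlib.parse_header(data)\n"
print(calls_function(first_party, "vulnlib", "parse_header"))  # True
print(calls_function("import vulnlib\n", "vulnlib", "parse_header"))  # False
```

If the vulnerable function is never called, a CVE in that library is far less urgent, which is the prioritization argument the whole talk is making.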

This is, of course, a unique approach that can only be achieved completely if you do both binary and source code analysis, and especially binary analysis, because the artifact will necessarily contain things that are missed in source code: configuration is not present in source code, and the interconnection between first-party and third-party code will not necessarily appear in the open source form of the software. Therefore, binaries must be scanned. So the applicability scanners, or contextual analyzers as we call them, are now available through the advanced package and can test your CVEs in your specific real-world context, as close as possible to the production form. Beyond that, we do the same for many other aspects of software supply chain security introduced on the other slides: we can detect secrets in your binaries, which usually show up in binaries rather than source code, because embedding them in source code is the wrong approach anyway.

It will identify misconfigurations and misuse of libraries and services, and will also analyze your Terraform and other infrastructure-as-code artifacts to understand whether they actually allow easy exploitation by bad actors. So again: Xray is enhanced software composition analysis, and the advanced package adds more scanners, the applicability scanners in particular, which finally take the research approach Shachar introduced, knowing how a CVE applies to you specifically, and do it in a fully automated manner.

In this case, you can see the recent sudo vulnerability that allows accessing data that shouldn’t be accessible. You can see that the prerequisite for exploiting the CVE is that the sudoers file contains a very specific configuration, which is relatively rare. Actually it’s very rare, so rare that we didn’t find even one case in production where it was used that way. Only in that case is it exploitable. So as you can see, hopefully on the monitors here, the scanner checks whether the sudoers file has the vulnerable sudoedit directives, and this is done automatically. If it is found, it is presented here in the findings of the contextual analysis, to tell you whether this specific CVE is applicable or not. And with that, we’ll stop here and allow a few minutes for questions from the audience. Shachar, would you like to take the first ones?
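A toy version of the sudoers check described here might look like the following. The real scanner is far more precise about sudoers grammar; this regex-based sketch only captures the idea of flagging active (non-comment) rules that grant sudoedit:

```python
# Toy applicability check: does a sudoers file contain an active rule
# mentioning sudoedit? Real sudoers parsing is much richer; sketch only.
import re

# Match lines whose first non-whitespace character is not '#' (i.e. not a
# comment) and that contain the word "sudoedit" somewhere in the rule.
SUDOEDIT_RULE = re.compile(r"^\s*[^#\s].*\bsudoedit\b", re.MULTILINE)

def has_sudoedit_rule(sudoers_text: str) -> bool:
    return bool(SUDOEDIT_RULE.search(sudoers_text))

vulnerable = "alice ALL=(ALL) sudoedit /etc/motd\n"
benign = "# sudoedit mentioned only in a comment\nbob ALL=(ALL) /usr/bin/less\n"
print(has_sudoedit_rule(vulnerable))  # True: precondition present
print(has_sudoedit_rule(benign))      # False: CVE not applicable here
```

Automating exactly this kind of precondition check is what turns “CVSS says critical” into “applicable or not applicable in your environment.”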

Shachar Menashe:

Yes. So let me see. Yeah. Okay, the first question is, “What’s the percentage of CVEs that you think have an overrated CVSS score?” Good question; this is something we researched. We took the top 10 Docker containers on Docker Hub, scanned them, and saw what percentage of CVEs had a reduced score, and in that case it’s around 70%, which is what I showed. So in this research, when looking at real-world usage statistics from Artifactory it was, like I said, 64%, and from Docker Hub it was around 70%, so quite high.

And the second question is, “What’s the difference between your contextual analysis approach and other vendors’ reachability concept?” Okay, so yes, some other products have features similar to what we call contextual analysis, for example calling it reachability or effective vulnerabilities. I believe we have two main differentiators. The first and biggest one, as Nati said, is the focus on binaries. We don’t just check whether the vulnerable function is called, because for some vulnerabilities that’s not the relevant question. We also check the configuration of the scanned image, the configuration files in the image, and the environment the vulnerability runs on: for example, Windows versus Linux, which has nothing to do with source code. That lets us write scanners for more vulnerabilities, because for some vulnerabilities you will never get an answer about exploitability from source code analysis alone. Of course, we also do the source code analysis, because you have to.

And I think the second part is that we have very good coverage. We have a dedicated team just for writing those scanners, and we even generate some of them automatically. So we put a lot of effort into coverage. For example, on several benchmarks, like OS benchmarks such as [inaudible 00:34:21], we have more than 90% coverage of all the critical CVEs in that image. And we strive to always have more than 75% coverage of the critical CVEs of every image, basically.

“Do you think we’ll see an increase in new CVEs in 2023?” We actually covered that in the report; for some components, yes. For example, SnakeYAML, a very popular YAML library for Java, just got a fuzzer written for it, and the fuzzer hasn’t been running for that long, so I believe there will be a lot of new CVEs from that. In general, there’s just more software, so I think there will be a slight increase globally, but I would look at specific components. I’d refer you to the report to see the most popular components we saw in 2022, and which ones will go up in vulnerabilities and which will go down.

Nati Davidi:

Okay. The next one is, “Do you do malicious package detection in JFrog Xray or JFrog Artifactory?” The malicious package detection is offered as part of Xray today. Just to reiterate, the way it works is that our research arm is continuously developing scanners that are run against every newly introduced package, and we do that by scraping the common repositories out there. We started by doing it retrospectively on hundreds of thousands of packages, found many potential malicious packages accordingly, and of course made the findings publicly available to the community. And we keep chasing each and every new package out there. So what we’ve been doing is building what we claim to be the biggest database of malicious packages today: we have more than 165,000 packages that should not be used, and this information is continuously enhanced. On top of it, we always offer remediation, the easiest possible way to deal with the identified malicious package.
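To give a flavor of what such scanners look for, here is a drastically simplified sketch of a few static heuristics over a package’s install script. Production detection combines many more rules plus dynamic analysis; the patterns below are illustrative only, not JFrog’s actual rule set:

```python
# Toy malicious-package heuristics: flag install scripts that execute code
# dynamically, carry obfuscated payloads, or touch the network at install
# time. Illustrative patterns only; real scanners use far richer analysis.
import re

SUSPICIOUS = [
    (re.compile(r"\bexec\s*\(|\beval\s*\("), "dynamic code execution"),
    (re.compile(r"base64\.b64decode"), "obfuscated payload"),
    (re.compile(r"urllib|requests\.get|socket\."), "network access at install time"),
]

def scan_setup_py(source: str) -> list:
    """Return labels of suspicious patterns found in an install script."""
    return [label for pattern, label in SUSPICIOUS if pattern.search(source)]

malicious = 'import base64\nexec(base64.b64decode("cHJpbnQoMSk="))\n'
print(scan_setup_py(malicious))
# ['dynamic code execution', 'obfuscated payload']
```

The scaling problem mentioned later in the discussion is visible even here: every new package version must be re-scanned, which is why this has to run automatically against repository firehoses rather than on demand.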

“Are your solutions available in the cloud, multi-cloud, on-prem and hybrid?” The answer is that all of the scanners we discussed, which implement the approach that Shachar introduced, are available on all of the clouds, on-prem, hybrid, and SaaS as well. And again, I welcome you to visit our website to see how you can initiate the demo process or the testing process.

“What do you see as the most vulnerable part of the software supply chain, whether…” Well, that’s a big one. Shachar is a security expert; I’ll hand it over to you.

Shachar Menashe:

Okay. I’m also wondering about your answer to it, but for me it’s pretty easy, I think. Currently, the most vulnerable part is actually malicious packages, not even CVEs. It depends: if you’re using npm and PyPI, then I think that’s the most vulnerable part of getting new packages. And the reason is that even with an okay security pipeline (obviously even a perfect one can be evaded), you might still get hit, without any developer mistake.

And the problem with malicious packages is that it’s not something that’s waiting to be exploited, like a vulnerability. Once you install the package, it’s immediately exploited, and there’s no mitigation against that. It’s not like a stack overflow, where there are mitigations. There are attacks like typosquatting, where, okay, the developer needs to make an error himself or herself, so that’s less severe. But there are attacks like dependency confusion and package hijacking, which are still happening, like the PyTorch package hijack we saw recently. Even if you do everything very well, you are still hit by it, and there’s no human error even involved.
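One of the simpler signals for the typosquatting attacks mentioned above can be sketched with an edit-similarity check against a list of popular package names. The popularity list and threshold here are illustrative assumptions, not how any production system is configured:

```python
# Toy typosquat detector: flag a requested package name that is very close
# to, but not identical with, a well-known package. Illustrative only.
from difflib import SequenceMatcher
from typing import Optional

POPULAR = ("requests", "numpy", "pandas", "django")  # illustrative list

def likely_typosquat(name: str, threshold: float = 0.85) -> Optional[str]:
    """Return the popular package `name` most resembles, if any."""
    if name in POPULAR:
        return None  # exact match: it's the genuine package
    for known in POPULAR:
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return known
    return None

print(likely_typosquat("reqeusts"))  # 'requests' (transposed letters)
print(likely_typosquat("requests"))  # None: the genuine package
```

Note this heuristic only helps with typosquatting, where a human typo is involved; dependency confusion and package hijacks, as the speaker says, need different defenses entirely because no misspelling ever occurs.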

And a single attacker can exploit thousands or tens of thousands of users within a couple of days with this kind of attack. So it’s very widespread, it’s immediately exploitable, and sometimes it doesn’t even rely on human error. That’s why I think it’s the most vulnerable part right now. But if you’re not using PyPI and npm, and maybe NuGet, then it’s less severe, I would say.

Nati Davidi:

Yeah, I was about to say the same, Shachar, I swear. Malicious packages are the part that hasn’t gotten enough attention so far. We do see many players out there trying to tackle this issue right now. I think the most painful thing here, from an operational point of view, is that it’s very hard to do this in a scalable manner: to really scan each and every package that you let into a huge organization, and to curate those packages in an automated way. And that’s exactly what we are clearly trying to solve these days, and we will introduce more new capabilities around it soon.

And with that, I would like to conclude the session. Thank you very much again for joining us today. I would like to mention again that a major part of our work is done for the community only, not for commercialization and not in order to sell our products. I welcome you to reach out to us, to consult with us, and to ask us to cover new things, new approaches, methodologies, or techniques that you would like us to cover for the sake of helping the community. And clearly we will always make all of these capabilities available through the platform as well. So thank you very much for your attendance. Have a nice day.

Shachar Menashe:

Thank you.
