Vulnerability History, Databases and Statistics… Face Your Fears [swampUP 2020]

Brian Martin, VP of Vulnerability Intelligence, RBS

July 7, 2020


Delivering Fast + Secure – JFrog Xray and Risk Based Security VulnDB: https://jfrog.com/webinar/delivering-…

Vulnerabilities, those pesky things that you hate to hear about. Let’s hear about them in a different light! Starting recently, just 118 years ago, we’ll start with the first modern vulnerabilities and quickly move through the years to show how long they have been around. Collecting them is simple too, right?! Probably not; let me show you why it is a raging headache and nightmare you wouldn’t wish on that annoying co-worker. We’ll round it out with pretty charts and anecdotes and cupcakes. Well, two out of the three isn’t bad.

Video Transcript

Hello, and welcome to SwampUP 2020!
My talk is Vulnerability History, Databases, and Perspective:
Face Your Fears.
My name is Brian Martin, I am the VP of Vulnerability Intelligence at Risk Based Security
and the content manager of our VulnDB offering.
I’ve been collecting vulnerabilities in some fashion since 1993
and consider myself to be a vulnerability historian.
For this virtual SwampUP, I will answer questions during the talk,
and if you have further questions, visit our virtual booth and I’m happy to continue the discussion.
Computer security is in bad shape.
Apologists will sometimes dismiss this, as our industry being young.
Sure, compared to building pyramids or fire it is,
but compared to the modern car industry like Ford and their Model T in 1908,
our industry is arguably just as old.
And to go with that age, vulnerabilities from back then are still plaguing us to this day.
So how did we get here knowing what we know?
This talk will give you some historical perspective to fully understand and appreciate where we really are.
Let’s start with how this talk came about.
So I read some articles, I read some fun research papers from the early 70s,
I emailed a lot of people, older, smarter, and wiser than I am, including
some who worked on mainframes back in the 60s.
I read a lot of old books for perspective,
and it made me ponder the question, why are we here?
And I am reminded of the quote that may answer,
“Those who cannot remember the past are condemned to repeat it.”
Of course, this spawned a hundred variations that we now attribute to Shepard, Burke, Vonnegut,
and the esteemed Jesse Ventura, but who’s counting?
When talking about vulnerability history, you really have to pick a start date and that may be arbitrary.
Twenty years ago, I would have picked the 1960s.
Fifteen years ago, the 50s.
Ten years ago, arguably the 1930s.
Now, I think the early 1900s is a good starting point.
The important part of choosing such a date is to remember that a vulnerability
is centered around the ability to cross privilege boundaries.
With that in mind, let me tell you a story.
New Scientist printed a story in 2011 about the Marconi wireless telegraph,
prompting me to dig up even more details afterwards.
Invented in the years leading up to 1903,
this device was a significant advance in communication technology,
breaking away from wired lines.
The primary creator was the Italian Guglielmo Marconi,
who had assistants John Fleming and Arthur Blok at his sides.
Marconi, like most security vendors today, said his device was secure and that messages were private.
He boasted,
“I can tune my instrument so that no other instrument that is not similarly tuned
can tap my messages.”
To show off his new device, Marconi set up a demonstration on June 4th, 1903.
He was in Cornwall, 300 miles away from the demonstration,
ready to send a message proving it could travel long distances.
Fleming was set up to receive the message and impress the crowd in the Royal Institution theater in London.
Minutes before the wirelessly transmitted message was to arrive,
the crowd heard a ticking noise from the theater’s brass projection lantern.
Blok recognized it to be morse code and quickly figured out someone was sending powerful,
wireless pulses that were strong enough to interfere with the projector’s discharge lamp.
Mentally decoding the missive,
Blok realized it was spelling one word over and over:
Rats. Rats. Rats.
After that came a string of crude limericks and then personal insults,
leveled at Marconi.
The interruption stopped and then the demonstration proceeded,
but the damage was done.
Afterwards, Marconi wasn’t happy about this and like some security vendors today,
wouldn’t address the critics.
“I will not demonstrate to any man who throws doubt upon the system,” he said at the time.
Fleming, pictured here later in life, wasn’t so rigid.
He sent a fuming letter to the Times of London, dubbing the hack “scientific hooliganism”
and “monkey-ish pranks,” calling it an outrage against the traditions of the Royal Institution.
That is much like security vendors today screaming, “That isn’t a vulnerability.”
Fleming asked the newspaper for help in finding the culprit.
Mere days later, a letter confessing to the hack was printed by the Times.
This fine gentleman, Nevil Maskelyne,
a stage magician and an inventor, had trolled Marconi
to tell him that his notions of privacy were a joke.
His admission in the Times led to several public letters back and forth between Fleming and himself.
Basically, a 1903 flame war.
You arguably have Maskelyne as the world’s first modern hacker,
and like today’s hackers, he was flexing his skills.
So Marconi’s patented technology for broadcasting on a quote, “precise wavelength,”
was shattered before it even hit the market.
Of course today we know this to be simple technology and we tune the radio to a different channel.
This resulted in the oldest two vulnerabilities in VulnDB,
unencrypted transmission of messages for remote man-in-the-middle disclosure
and the ability to spoof messages.
But the story goes deeper.
Maskelyne was apparently hired by the Eastern Telegraph Company to undertake spying operations
on Marconi over a year before this incident.
Maskelyne had even written an article in the November 7, 1902 issue of “The Electrician”
about the very technology, half a year before Marconi’s demo.
The comparisons to full disclosure, wily hackers, and internet trolls are real strong,
and personally I’m waiting on a movie given all the intriguing drama.
The lesson though, is if you say it’s secure or private, verify it first.
Next, we jump to World War II and the use of electromechanical rotor cipher machines for message security,
used on the back of that wireless telegraphy.
The most famous was the Enigma,
invented by a German engineer, Arthur Scherbius.
You may know it from movies such as “U-571,” “Enigma,” and “The Imitation Game.”
During 1932, the first break of a German army Enigma was carried out by three cryptologists in Poland.
Over the coming years, subsequent models of the Enigma would be created and then broken by other countries.
Not just Germany’s; a variety of other cipher machines in other countries were compromised, including Japan’s.
Three models of the Lorenz device, used by the Germans were also broken,
the first initially cracked in 1942.
Believe it or not, we can thank this little beauty for leading to what came after and, eventually, our jobs.
In December 1943, the first electronic single-purpose digital computer was built to crack the Lorenz cipher.
It was called Colossus.
The Lorenz and other devices were compromised during World War II by codebreakers
at Bletchley Park in the UK with the aid of this computer.
Jumping forward a decade, the 1950s gave us the rise of the phone phreaks.
Many forget that early telephone systems were electromechanical machines
that would evolve into huge digital computers.
In 1955, Western Electric’s panel machine switching system had an issue.
Using a modulated signal, someone could seize an internal trunk between two cities,
which was considered to be a privileged operation, not intended for customers.
Then they would ask the remote operator to connect to a local number,
bypassing the billing system and allowing for free long-distance calls.
In 1960, the Western Electric Number 4 Crossbar was found to have a vulnerability,
known as the infamous Blue Box exploit:
a carefully timed 2600 Hz tone could be sent down the line
to trick the remote switch into thinking the trunk was idle,
allowing for that free phone call.
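The root flaw here was in-band signaling: the 2600 Hz supervisory tone traveled over the same channel as the caller’s voice, so any subscriber could forge it. As a rough illustration (modern C, obviously not period hardware), here is a minimal sketch that synthesizes one second of that tone as raw PCM samples; a blue box did the equivalent with a simple oscillator.

    /* Minimal sketch: one second of a 2600 Hz sine tone as raw
     * 16-bit mono PCM at an assumed 8 kHz sample rate.
     * Build: cc tone.c -lm, then pipe stdout to a raw-PCM player. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double pi = 3.14159265358979323846;
        const double rate = 8000.0, freq = 2600.0;
        for (int i = 0; i < 8000; i++) {
            short sample = (short)(32767.0 * sin(2.0 * pi * freq * i / rate));
            fwrite(&sample, sizeof sample, 1, stdout);
        }
        return 0;
    }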
This is where the phone company learned the hard way that user input was a pain in the ass.
Some children’s toys, as well as home-built electronics could be used to circumvent the system to make free calls.
Phone phreaks would end up abusing the phone system for over a decade,
well into the 70s using this exploit.
Okay, now the computer history we’re more familiar with.
We know the Colossus predated the computers that were to come, but the 1960s brought us real
multi-user digital machines.
They also brought a handful of computer vulnerabilities that are very familiar to us, even today.
This is the IBM 7094, the first multi-user computer system
I am aware of that had software vulnerabilities
resulting in gained privileges.
From ‘62 to ‘66, issues were determined to allow for disclosing passwords,
bypassing time restrictions, and crashing the system.
While the 1960s were kind of a blip on the radar, the 70s brought us
a lot of fun in the vulnerability world.
Who remembers the wonderful game, “The Oregon Trail”?
While not a multi-user game for most of its life, you could play it over TTY terminals and a phone line.
It reminds us that even for harmless apps, trusting user input is bad.
Kids figured out that inserting a negative value for coins could cause an integer underflow,
letting them manipulate currency.
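I haven’t seen the original BASIC source, so this is a minimal sketch in C of the pattern being described: the program subtracts a player-supplied signed value without a sanity check, so entering a negative amount increases the balance instead of decreasing it.

    /* Hypothetical sketch of the negative-value money trick. */
    #include <stdio.h>

    int main(void) {
        int cash = 700;   /* starting money */
        int spend;

        printf("How much will you spend on supplies? ");
        if (scanf("%d", &spend) != 1)
            return 1;

        /* Missing check: if (spend < 0 || spend > cash) reject. */
        cash -= spend;    /* entering -9999 *adds* 9999 to cash */

        printf("You now have %d dollars.\n", cash);
        return 0;
    }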
This is a PDP-10, manufactured starting in 1966,
a line that carried into the 80s.
It ran one of two operating systems, either TOPS-10 or TENEX.
In ’72, TENEX was found to have a timing attack vulnerability,
allowing the user to figure out another’s password.
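The vulnerable pattern is a check that bails out at the first wrong character, so how far it gets is observable. On TENEX the observable was reportedly a page fault rather than a stopwatch: place the guess across a page boundary and a fault reveals the check got that far. A minimal C sketch of the flawed comparison (the real code was PDP-10 system code, not C):

    /* Early-exit comparison: the position of the first mismatch
     * leaks information, letting an attacker confirm a password
     * one character at a time. The fix is a constant-time compare. */
    #include <stddef.h>

    int check_password(const char *guess, const char *real) {
        size_t i;
        for (i = 0; real[i] != '\0'; i++) {
            if (guess[i] != real[i])
                return 0;   /* early exit is the side channel */
        }
        return guess[i] == '\0';
    }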
Next is a Honeywell DPS 6 running GCOS 3, originally from General Electric.
It had three vulns between ’72 and ’79.
The TUTOR programming language was developed for the PLATO IV computer system seen here.
In ‘74, with crafted EXT commands sent to a remote terminal,
you could lock it up, requiring a power reset to fix,
and remote denial-of-service was born.
This is MIT’s Honeywell 6180, one of many computers that ran Multics.
In the 70s, there were at least 16 vulnerabilities in Multics.
Statistically, that made it kind of the Oracle of the 70s.
Those vulnerabilities included privilege escalation, denial-of-service, file encryption compromise, and information disclosure.
We know about several of them because, in early ’79,
Roger Schell led a team at the Naval Postgraduate School to attack Multics.
During the test, they penetrated a Multics installation and added a backdoor.
That backdoor was tiny and required a password for use, “so it was neutered,” as Schell describes it.
The test impacted multiple machines and they ultimately ended up modifying Honeywell’s
mastercopy of Multics.
Even after the team told Honeywell that it existed and how it worked,
the vendor couldn’t find it.
Instead of looking further and fixing,
they kept distributing it even with the backdoor in place.
That technically makes it the first government-backdoored operating system, of sorts.
Interestingly enough, Karger and Schell wrote a paper in ’74
about doing the same thing five years earlier, but that backdoor wasn’t distributed.
This is a model 91 IBM 360 computer used at NASA.
The 360 series ran OS/360, one of three variants.
In 1974, it was found to have access restriction bypass vulns.
This operating system is still used today,
and System/360 software is still supported by IBM’s z/OS
on the Z series computers.
More fascinating:
between 1964 and 2004,
40 years of use, just one vulnerability was disclosed in that operating system.
That’s it. From 2004 on, 75 vulnerabilities have been disclosed in software
for the operating system.
This is a PDP-11 at Tottenham College in 1978.
RSTS was a multi-user time-sharing operating system from Digital Equipment Corporation.
In ‘75, four vulnerabilities were identified in RSTS including denial-of-service,
user credential disclosure,
file disclosure, and an unspecified remote issue with the login process.
That last bit is interesting to a vulnerability historian like myself.
Rather than full disclosure of the vulnerability, we have limited disclosure
from administrators who saw it used to compromise a system in the wild.
This is a Xerox 560, capable of running the CP-V operating system released in ’73.
Two years later, a vulnerability was found that allowed a local user to bypass
the built-in memory protection to elevate privileges.
And another PDP-11…
pictured are Dennis Ritchie and Ken Thompson.
Those two names might be familiar as they are the creators of Unix,
which is the basis for hundreds of subsequent variants including Solaris, AIX, BSD, macOS, and Linux.
In ’75, an unspecified vulnerability was found in the login process related to ray checking.
Another vague disclosure, this time from HP Labs,
which is probably the first vendor to disclose a vulnerability.
Six years later, another V6 issue was found in the issued program,
which arguably is when the floodgates opened for Unix vulnerabilities.
A dozen more would be reported over the next 10 years.
This IBM punch card system was
part of a larger computer that ran one of four different operating systems.
Two vulnerabilities were found: one in ’77 for privilege escalation,
and one in ’79 for password file disclosure.
At this point, we should note that the same types of vulnerabilities keep coming up, over and over,
from half a dozen vendors.
Last, we move back to encryption, but in the form of digital algorithms.
The New Data Seal algorithm was a commercial cipher.
It was a precursor to the Data Encryption Standard, or DES,
which is still a foundation of encryption used today.
In ’77, NDS was found vulnerable to a slide attack that resulted in a full compromise.
This was one of the early computer-based algorithms to fall, and it would be one of hundreds, including DES.
Vulnerabilities were surfacing often enough in the 70s that they were a concern to many.
From here, we get into a more modern security world that we all recognize.
Remember, most admins back then, were not security people
and few were dedicated to that task.
Rather, they were expected to secure their systems only in rare cases.
By the 80s, vulnerabilities were becoming prevalent enough
that some decided lists of them should be maintained.
Those first lists were basically the early vulnerability databases.
Different than today’s, but often the same intent and goal:
catalog all the vulns.
Before the 80s, the original vulnerability database,
or VDB, was likely the list of repaired security bugs in Multics from 1973.
While it wasn’t exclusively security bugs,
for a vulnerability historian, it’s kind of a holy grail.
This vulnerability list was eventually put into a book format, or so it seemed,
and printed in 1977.
That image you see is the generic “we don’t have a cover image” image that Google Books uses, I learned.
Obviously, I wanted to get a print book in order to review it and integrate any missing vulns into VulnDB.
So great, Google fails to find a copy, not even on Amazon or eBay.
But look, I can find it in a library.
It exists in one library, according to Google Books and it’s 941 miles away
in California.
No one wants to go to California, right?
So of course, this prompted my logical, reserved response:
I wasn’t happy.
For $20, I had a digital copy waiting for me at a small conference in Pennsylvania,
courtesy of a friend who had requested that the library send a copy to her.
Who knew professors had such incredible power.
So after two years of searching, I finally had it.
Ultimately, six more bugs from Multics would be added to VulnDB as a result.
And since that whole adventure,
the list was finally put online by someone else.
So back to it.
The 70s and 80s also gave us our first taste of supervisory control and data acquisition
or SCADA vulnerabilities.
SCADA is the term for the systems running little things like the power grid, water systems, and other infrastructure.
Note that while the first publication of a SCADA vuln was in 1983,
the problem had actually been around a while.
Also interesting to note that the next SCADA vuln would be published a full 17 years later in 2001.
In 1983, the Nuclear Regulatory Commission issued Information Notice 83-83
about portable radio transmitters and nuclear power plants.
It seemed harmless enough.
It wasn’t labeled an advisory or designated important.
But what did it really mean?
The first incident occurred at a nuclear power plant
in Alabama in ’75, where
technicians figured out a specific differential relay
was radio-frequency sensitive.
Translated, a standard walkie-talkie could shut down critical systems at a nuclear power plant.
Let that sink in.
And yes, first incident,
meaning there were more between ‘79 and 2011 at different nuclear reactors.
Up to ’94, you saw primarily mainframe and Unix-based vulnerabilities, both local and remote,
and encryption algorithms
were falling like flies.
Encryption algorithms continued to have cool names.
Then 1994 hit, and Pandora’s box opened on the vulnerability world shortly after.
With the commercialization of the internet,
birth of the World Wide Web, and more rapid deployment,
we saw the eternal fountain of web based vulnerabilities.
Over the next 15 years, we’d see yearly vulnerability counts jump considerably as a result.
Over 23,000 vulnerabilities were disclosed in 2019 alone.
By the way, this is the first web server ever.
And with that, we’ll move onto the next part of this talk.
One of the foundations of security
is the cat-and-mouse game of patching against vulnerabilities.
Every organization in the world does this to one degree or another.
But three decades later, VDBs still aren’t on easy street.
I’ll cover some of the reasons why VDBs have problems.
First, why do they matter?
Because they are the foundation of vulnerability management.
Firewalls, intrusion detection, vulnerability scanners all rely on them.
Organizations that rely on vulnerability intelligence get it from a VDB in some fashion or another.
Mature organizations can use this data to make better decisions and avoid problem vendors and software.
But all of that assumes we have solid data.
If you don’t know about a vulnerability, you obviously can’t protect yourself against it.
Even in 2020, we see reports of vulns from eight years ago still being exploited.
You can buy all the blinky boxes you want,
but it’s not gonna help if you aren’t getting good vuln intel
because they simply won’t be reliable.
Let’s start with problems seen in VDBs.
They assume a lot,
that includes what software you use, what details are important to you,
which vulns you care about, and they assume that you are happy with your coverage.
Many are commercial and impacted by business decisions such as working from 9 to 5.
The government’s database, CVE, doesn’t actually track when a vulnerability was disclosed.
Some don’t even include data from CVE, which is free and easy to use.
There are a lot of reasons VDBs can fall short.
For example, they speak different languages, meaning some can’t agree on what a VDB is or what a vulnerability even means.
Most VDBs largely operate how they did the day they started.
Every product and service evolves except seemingly most VDBs.
Why?
Complacency and the Mendoza Line.
The Mendoza Line is an expression in baseball, deriving from the name of shortstop Mario Mendoza,
whose poor batting average is taken to define the threshold of incompetent hitting.
CVE, run by MITRE and funded by taxpayers,
is the Mendoza Line of the VDB world.
Many third-party offerings are based almost entirely on CVE,
so they end up close to the line as well.
VDBs simply never had enough resources to keep up with what’s out there.
Disclosures can be a nightmare and not just for the researchers.
Once a vulnerability is disclosed, it’s often not the end of the story.
Many of them don’t even capture all the relevant or helpful bits.
Abstraction, a simple concept, but a nightmare in our world.
Abstraction is the way we catalog a vuln or a group of several vulns.
A single CVE ID may map to as many as 66 different vulns by our standards.
At other times, they may assign three different IDs to one vulnerability.
IBM’s database serves their security products, so one vulnerability may have two entries,
that map to two different ways to check for it.
The way we abstract directly leads to how we count the vulns and generate statistics.
If you say a disclosure is a single vuln, and I say it’s three,
we’re gonna have very different stats.
Secunia is one example, as they would create many entries for the same vuln.
Like Heartbleed,
for which they created 36 entries.
If we followed suit, we would have 712 entries for Heartbleed,
all for the same OpenSSL vuln.
Secunia says their method of counting is a viable metric for the number of distinct vulns
when it clearly is not.
Stats are only as good as their explanation and disclaimers.
The legend on the chart is a good start,
but it doesn’t begin to speak to where the data came from,
how it was tabulated, and what statistical biases are present.
So what are the problems we face while gathering information?
The language barrier wasn’t much of an issue even five years ago.
With the CNVD, or Chinese National Vulnerability Database, operating at a greater capacity,
we’re struggling to understand reports.
Google Translate does a poor job with Chinese.
Even when it works and we can understand the type of vuln, there’s a technical barrier.
CNVD may say it’s a NULL pointer dereference resulting in code execution,
except that’s extremely rare.
How rare?
We have one documented case from 2012,
where Adobe said a NULL pointer dereference
could result in arbitrary code execution.
Since then,
there have only been 12 other NPDs in Adobe products,
none of which resulted in code execution.
The solution?
Each VDB team needs to have someone who reads Chinese fluently,
and Japanese, and Russian.
A vulnerability that can be exploited remotely, without any other person being involved, is critical.
A vulnerability that requires someone to click and perform an action is still critical,
but it represents a slightly higher bar to exploit.
We know that people often click, and convincing them to do so is trivial at times.
But that distinction is easy to make, so we need to make it,
yet researchers and vendors often don’t, or they mix them up.
For those familiar with AddressSanitizer or a debugger,
you see this output frequently, I bet.
In our world, we take particular note of whether an issue triggers a read or a write,
because the impacts are extremely different.
A read leads to denial-of-service or information disclosure.
A write can lead to code execution.
Many researchers and some developers don’t understand this and treat both the same.
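To make the distinction concrete, here is a minimal C example with one out-of-bounds read and one out-of-bounds write. Compiled with -fsanitize=address, AddressSanitizer labels the first as a heap-buffer-overflow READ and the second as a WRITE; it halts on the first error, so trigger them one at a time.

    /* One-byte heap overflows, read then write. The read can leak
     * data or crash; the write corrupts adjacent memory and is the
     * one that can lead toward code execution. */
    #include <stdlib.h>

    int main(void) {
        char *buf = malloc(8);
        if (!buf) return 1;

        char c = buf[8];   /* out-of-bounds READ */
        buf[8] = c;        /* out-of-bounds WRITE */

        free(buf);
        return 0;
    }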
If you read news articles, typically on security focused sites,
note that they often are quick and dirty,
they tend to parrot vendor advisories and do no analysis of their own.
But we can’t be too mad at developers.
It’s faster to fix the buggy code and patch the vulnerability than to do root cause analysis,
write an advisory, and then properly catalog it.
In this case, a researcher advisory counted the presence of a blank password along with
the presence of a hard-coded password
as two separate issues.
There’s an argument to be made if the accounts are different,
as the solutions are different:
change the password, or wait for a vendor patch.
That becomes a pedantic VDB argument,
which is my favorite kind.
But look closely and we see they are talking about the same account;
that is certainly a single vulnerability.
In many cases, the same content is posted to multiple places;
even vendors do it, and we often have to catalog both in case someone searches for one URL or the other.
How about all the popular disclosure points like Bugtraq or Exploit Database?
How about different public and private databases,
national databases,
blogs that discuss them?
In a perfect world, we reference all of them.
But this falls under the resource problem for VDBs.
When a researcher doesn’t include a vendor link, we’re sometimes left to guess or to do a lot of time-consuming digging.
If, after quite a bit of searching, it isn’t immediately obvious,
we have to ask the researcher.
This is time we generally don’t have.
Sometimes we run into developers who write less-than-helpful commit messages,
which increases the time required to find the solution considerably.
So a PSA from the VDB world,
please reference tickets, pull requests, advisory IDs,
or use keywords to help make fixes more readily apparent.
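For example, a made-up commit message like the one below (the issue number and CVE ID are placeholders) makes a fix trivially matchable to its disclosure:

    Fix out-of-bounds read in header parser

    The length field from the request was trusted without validation,
    allowing a crafted packet to read past the buffer.
    Fixes issue #1234. Addresses CVE-2020-XXXXX.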
We have to try to understand researcher advisories that are contradictory, to say the least.
This example has “local,” “local physical,” and “remote.”
Vendors can sometimes write advisories that are so heavy on lingo,
lingo that’s specific to their product that it becomes unintelligible.
Advisories have long served as advertising for security researchers and companies
but sometimes they get ridiculous.
This is a screenshot of a single vulnerability disclosure on the right.
See that little box on the bottom? Yeah, me neither.
Zoom in on it and we see after all of that,
it’s fixed.
No information on what the fix is and no link.
So sometimes fame outweighs the actual help.
As you recall, CVE is an industry standard and is widespread and quoted frequently.
The entire point of this is to have unique identifiers for vulnerabilities.
Since the CVE ID has existed, errors have happened.
Rather than cut and paste, people type them out
and vulnerability chaos ensues.
Not quite that dramatic, but some do jokingly refer to me as the CVE police.
Anyone doing basic VDB work should run across these weekly, if not more often.
MITRE, who runs CVE, doesn’t seem to care.
And why do this?
Because accuracy matters.
And I’m a masochist apparently.
Some cases are real easy to spot as simple typos,
but sometimes that little typo leads down a rabbit hole
that requires extensive notes clarifying what happened with the assignment.
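Much of this can be caught mechanically before it propagates into a database. Here is a toy sanity check in C (my sketch, not anything MITRE ships) for the modern CVE-YYYY-NNNN format, where the sequence portion is four or more digits:

    /* Toy validation of CVE ID syntax: "CVE-" + 4-digit year +
     * "-" + a sequence of at least 4 digits. Catches truncated
     * years and stray characters, the most common typos. */
    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    int looks_like_cve(const char *id) {
        if (strncmp(id, "CVE-", 4) != 0) return 0;
        const char *p = id + 4;
        for (int i = 0; i < 4; i++)            /* four-digit year */
            if (!isdigit((unsigned char)p[i])) return 0;
        if (p[4] != '-') return 0;
        p += 5;
        size_t n = strlen(p);
        if (n < 4) return 0;                   /* sequence >= 4 digits */
        for (size_t i = 0; i < n; i++)
            if (!isdigit((unsigned char)p[i])) return 0;
        return 1;
    }

    int main(void) {
        printf("%d\n", looks_like_cve("CVE-2014-0160"));  /* 1 */
        printf("%d\n", looks_like_cve("CVE-214-0160"));   /* 0: bad year */
        return 0;
    }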
But wait, there’s more headaches.
We’ll do these rapid fire.
A handful of commercial exploitation frameworks out there are motivated to keep exploit details private,
making it difficult to determine if they are the same issue.
We love disclosure timelines, but cringe when they involve bad sci-fi time-travel plots.
I get it, I really do,
but vendors putting vuln details behind paywalls
force customers to retrieve them; those who outsource their security work find that people like me can’t access the details.
Disclosure done via LinkedIn, under accomplishments or publications or whatever,
or even the bio,
is typically without actionable details. It’s not helpful.
Disclosures via Twitter profiles and tweets? Really?
Always with the CVE ID still in reserved status, it seems, meaning no actionable details.
CPE,
which is a different standard, allows you to programmatically use vuln data to easily integrate it
into your software.
But with CVE’s limits,
a lot of vendors and products won’t have actual CPE data,
making that integration very difficult.
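For a sense of what that programmatic use looks like, here is a minimal C sketch that splits a CPE 2.3 formatted string into its colon-separated fields. A real parser must also handle escaped characters such as \:, which this toy version ignores.

    /* Toy CPE 2.3 parser: cpe:2.3:part:vendor:product:version:...
     * Splits on ':' and prints the fields a scanner matches on. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char cpe[] = "cpe:2.3:a:openssl:openssl:1.0.1f:*:*:*:*:*:*:*";
        char *fields[13] = {0};
        int n = 0;

        for (char *tok = strtok(cpe, ":"); tok != NULL && n < 13;
             tok = strtok(NULL, ":"))
            fields[n++] = tok;

        if (n >= 6)
            printf("vendor=%s product=%s version=%s\n",
                   fields[3], fields[4], fields[5]);
        return 0;
    }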
Trying to get clarity from a researcher, vendor, or CVE numbering authority can be a raging headache.
There are over 100 million repos on a single site, GitHub.
How do we begin to scale to that volume?
What else?
Here’s a snippet of the laundry list of other things we don’t have time to get into.
Why does all of this matter?
As we find ourselves in the middle of an election,
a considerable number of states use software-based electronic voting machines,
some of which are 10 years old.
Every modern automobile has computer systems in it
and we’re moving to a new generation of cars
heavily built on software.
Remember, it isn’t just about vulnerabilities either.
Regular software bugs may have serious consequences.
Even those default credentials you left in for testing can cause issues if they make it into production.
Like this nice motorcycle that has a four-digit default code to start the bike.
Even more scary, medical devices have become heavily based on software.
As far back as 1985, we saw the Therac-25, which is pictured here,
which had a race condition that could cause a potentially lethal dose of radiation.
And it did, in fact, kill people.
Then there’s modern life-saving technology, like a pacemaker with wireless diagnostics
but no encryption or authentication.
How bad of a threat is this?
Former Vice President Dick Cheney had the wireless feature of his pacemaker disabled in 2007.
That was one year before researchers published a seminal paper on ICD vulnerabilities,
and five years before a pacemaker-hacking plot appeared in the TV show “Homeland.”
It’s not just pacemakers but insulin pumps,
which also enjoy no authentication or encryption but do accept wireless signals
that can dump the insulin vial into a person.
The obvious lesson is that vulnerabilities are not new by any stretch of the imagination.
The last 118 years have shown us repeatedly
that hardware and software are vulnerable.
Clear text transmissions from 1902,
crypto failure from 1939,
blindly trusting user inputs since 1955,
buffer overflows since 1973.
Why?
Quite simple, because we aren’t addressing the underlying problem.
Instead, we keep putting bandaids on top of all the other bandaids,
desperately trying to plug the holes of a sinking ship,
rather than fix the issues at the root.
We continue to come up with new, exciting, and profitable ways to fix the symptoms.
We spend countless hours debating full disclosure and how much time to give vendors to fix a reported vulnerability.
Meanwhile, vendors, including some of the biggest, still drag their feet
and take as long as three years to fix simple issues, even after the issue is disclosed.
The age old question is, how do we fix this?
I’m more of a breaker than a builder, so it’s not exactly my brand of cola.
I think the power to fix this does not lie with security folks.
I think it lies with the developer world.
Secure coding practices and audits before code is pushed to production
are our likely savior.
Before we get into questions, I want to invite you to come hang out with
me and the other members of our team. We will be in the
RBS booth immediately following this session for about an hour.
If you have any questions about vulnerabilities or anything else, let’s chat.
Thank you for your time.