
Feature: Scoring

When we stop and think about the biggest problems in communication between developers and security folks, one of them is wrapping our heads around priority.

The Problem with Reporting Everything

Generally, security takes the approach of reporting everything it finds. A single security activity can produce hundreds, thousands, or even tens of thousands of findings. JASP routinely finds hundreds of items.

Are there really hundreds of items you need to worry about this instant?

This is an honest mistake on the part of the security team. We feel obligated to report anything that could be an issue. If we don’t report it and you get hacked because of it, whose fault is that? (We assume you will say it is ours.)

The other side of the issue, though, is that when developers see a list of 1,000 items, they often shut down and give up. A single really important fix can wait months while a whole project is carved out to address the findings en masse.

Our solution to this is to offer scoring to help prioritize.

Scoring Models

The security industry often falls back on CVSS or other generic scoring models. These are flawed in practice and complicated to apply in real-world environments. We track CVSS scores but don’t use them.

Our first pass at scoring leans on our team of experts. Each of us scores each finding, and the JASP Tool takes the average of the team’s scores. This is based on the exact findings as they appear in our results. It’s like an expert system.
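The expert-average idea can be sketched in a few lines. This is a minimal illustration, not JASP's actual implementation: the finding names and the 0–10 scale are assumptions made up for the example.

```python
from statistics import mean

def prioritize(findings):
    """Collapse each finding's expert scores into one average, then sort.

    `findings` maps a finding name to the list of scores our reviewers
    assigned it (hypothetical 0-10 scale). Returns (name, score) pairs,
    highest priority first.
    """
    averaged = {name: mean(scores) for name, scores in findings.items()}
    # Highest average first, so the most important issue tops the report.
    return sorted(averaged.items(), key=lambda item: item[1], reverse=True)

prioritized = prioritize({
    "sql-injection": [9, 10, 9],        # every expert agrees this is urgent
    "missing-security-header": [3, 2, 4],  # low priority across the board
})
print(prioritized[0][0])  # the highest-scoring finding comes first
```

The point of averaging is that one outlier opinion can't dominate: the result reflects the team's consensus, which is what a developer reading the report actually needs.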

Our scoring infrastructure is built to accommodate multiple scoring models, including ones based on ML - and yes, we intend to go there. Long term, we believe that our approach to scoring will help our users find the most important issues.
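One common way to structure that kind of multi-model infrastructure is a registry that lets new models plug in behind a stable interface. The sketch below assumes a simple callable-per-model design; the names and registry shape are illustrative, not a description of JASP's internals.

```python
from typing import Callable, Dict, List

# A scoring model maps a finding's raw inputs (here, a list of expert
# scores) to a single priority number. The registry lets the caller swap
# models - an expert average today, an ML-backed model later.
ScoringModel = Callable[[List[float]], float]

MODELS: Dict[str, ScoringModel] = {}

def register_model(name: str):
    """Decorator that adds a scoring model to the registry under `name`."""
    def decorator(fn: ScoringModel) -> ScoringModel:
        MODELS[name] = fn
        return fn
    return decorator

@register_model("expert-average")
def expert_average(scores: List[float]) -> float:
    return sum(scores) / len(scores)

def score(finding_scores: List[float], model: str = "expert-average") -> float:
    """Score one finding with the chosen model."""
    return MODELS[model](finding_scores)
```

With this shape, adding an ML model is just another `@register_model(...)` function; nothing upstream changes, which is what makes the long-term migration plausible.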

The Benefit

The benefit is that we can show our customers the most important things we see.

This might be one of the most important things we can do to make security easier.
