Building a Risk-Scoring Model for Applications: Initial Algorithm and the Underlying Data Elements

Most risk-scoring models for applications are too simplistic, lacking the breadth of data points needed to produce an accurate risk index. A few open-source projects attempt to build application risk models sophisticated enough to account for all of the data and associated nuances needed to pinpoint a risk score that is accurate and meaningful. The problem is that they are too complex to implement and manage on an ongoing basis. In response, Contrast Security recently released a RiskScore (Beta V.5) based on an algorithmic risk model that accounts for the relevant data points while remaining simple to use and manage. This Inside AppSec Podcast interview with Contrast CTO and Co-Founder Jeff Williams, CISO David Lindner, and Sr. Data Analyst and Data Scientist Katharine Watson explores the reasons Contrast developed an algorithmic RiskScore, why and how it plans to release it as an open-source project, how organizations can contribute to it and leverage it, and what the results look like when it is applied to vulnerability types using Contrast Labs’ application vulnerability and attack data.
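To make the idea of an algorithmic risk score concrete, here is a minimal, hypothetical sketch of how a weighted scoring model might combine normalized risk factors into a single index. The factor names, weights, and formula below are illustrative assumptions for discussion only, not Contrast Security's actual RiskScore algorithm.

```python
# Hypothetical weighted risk-score sketch (NOT Contrast's RiskScore).
# Each factor is a value normalized to the 0-1 range; weights express
# how strongly each factor should influence the final 0-100 score.

def risk_score(factors, weights):
    """Combine normalized risk factors (0-1) into a 0-100 score."""
    total_weight = sum(weights.values())
    weighted = sum(factors[name] * w for name, w in weights.items())
    return round(100 * weighted / total_weight, 1)

# Illustrative inputs: how common a vulnerability type is, how often
# it is attacked, and how severe the impact would be if exploited.
factors = {"prevalence": 0.6, "likelihood": 0.8, "impact": 0.9}
weights = {"prevalence": 1.0, "likelihood": 2.0, "impact": 3.0}

score = risk_score(factors, weights)  # weighted average scaled to 0-100
```

A real model of the kind discussed in the episode would draw these factors from live vulnerability and attack telemetry rather than hand-set constants, but the same weighted-aggregation shape applies.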

About the Podcast

Contrast Security provides the industry’s only DevOps-native AppSec platform, using instrumentation to continuously analyze and protect software from within the application. This enables businesses to see more of the risks in their software with fewer development delays and less AppSec complexity. The Contrast platform integrates seamlessly into development pipelines, enabling easier security bug and vulnerability fixes that significantly speed release cycles. The Contrast Inside AppSec Podcast features informative, engaging interviews with security, development, and business leaders on application security trends and innovation. Visit Contrast Security at contrastsecurity.com.