Discussing the Value of Data – Part 1 – Data as a Currency

This is the first piece in a series that will examine the economic disparity between the profits being generated through use of consumers' aggregated data (as realized by any number of software firms), and the price that consumers pay when those firms fail to protect that information.

Data is a peculiar thing. It doesn’t exist until you capture it, and it isn’t useful until you can compare it to some other data. The process of comparing data points to one another is like a series of steppingstones to an insight; to knowledge. Over time – in theory – these insights can create a body of knowledge that contributes to improvements in our daily lives.

Take, for example, your daily commute. Knowing how far you’ve traveled toward your destination is good, and better if you know how far away your destination was to begin with. Further, knowing how much time it took to travel that distance might even allow you to make a good estimate of your expected arrival time. If you make this trip often then over time you may begin to refine your estimates even further, and develop insights about the times of day when travel is quicker (or slower). Eventually these insights may accumulate into a mental model that describes optimal routes, times, and modes of travel that you can take. To the degree that this model minimizes your travel time, your data collection and analysis may have made a small improvement to your life.
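That commuter's mental model can be sketched as a toy calculation. The numbers below are invented for illustration, not drawn from any real trip:

```python
# A toy version of the commuter's mental model: estimate the time remaining
# from the distance covered so far and the time elapsed. Numbers are invented.
def estimated_minutes_remaining(miles_traveled, miles_total, minutes_elapsed):
    """Project remaining travel time assuming the pace so far holds."""
    pace = minutes_elapsed / miles_traveled          # minutes per mile so far
    return (miles_total - miles_traveled) * pace

# 6 miles into a 10-mile commute after 18 minutes:
print(estimated_minutes_remaining(6, 10, 18))  # → 12.0
```

Refining the model over time, as the paragraph describes, would mean replacing the single observed pace with an average over many trips, bucketed by time of day.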

If this data allows you to improve your life in some observable way then it must have value. You could state that value in terms of time saved, and give it a dollar denomination derived from the effective hourly earnings of your job. Or perhaps traveling less means burning less gas, which has an obvious cash value. This is all nice in theory, but until the improvements to your routine become real, the value of the data itself is extremely difficult to determine. We lack the ability to measure the value of raw data, yet with some relatively simple arithmetic we can readily observe the real value of the insights and improvements that data enables. This value gap results in the improper treatment of raw, granular data, and it lies at the heart of the ongoing debate about data security and personal privacy.
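To make that arithmetic concrete, here is a minimal sketch. The wage, the minutes saved, and the workday count are all assumptions chosen for the example, not measured figures:

```python
# Hypothetical illustration: the dollar value of the insights gained from
# commute data, priced at the commuter's effective hourly wage.
# All figures below are assumed, not drawn from any real study.

def commute_savings_value(minutes_saved_per_day, hourly_wage, workdays_per_year=250):
    """Annual dollar value of time saved, priced at the commuter's wage."""
    hours_saved = minutes_saved_per_day / 60 * workdays_per_year
    return hours_saved * hourly_wage

# Saving 10 minutes a day at a $30/hour effective wage:
value = commute_savings_value(10, 30)
print(f"${value:,.2f} per year")  # → $1,250.00 per year
```

The point of the sketch is the gap the paragraph describes: this number is only computable after the improvement is realized; the raw location and timing data had no obvious price beforehand.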

We use data as currency – as a medium of exchange – in our daily lives. For example, our personal identities are established by a well-known piece of data that we call a Social Security Number. This 9-digit identifier allows us entry into a vast financial and regulatory system that is structured around each person’s unique, verifiable identity. This financial regime requires employers, banks and credit cards to verify their customers’ identities, which is done through an exchange of data – of a Social Security Number – with the employer, bank, or credit card company. In return for this exchange, one may become eligible to hold a job that pays a salary, or to hold a bank account for money storage, or to obtain a credit card and enjoy the powers of purchasing through leveraged debt.

As simple as it may seem on the surface, the Social Security Number is a citizen’s small claim to a patch of land in a trusted system of government regulation. This system is the bedrock upon which our financial system rests. Without this little piece of data you would be deemed untrusted; your participation in the financial system would be limited to cash transactions.

Another example exists in the healthcare space, where our identities (based again on the SSN) allow us to create medical records containing medical history and critical health information. Since each person’s medical history is unique and in many cases very private, it’s important that we have a trusted system to verify a patient’s identity. By keeping records in a consistent, verifiable way, doctors can track our health over time. Medical record storage and transfer eliminates the need for us to keep and carry cumbersome records of our own (which would be inconvenient) or to depend on our own judgment and memory to reproduce our medical history (which would be terribly unreliable). Without the use of an identifier, we would be unable to participate in a health system that provides continuous, lifelong care. Our doctor visits would be limited to one-off occasions and emergencies, with no way for a provider to identify and treat long-term health concerns.

In both examples, data provides a base layer of trust, upon which we build other layers of trusted information and knowledge. These layers allow two parties to trust one another, even though proof of one’s identity is only implied by the social security number. This method of structuring data to organize people into a system allows for broad application of measures that, on balance, improve people’s lives. Generally speaking, an individual who participates in the banking and healthcare systems is far better off than one who does not.

Both of these examples are offered as idealized abstractions of what both the financial and healthcare systems should be. Discussions of trust and verification are meant to describe the intention of the SSN as a tool, not to express opinions about the state of either the financial sector or the healthcare system. Neither is exempt from criticism, and both suffer from major deficiencies that warrant serious attention. It so happens that many of these deficiencies are rooted in a systemic mistreatment of important data, which will be discussed in the coming parts of this series.

The activities and influences of data collection and analytics expand far beyond these two bedrock examples. The expansion has resulted in new business models that are delivering new profits. In order to understand this rapidly diversifying landscape, it’s important to examine the benefits that are attracting new entrants, and why they keep coming. 

Security – The Last Passenger on the Train

“We could have done more, and most of what we did was in response to issues as opposed to in anticipation of issues.”

– Steve Crocker, Chairman of ICANN

So ends the first installment of a multi-part series by The Washington Post exploring the historical factors that have left us with an insecure internet. The message seems clear: we have built a house upon the sand, and may never recover any semblance of solid footing.

In essence, the author explains that in the internet’s early days data encryption was a heavy load for computers to carry, so the architects opted to add it later, if (or when) it became necessary. As Steve Crocker laments in the quote above, many now wish they had made a different decision. Others argue that it was the only decision possible at the time.

We hear the position that such explosive adoption of the internet could never have occurred had it been throttled at each point by (then slow) encryption technologies. We also hear the argument that researchers 30 or 40 years ago could never have predicted the security maladies we face today – particularly when those maladies afflict a population that is – still – woefully unprepared to think about security.

I’d like to introduce a metaphor: a train at a crowded platform, packing aboard as many passengers as it can as it begins to chug away. This train represents every new technology, every startup that forms around that tech, and every product those startups push out. Competition will always demand that the technology become a “minimum viable product” or nothing at all, and that the startup that can create that product most quickly will succeed. Time is of the essence, and the first passengers aboard are those who can make the train “go”! And so the train is continuously and forever racing away from the platform with only the “most critical” people on board.

Economics are no different today than they were 30 (or 300) years ago – save for the speed at which things operate and the depth of our understanding. When the cornerstones and foundation of the internet were laid, they were placed in a hurry. The tech was new, people needed the things it could do, and competitors were lurking. Things are no different today – except that the pace at which tech must be developed and adopted continues to accelerate. The implication is subtle, but frustrating: security always seems to be the last guy on the train as it speeds away from the platform.

So today, as our train races away, we have the opportunity to make a different decision. As we rapidly build new things that are destined to leave our sphere of influence almost instantly, we can put security on the train first. We can build things so they are fundamentally secure; so that future generations can build on top of them with confidence, as we would have if our forebears had built with encryption.

This is far more than just a technology issue, although technology is critical. As technology seeps deeper into the fabric of our daily lives, security is relevant to all things and all people. Public awareness has come a long way since the Target breach of December 2013 (and all those that followed), but knowledge is lacking and urgency has suffered as a result. The sentiment seems to be, if we don’t know what to do about it, then why should we spend a lot of time worrying?

Our collective challenge is to convince our employees, friends, neighbors and fellow citizens that we do know what to do, and that we need their help. We need broad, high-level awareness of security practices in order to ensure that our weak links continue to become stronger. We need to build these practices into our startups, our education programs, our infrastructure, and the insurance policies that protect it all.

When the security officer is the first person on the proverbial train, the train will be more secure as the rest of the passengers board, and those passengers will be more aware of security. When this becomes standard practice, we might begin shipping products and institutions to the future that can stand the test of time. If that happens, we will have learned from Steve Crocker’s lesson, and averted the doom that otherwise awaits digital technology.

United Airlines Hack: Fighting Vulnerabilities with Marketing

On the afternoon of April 15, 2015, security researcher Chris Roberts boarded a United Airlines flight and published a tweet that has echoed around the security world.

During Mr. Roberts' flight a United Airlines cybersecurity analyst became aware of the tweet and alerted the FBI, who met Mr. Roberts at his destination and questioned him for several hours.

The message Roberts sent refers to his research into vulnerabilities on commercial airlines’ flight control systems. He claims to have built a test environment to mimic the electronic systems of a particular type of plane, and in that environment, successfully taken control of the simulated airplane’s flight functions. Now the FBI alleges that Mr. Roberts took his research a (significant) step further – that he actually took control of a flight in progress and caused it to shift course from “cruise” to “climb.”

In the scenario described by the FBI, Mr. Roberts used his knowledge of airplane electronic diagrams to enter the systems via an Ethernet connection to the in-flight entertainment system. From there he would have been able to access the satellite phone system, which is also connected to various cabin control systems. Those cabin controls are, in turn, connected to flight avionics systems. While Roberts admits to entering the network and observing traffic on multiple occasions, he claims he never commandeered a flight. The FBI alleges that, during the interviews conducted following his April 15 tweet, Mr. Roberts admitted to briefly taking control of a flight.

If true, the implications are somewhat unsettling. Of course, we should expect to hear from some people that this means the terrorists are next in line. And of course we should be on the lookout for that – but an attack like this is unlikely. First, it takes a highly skilled and knowledgeable researcher to do what Mr. Roberts (allegedly, and by his own claims) is capable of doing. Hackers exist everywhere, but those of this caliber are rare and valuable – and thus unlikely to be deployed on a suicide mission. Second, a successful attack would need to be orders of magnitude more sophisticated in order to compromise the controls and then maintain control long enough to do anything terrible. Certainly, in the wake of this discovery, significant scrutiny will fall on the possibility of this type of attack.

What’s more unsettling is fairly mundane: the fact that we continue to receive confirmation of inadequate attention to security in systems design. There is no reason for an actor to be able to jump between the computer systems that control an airplane. Recognizing that the cost of re-fitting the electronic systems on an entire fleet of legacy planes – not to mention the logistics – would be nightmarish, we can understand why these legacy systems persist. We hope that newer engineering will consider newer problems.

United on Friday took an interesting step toward mitigation: the airline offered 1 million frequent flyer miles to ‘ethical hackers’ who are able to take remote control of an avionics system. Depending upon your valuation method, the reward amounts to between $10,000 and $25,000 – about the cost of an entry-level security assessment. Of course, this is a non-cash expense for United, and likely to have a negligible effect on profitability.
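For the curious, that range follows from simple arithmetic. The cent-per-mile values below are our assumed bounds for the valuation, not figures published by United:

```python
# The valuation range quoted above: 1,000,000 miles at an assumed
# per-mile value between 1.0 and 2.5 cents (our assumption, not United's).
MILES = 1_000_000
low, high = MILES * 0.010, MILES * 0.025
print(f"${low:,.0f} to ${high:,.0f}")  # → $10,000 to $25,000
```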

So in one sense, United has seized upon a clever way of demonstrating a proactive approach to a very high-profile problem. The media loves it, and some researchers will undoubtedly participate. In another sense, however, United has opted for the cheapest (and lowest-effort) approach to mitigation. The security problem promises to continue exacting a heavy toll on those who ignore it. United’s media-friendly approach to the headlines may be good PR, but it is bad practice.

Single Point of Failure: An Approach to Cyber Risk Evaluation

Your business shouldn’t have very many, but it probably has a handful – roles, functions, or processes that, if compromised or disabled, would cause your operations to grind to a halt. These critical points are often physical assets: warehouses, delivery channels, storefronts. They are subject to well-known risks common in the physical world – weather, fire, natural disaster – for which we have well-developed mitigation tools.

But at the same time, those critical functions often rely on computer systems whose powerful capabilities are rivaled only by their innate fragility. These systems represent potential failure points with repercussions that are not easily contained; they span the entire operation. Information technology systems represent a meta point of failure that supersedes all others. It is important to understand where these cyber risks originate in order to manage and mitigate them.

The potential for system failures represents your cyber risk. Recognizing and identifying those risks is the first step to securing your operations. The steps below offer a general guideline for defining the business risks you face through cyber exposure:

  1. List your critical operations – the things that, if ceased, would halt business. Revenue, for sure, but also production, procurement, and vendor operations. Define the things that would escalate to priority #1 in the event of an outage. If the answer is “everything,” then that is a good start toward understanding and de-risking.
  2. Identify the networks and software involved in keeping each of these operations running. This doesn’t have to be a detailed drill-down of individual applications, just a quick inventory of systems. Identify the people in charge of using these systems each day.
  3. Ask whether those systems can be restored quickly in the event of failure. Is critical data backed up regularly (and how often)? If the network is taken over by outside actors, can control be restored?
  4. Understand what is happening in each of those systems. What type of data is being transferred? Is it sensitive (how sensitive)? Are your technicians able to monitor activity on the network? If there is an invasion, are you able to restore the system to its original state? Are you able to analyze what happened to cause the breach?
  5. Talk to your vendors about their risks. How much of your critical IT or operational infrastructure is reliant on your vendors? Are they able to answer these questions about their own operations? Ask them.
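As a rough sketch (not part of any formal methodology), the inventory from steps 1 and 2 and the questions from steps 3 through 5 might be captured in a simple structure. The operations, systems, and answers below are made up for illustration:

```python
# A minimal sketch of the risk inventory from steps 1-2 and the questions
# in steps 3-5. Operations and answers are fabricated for the example.
from dataclasses import dataclass, field

@dataclass
class CriticalOperation:
    name: str                                      # step 1: would halt business if it stopped
    systems: list = field(default_factory=list)    # step 2: networks and software involved
    backed_up: bool = False                        # step 3: can it be restored quickly?
    monitored: bool = False                        # step 4: can activity be observed?
    vendor_reliant: bool = False                   # step 5: does a vendor control it?

    def risk_flags(self):
        """Unanswered or negative answers become open risk items."""
        flags = []
        if not self.backed_up:
            flags.append("no backup")
        if not self.monitored:
            flags.append("unmonitored")
        if self.vendor_reliant:
            flags.append("vendor dependency")
        return flags

ops = [
    CriticalOperation("Order processing", ["ERP", "payment gateway"],
                      backed_up=True, monitored=False, vendor_reliant=True),
    CriticalOperation("Warehouse dispatch", ["WMS"], backed_up=False),
]

# Prioritize whichever operation carries the most open risk items.
for op in sorted(ops, key=lambda o: len(o.risk_flags()), reverse=True):
    print(op.name, "→", op.risk_flags())
```

Even a list this crude supports the point that follows: counting the open items per operation gives you a first-pass priority order, and a starting point for a security budget.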

More than likely, the questions you ask in steps 3–5 will reveal some uncertainties. Building a better understanding of these uncertain areas will go a long way toward quantifying your business risk. This line of questioning will help you and your team prioritize what needs fixing first, which in turn can help you build a budget for security implementation.

It is important to remember that security is strategic, and can be a competitive advantage. Consider that, as a vendor, your goal is to inspire confidence in your customers. Your ability to demonstrate competence and confidence in your operations’ security is a great way to beat the competition and become a preferred vendor.

Breach Apathy, Meet Breach Fatigue...

Just about a year ago, most of us were learning about the Target data breach. It had been discovered just before Christmas, at the height of the holiday shopping season, and its scope was slowly revealed to the public over the subsequent months. During that time Target’s stock price suffered significantly (the firm lost $3.2B of market value between December ’13 and February ’14) and CEO Gregg Steinhafel, a 35-year veteran of the firm, resigned.

Since then, Target has become a byword for data breach. Even as dozens of other firms enter the fray and admit that they, too, have lost staggering amounts of consumer data, Target still looms large in consumers’ minds. It certainly wasn’t the first major breach; plenty of firms (Sony being one) have experienced large incidents before. Perhaps it was the timing of the breach, or the fact that it was the first one that seemed so closely tied to the departure of a chief executive. Whatever the reason, Target was the breach that made everybody sit up and take notice.

Those in the cybersecurity world were, in a sense, relieved at this development. At last, the public’s attention had turned to what is undoubtedly one of the greatest national security issues of our time. Finally, we could bring the topic out into the light of day and treat it properly, as a community threat. The Target breach, though it caused significant damage, seemed to usher in an era of awareness that could benefit the many.

Or so we thought. Since February 2014, when the full extent of the Target breach was divulged, a peculiar thing has happened. Data breaches and information security have become mundane and unremarkable – we are no longer surprised by, or concerned with, a major breach. In less than a year, dozens of high-profile breaches have left the public numb to the phenomenon. Consider the recent Sony Pictures hack: characterized initially as the work of a state actor with terrorist intent, this breach was an absolutely unprecedented event. Yet six weeks later it has all but disappeared from the news.

Before Target, it may have been said that the general population suffered from breach apathy. We must now face the possibility that breach apathy has given way to breach fatigue.

It will be hard to know which is worse without living through them both. Was the public apathy that preceded the Target breach responsible for the negligence that enabled the bad guys? Or will post-Sony fatigue lead to ever more careless behavior on the part of employees? It’s possible that it doesn’t matter, because both sets of problems are rooted in the same malfunction: effective security must begin at the board level and flow all the way through an organization. Anything less than strategic, long-term oversight will result in vulnerable operations and more breaches.

In simple terms, this means the security community must focus on bringing its work into the boardroom. That means making the business case for security investments – assigning a risk-rated value to assets that justifies a proposed investment in technology. It also means that the management community must seek out the technologists and help them build these business cases – help them understand not just the tactical concerns, but the strategic basis for their work.

When the executives and technologists can meet on common ground and discuss security as it relates to business strategy, then we will begin to see progress against both breach apathy and breach fatigue. When we recognize that security is truly a community effort, there is no longer any excuse to ignore the problem.

Beta Version Available on Salesforce.com

We are pleased to announce a Beta of our security assessment, def.inity. The tool is designed to help security vendors and consultants engage with new clients by walking them through a series of questions about the potential client's security controls. The assessment (which does not collect sensitive or proprietary information) provides a report that identifies which aspects pose the greatest risk of a breach. Based on this report, clients can more easily grasp the tangible implications of that risk and, together with vendors, work toward a solution based on the business's needs.

def.inity is intended for use by consultants in IT security, solution providers in security technology, and liability experts from the legal and insurance fields. Each of these roles faces a complex conversation with potential clients, and will benefit from the simplicity and clarity afforded by the def.inity assessment.

def.inity is built with the Force.com API, so it integrates seamlessly with Salesforce CRM platforms. Custom reporting (still in development) will be available through the standard Salesforce interface, as well as via a custom Tableau environment – so that it can be exported by any user for reports, presentations, proposals, or publications. If you are not a Salesforce user, we can still provide login credentials for you and make reports available via Tableau.

We are seeking Beta testers to help us understand the UX concerns of the assessment – Does it work well? Are the questions comprehensible? Would it help you work better with clients?

Beta testers who provide feedback and participate in a brief interview about the assessment will receive a free one-year subscription to the tool when the full version is released. Email us today to obtain login information and get started using def.inity.

Sample Client Report - Education

Below is a sample report delivered to a client who completed the Def.Inity assessment. It is intended as a high-level view of executives' awareness of cyber defenses within the firm, to help them form a tangible idea of where they are most at risk. We use the Council on Cyber Security's framework, the "20 Critical Security Controls" – hit the link to learn more about each one.

If you would like to complete an assessment, please email charles.leonard@scalarsecurity.com and we will set you up with a password to the demo site.

Industry Benchmark Data: Education

The screen below shows the average level of compliance with each of 20 Critical Controls across firms in the education space.

  • A score of 1 means perfect compliance (which is, in practice, impossible)
  • A score of 0 means the subject is aware of the control, but has not complied (in effect, accepting the risk)
  • A score of -1 means the subject is not aware of any controls, or unsure what it means to be in compliance (this is the worst state)
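To illustrate how this scale might roll up into a firm's average compliance score, here is a small sketch. The per-control answers below are invented for a hypothetical respondent:

```python
# Sketch of the -1 / 0 / 1 scoring scale described above, applied to
# invented per-control answers for a hypothetical respondent.
SCORES = {"compliant": 1, "aware_not_compliant": 0, "unaware": -1}

answers = {  # control number → self-reported state (illustrative only)
    1: "compliant",
    2: "aware_not_compliant",
    3: "unaware",
    4: "compliant",
}

numeric = [SCORES[state] for state in answers.values()]
average = sum(numeric) / len(numeric)
print(f"Average compliance: {average:.2f}")  # → Average compliance: 0.25
```

Averaging across many respondents in one industry is how a benchmark like the one shown above could be produced.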

Threat Vector Data: Crimeware

Crimeware represents a significant threat, and it is especially likely to target firms that have implemented the controls shown below with insufficient care. Scores between 0 and 1 indicate the percentage of attacks in which crimeware succeeded due, in part, to deficiencies in that control.

A score of 0 simply means that being compliant with that control would not have stopped a pure crimeware attack. It is important to remember that many attacks use multiple vectors.
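One way to picture how such a score could be derived is as a share of incidents. The incident records below are fabricated for illustration and are not drawn from any real dataset:

```python
# Illustrative computation of the 0-1 threat-vector score: the share of
# crimeware incidents in which a given control deficiency played a part.
# These incident records are fabricated for the example.
incidents = [
    {"vector": "crimeware", "controls_exploited": {3, 5}},
    {"vector": "crimeware", "controls_exploited": {5}},
    {"vector": "phishing",  "controls_exploited": {3}},
    {"vector": "crimeware", "controls_exploited": {1, 3}},
]

def vector_score(incidents, vector, control):
    """Fraction of incidents using `vector` that exploited `control`."""
    relevant = [i for i in incidents if i["vector"] == vector]
    hit = [i for i in relevant if control in i["controls_exploited"]]
    return len(hit) / len(relevant) if relevant else 0.0

# 2 of the 3 crimeware incidents exploited a deficiency in Control #3:
print(round(vector_score(incidents, "crimeware", 3), 2))  # → 0.67
```

A score of 0 under this scheme means no incident using that vector exploited the control in question, which matches the interpretation above.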

Defense Assessment: Anonymous Test Client

The report below puts 3 components together –

  1. Industry average compliance
  2. Anonymous client's self-reported compliance
  3. Most acute threat vector

You can clearly see where the client departs from industry norms, and where deficiencies represent a vulnerability. The firm has mostly ignored the controls, and remains unaware of protections that would guard against an immediate threat (Control #5).

Mouse over any of the data points to see an output.

Output Samples

Below is an example of Defense Intelligence provided through Scalar's Def.Inity assessment. This chart shows average compliance with each of the 20 Critical Controls across several key industries. A score of 1 means the respondent has implemented defenses (that's good) – a score of -1 means the respondent is unaware whether the defenses exist (that's bad).

Below are two basic readouts showing how breach attack vectors tend to coincide with a lack of rigor in certain control areas. You can interpret them as follows:

"25% of attacks that used Crimeware took advantage of insufficient attention to Control #3"

A full list of the 20 Critical Controls and the areas they cover can be found here.

Decision Day with ATI's SEAL Program

Last night we presented our progress through this summer's "Student Entrepreneur Acceleration & Launch" program. Scalar was one of 11 companies chosen by Austin Technology Incubator to represent the UT startup ecosystem, and we spent the summer testing, validating, and iterating our business model. 

All 11 companies' presentations – in Bioscience, Clean Energy, and IT/Wireless –  can be seen below. For Scalar, skip to 1:54:30.

On the Horizon...

We've got good things coming up, and wanted to share with you, the internet.


We need your help.


SXSW Panel Picker Voting – We proposed to host a panel at SXSW 2015. Titled “A Light in Dark Places,” it will address the importance of bringing more people into the conversation about internet security. We would love to see you there, but we won’t be able to if we don’t get enough votes – so help us out!

>>> Vote at this link or click the tile:

Vote to see my session at SXSW 2015!

Austin Technology Incubator’s SEAL Decision Day – This summer we were fortunate to be accepted into ATI’s Student Entrepreneur Accelerated Launch program (SEAL). The culminating event is called “Decision Day” and will be hosted online as a live streaming event – where you can be a part of the Q&A.

On Wednesday, September 10 at 6:30pm (Central) the presentations will begin. All are welcome to attend the whole event – Scalar Security will be last on the agenda at 8:20pm Central.

Coming soon… Scalar Demo – We’ll be rolling out an early version of our assessment tool, and are looking for people to test it out. In other words, we need you to break our stuff.

Sign up below!
