
Question


I need help in understanding and constructing: 1. a Scope Table with the most critical scenarios, 2. an LEF Table, 3. a Primary Loss Table, 4. an SLEF Table, and 5. a Secondary Loss Table for the following:

Inappropriate access privileges

Purpose

Determine the level of risk associated with inappropriate access privileges in a customer service application.

Background

During a recent audit, it was discovered there were active accounts in a customer service application with inappropriate access privileges. These accounts were for employees who still worked in the organization, but whose job responsibilities no longer required access to this information. Internal audit labeled this a high-risk finding.

Asset(s) at risk

The account privileges in question permit access to the entire customer database, comprising roughly 500,000 people. This information includes customer name, address, date of birth, and social security number. No banking, credit, or other financial information exists in these records.

TCom(s)

The primary threat community (TCom) is made up of employees whose accounts have inappropriate privileges in the application. Given that this group of people has access to and experience with the application, they are considered privileged insiders for the purpose of this analysis. You will sometimes get an argument that they aren't supposed to have access, so they shouldn't be labeled privileged insiders. Keep in mind that the label "privileged insider" is not about whether their privileges are approved or not; it's about the fact that they have logical or physical proximity to the assets in question, and they don't have to overcome resistive controls in order to do whatever you are concerned about them doing.

Another potential TCom to consider in this analysis would be nonprivileged insiders who gain illicit access to one of these accounts and leverage the inappropriate access in a malicious act. For example, John, who sits across from Debbie, might not have access to this application, but he knows that Debbie does. He knows this because she mentioned the other day how odd it was that her account could still get into the application 3 months after changing roles. He wants to gain access to the application, so he shoulder-surfs Debbie's password the day before she's supposed to go on vacation. The next day, he logs into her account and looks up personal information on a handful of people. He sells this information to someone he met in a bar. This scenario is certainly a possibility and can be scoped into the analysis as well.

Another potential TCom is cyber criminals. The thinking here is that one of these accounts could be compromised via malware that gives remote access to a cyber criminal. The cyber criminal leverages the inappropriate access to steal customer data. We'll discuss some considerations regarding each of these TComs in the Analysis section below.

Threat type(s)

The primary type of threat event here is clearly malicious. It is difficult to realistically imagine that someone with inappropriate access to an application they're no longer supposed to have access to would accidentally log into that application and do something that would inappropriately disclose customer information. However, there is a twist here. What about the possibility of an employee with inappropriate access logging into the application and just rummaging around looking up customer information out of boredom or curiosity, but not with an intent to harm (snooping, as it were)? That is absolutely a realistic scenario, and it's something the organization is not okay with, so the question boils down to whether we scope that separately from the truly malicious event.
Deciding whether to combine or separate scenarios like this typically boils down to whether there is likely to be a significant difference in the:

- Frequency of one scenario over the other
- Capability of the threat agents in one scenario versus the other
- Losses that would occur, or
- Controls that would apply

In this analysis, the capability of the threat agents is the same, so that wouldn't be a good differentiating factor. Likewise, the applicable controls should be the same. The losses that would occur might be different, as a malicious actor might on average take more information, and there's a much greater chance for customers to actually experience loss, which would increase secondary losses. There is also likely to be a higher frequency of events involving nonmalicious actors, because truly malicious acts tend to be less frequent than acts of misbehavior (there are more jaywalkers in the world than there are serial killers). For these reasons, it makes sense to have two distinct threat types for this analysis. We'll label them "malicious" and "snooping."

Threat effect(s)

The relevant threat effects in this scenario will depend on the type of privileges an account has. If an account has inappropriate read-only privilege, then the only threat effect in play is confidentiality. If an account has change or delete privileges, then integrity and availability come into play. As a result, unless you already know that inappropriate privileges are limited to read access, you'll need to include all three threat effect types.

Scope

Based on the considerations above, our scope table at this point looks like this (Table 8.1):

Table 8.1 The Scope Table for Level of Risk Associated with Inappropriate Access Privileges

| Asset at Risk | Threat Community | Threat Type | Effect |
| Customer PII | Privileged insiders | Malicious | Confidentiality |
| Customer PII | Privileged insiders | Snooping | Confidentiality |
| Customer PII | Privileged insiders | Malicious | Availability |
| Customer PII | Privileged insiders | Malicious | Integrity |
| Customer PII | Nonpriv insiders | Malicious | Confidentiality |
| Customer PII | Nonpriv insiders | Malicious | Availability |
| Customer PII | Nonpriv insiders | Malicious | Integrity |
| Customer PII | Cyber criminals | Malicious | Confidentiality |
| Customer PII | Cyber criminals | Malicious | Availability |
| Customer PII | Cyber criminals | Malicious | Integrity |

You'll notice that snooping is limited to confidentiality events. This is because we assume that as soon as someone illicitly changes or deletes a record, they've crossed the line into malicious intent. At this point, the scoping table consists of 10 scenarios. It would be nice if we could slim this down a bit by eliminating a few of these. The first and most obvious way to accomplish this is to find out whether the inappropriate privileges are limited to read-only, or whether they include change and delete privileges as well. Let's say for the purposes of this example that none of these accounts have delete privileges.
This being the case, our scope table now looks like this (Table 8.2):

Table 8.2 The Slimmed Scope Table

| Asset at Risk | Threat Community | Threat Type | Effect |
| Customer PII | Privileged insiders | Malicious | Confidentiality |
| Customer PII | Privileged insiders | Snooping | Confidentiality |
| Customer PII | Privileged insiders | Malicious | Integrity |
| Customer PII | Nonpriv insiders | Malicious | Confidentiality |
| Customer PII | Nonpriv insiders | Malicious | Integrity |
| Customer PII | Cyber criminals | Malicious | Confidentiality |
| Customer PII | Cyber criminals | Malicious | Integrity |

There's another very important consideration, though, that can help you skinny down the number of scenarios you need to analyze in a situation like this. Ask yourself what question these analyses are trying to answer. We know that inappropriate access privileges aren't a good thing, so that's not in question. In this case, what we are trying to understand is the level of risk associated with these inappropriate privileges so that we can accurately report it to management and appropriately prioritize it among all of the other risk issues the organization is faced with.

Our next step, then, is to look at the scenarios in our scope table and try to identify one or more scenarios that are likely to be much more (or less) frequent and/or much more (or less) impactful than the others. This is where your critical thinking muscles can get some serious exercise.

The first scenario that catches our eye in this regard is the one about cyber criminals/integrity. In our minds, there's very little likelihood that a cyber criminal is going to benefit from damaging the integrity of customer records. It's possible that their purpose is not financial gain, but rather to simply harm the company or individuals, but it seems a very remote probability that an actor with sufficient skills to gain this kind of access is going to have that focus. Furthermore, damaging or deleting records is much more likely to be recognized and reacted to than simply stealing data, and it seems especially unlikely that a cyber criminal would sacrifice their hard-won access in this manner. If the scenario were different, however, and instead of customer PII the information at stake was something a cyber criminal or other threat community would gain significantly from by damaging or deleting, then this scenario might make perfect sense. We are going to delete it from our scope, though.

As we look at our scenarios, it also seems to us that the frequency of nonprivileged insiders hijacking an account that has inappropriate privileges is likely to be much smaller than the malicious or abusive acts of privileged insiders. It also occurs to us that illicit actions by nonprivileged actors would take place against accounts with appropriate access privileges roughly 85% of the time, because there would be little reason for them to single out and attack an account that had inappropriate privileges. For these reasons, we suspect the frequency of privileged insider actions to be much higher than the frequency of nonprivileged insiders, so we'll remove the nonprivileged insider scenarios from scope, too. Now our table looks like this (Table 8.3):

Table 8.3 The Scope Table with Further Omissions

| Asset at Risk | Threat Community | Threat Type | Effect |
| Customer PII | Privileged insiders | Malicious | Confidentiality |
| Customer PII | Privileged insiders | Snooping | Confidentiality |
| Customer PII | Privileged insiders | Malicious | Integrity |
| Customer PII | Cyber criminals | Malicious | Confidentiality |

It's looking better all the time.
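The pruning we just walked through is mechanical enough to express in a few lines of code, which can be handy when a scope table is too large to manage by hand. Below is a minimal sketch (Python; the data structure and names are ours, not part of FAIR) that builds the 10 rows of Table 8.1 and applies the two pruning decisions that produced Tables 8.2 and 8.3.

```python
from itertools import product

# Full scope table (Table 8.1): the snooping row plus every
# TCom x effect combination for malicious events.
scenarios = [
    ("Customer PII", "Privileged insiders", "Snooping", "Confidentiality"),
] + [
    ("Customer PII", tcom, "Malicious", effect)
    for tcom, effect in product(
        ["Privileged insiders", "Nonpriv insiders", "Cyber criminals"],
        ["Confidentiality", "Availability", "Integrity"],
    )
]

# Pruning pass 1 (Table 8.2): no accounts have delete privileges,
# so availability drops out of scope.
scenarios = [s for s in scenarios if s[3] != "Availability"]

# Pruning pass 2 (Table 8.3): cyber criminals are unlikely to profit from
# integrity damage, and nonprivileged insiders rarely target the small
# subset of accounts with inappropriate privileges.
scenarios = [
    s for s in scenarios
    if not (s[1] == "Cyber criminals" and s[3] == "Integrity")
    and s[1] != "Nonpriv insiders"
]

for s in scenarios:
    print(" | ".join(s))  # prints the four rows of Table 8.3
```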
At this point, we aren't sure we're comfortable removing any more scenarios from our scope. That doesn't mean, however, that we have to analyze all four of these. Our approach now is to choose the one we believe will represent the most risk and analyze it. The results of that analysis may tell us everything we need to know to answer the question of how high this audit finding should stand in our list of priorities. The odds are decent that we will need to analyze more than one scenario, but the odds are at least as good that we won't have to analyze all four. This is not about being 100% comprehensive in our measurement of risk. This is about pragmatically reducing uncertainty and having enough information to effectively prioritize this issue against other things on the organization's to-do list.

Analysis

Now that scoping is done, we need to start performing our analyses. The most frequent scenario is almost certainly going to be snooping, but the most impactful is likely to be the cyber criminals because they are certain to try to run off with as much information as possible, which means more compromised records (insiders might do that too, but they are more likely to take fewer records). A cyber criminal compromise is also likely to have a greater impact because they are far more likely to leverage the customer information in a way that will harm customers, and because their compromise includes two control failures: failure of controls at the compromised workstation and in account privileges. Regulators and lawyers love a twofer. At this point, we are pretty sure we'll analyze both of these scenarios because they are so different from each other and because they both seem to have the potential for significant outcomes. We won't know for sure until we have made our estimates and run the numbers, but that's what our gut is telling us at this point.

TALKING ABOUT RISK

This is a good time to point out (if you had not noticed) that the analyst's intuition and experience, those more subjective but vital elements of us as humans, still play a critical role in risk analysis. It will never be a purely objective practice. What you'll find, though, is that the more you do these kinds of analyses, the more finely tuned your critical thinking and analytical instincts will become. You'll be able to glance at a scenario and have a pretty decent notion of how much risk it represents. You'll find that after you've completed the analysis, your initial gut feeling was in the right ballpark. However, there's a warning attached to this. You have to be careful not to let that upfront gut impression bias your analysis. It can happen. We think we know something and then we look for things that validate or confirm that initial starting point. We don't do this consciously; it's one of those cognitive biases we humans are subject to. This is another reason why it is so important to document the rationale behind your estimates. The process of documenting your rationale very often brings to the surface weaknesses in your thinking. It also provides an opportunity for others to examine your line of thinking and poke holes in it. For example, after reading this example analysis, some of you might decide you would approach it differently than we have, perhaps making different assumptions. That's fine. There is no perfect analysis, and we all bring our own strengths (and weaknesses) to the process.
However, consider the following: without an analytical model like FAIR and the rigor this process introduces, how likely is it that someone sticking a wet finger in the air is going to rate risk accurately? Yes, some people just have a gift for shooting from the hip with their analyses, but in our experience this is not the usual case, which is why we find so many holes in so many of the risk ratings we review.

Privileged insider/snooping/confidentiality

Loss event frequency

Note that we tend to work the loss event frequency (LEF) side of the equation first. That is not a requirement. We know people who prefer to start with loss magnitude, which they feel works better for them. You'll undoubtedly figure out your own approach.

The next decision we need to make is what level of abstraction we want to operate at on the LEF side of the ontology. Do we have sufficient historical loss data specific to this scenario that we can estimate LEF directly? Probably not in this scenario. We may have some data, though. A couple of employees were terminated in the past several years for misuse of privileges, which can be useful information, but it is not enough by itself for us to decide to make our estimates at the LEF level. That being the case, let's look at the next lower level in the ontology: threat event frequency (TEF) and vulnerability.

The first thing that should jump to mind is that we already know how vulnerable we are: 100%. These are privileged insiders, after all. It's great when one of the values is that easy to figure out. One of the things that being 100% vulnerable tells you is that every threat event is a loss event, which means that estimating TEF is the same thing as estimating LEF. That being the case, we are just going to estimate LEF directly for this scenario. Before we go on, we should point out that in scenarios like this you might find value in making your estimates all the way down at the contact frequency and probability of action level in the ontology. We very rarely do that, but it is an option when you need it.

As for data to help us with our estimates, we probably don't have confessions of the guilty to guide us, and we doubt that if we sent out a survey asking people who had inappropriate access whether they'd ever snooped around, we'd get very useful results. We do have some data that can help us, though, as you'll see. As you'll recall from the chapter on measurement, you always start with the absurd when making a calibrated estimate. For LEF, we could use the following as our starting point:

- Minimum: once every 1000 years
- Maximum: a million times a year

In other words, we are saying that the population of privileged insiders with inappropriate access privileges in this application abuse that access by snooping at least once every 1000 years, but no more than a million times a year. (It is critical to keep in mind that our TEF estimate is constrained to this population of privileged users with inappropriate access, rather than all privileged users on that application. This is a very common mistake that we see people make.) Now we have to decide whether we are at least 90% confident in that estimate by asking whether we'd rather bet on that range or spin the wheel in the hopes of winning the hypothetical $1000. We're probably going to bet on our range, but before we make that decision, there's something we've forgotten. Wouldn't it be a good idea to know the size of this user population? How many people does 15% of accounts represent?
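That population question is answered in a moment. First, it's worth pinning down the shortcut we just used: in the FAIR ontology, LEF is the product of TEF and vulnerability, so a vulnerability of 100% makes the two estimates interchangeable. A one-line sketch makes the point (Python; the function name is ours):

```python
def loss_event_frequency(tef: float, vulnerability: float) -> float:
    # FAIR: LEF = TEF x vulnerability, where vulnerability is the
    # probability that a threat event becomes a loss event.
    return tef * vulnerability

# Privileged insiders face no resistive controls, so vulnerability = 1.0,
# and any TEF estimate we make is simultaneously an LEF estimate.
assert loss_event_frequency(tef=0.1, vulnerability=1.0) == 0.1
```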
Let's say that in this organization, 15% represents 30 users. In other words, 30 accounts out of a total population of 200 accounts have inappropriate access privileges in the application. This data point doesn't guide us directly to our estimate, but it's helpful nonetheless. With the population of potentially abusive insiders defined, we can do some work on the maximum end of our estimated range. If, for example, we reduced our maximum value from 1,000,000 to 1000, would we still prefer to bet on our range or the wheel? A maximum estimate of 1000 abuses for a population of 30 employees seems high, but it raises a question. What constitutes an event? If an employee illicitly peeks at one customer record each day over the span of a week, is that one event with five compromised records, or five events with one compromised record each? The truth is, it usually doesn't matter which approach you choose; you just need to pick an approach and specify it as part of your rationale. You also need to remember the approach you have chosen because it will affect your loss magnitude estimates. For this example, we are going to consider each instance of snooping to be an event, regardless of how many records were affected (e.g., when a privileged insider illicitly views records on five separate days, it represents five distinct loss events).

With this approach in mind, how do we feel about our maximum estimate of 1000 versus spinning the wheel? Think about the 30 people involved. What are the odds that one or more of them do this and, if they do, how often would it be? Also think about the information they would be sneaking a peek at: name, address, social security number, and date of birth. It occurs to us (now) that absent malicious intent, what's the value in looking at this information? It isn't financial or medical information, which is more likely to stimulate voyeuristic viewing. With this question in mind, we reach out to HR again about the two people who were terminated. As it turns out, they were selling the information rather than just snooping in it, which belongs in the malicious scenario and not this one. This means the organization has no history of privileged insiders abusing their access to snoop around in this data. Of course, this isn't saying it hasn't happened. Nonetheless, when we think about the relatively small population of users and the low value proposition for voyeurism, we reconsider whether the likelihood of simple snooping is high enough to warrant spending more time on this scenario. We decide it's not, document this in our analysis rationale, and move on. Here's our scenario table now (Table 8.4):

Table 8.4 The Scope Table with the Most Important Scenarios

| Asset at Risk | Threat Community | Threat Type | Effect |
| Customer PII | Privileged insiders | Malicious | Confidentiality |
| Customer PII | Privileged insiders | Malicious | Integrity |
| Customer PII | Cyber criminals | Malicious | Confidentiality |

Note that we are not saying privileged insider/snooping does not or could not take place; we're simply prioritizing the time we spend doing analyses.

Privileged insider/malicious/confidentiality

Loss event frequency

Based on the new information from HR, we have decided to analyze the Privileged Insider/Malicious scenario next. This should go a bit faster because a little bit of the groundwork is already laid.
We know that we are 100% vulnerable to this threat community, that we are going to operate at the LEF level of the ontology, that the size of the user population is 30 people, and we have defined what an event looks like. We'll also start with the same absurd estimate for LEF:

- Minimum: once every 1000 years
- Maximum: a million times a year

Now let's see if we can leverage that HR information to help us with our frequency estimate. There were two of these people in the past 3 years, so maybe we can use that to help us nail down an estimated minimum frequency, i.e., two events every 3 years, or said another way, a frequency of 0.67. There's a problem with using that value as our minimum, though. Those two employees didn't have inappropriate access. Their access levels were exactly what they were supposed to be. As a result, they were members of the 170 people with legitimate access and not members of the user population we're focused on. We could, however, assume that the users with inappropriate access privileges have roughly the same rate of abuse as that broader population. Based on that assumption, we figure out that 15% of 0.67 comes to roughly 0.1, or once every 10 years. So how does that affect our estimates? Well, it's just one data point, so it could always be an outlier. Furthermore, these events could happen without ever being detected. Still, never let a good piece of data go to waste. Let's adjust our range to look like the following:

- Minimum: once every 20 years (0.05)
- Maximum: 100 times per year

You'll note that the minimum is actually lower than the 0.1 we came up with in the previous paragraph. We did this primarily because we don't like to narrow our ranges too fast. It's too easy to get aggressive with range narrowing and end up losing accuracy, especially with limited data. Remember, we need accuracy with a useful degree of precision. Accuracy rules. We can always narrow the range further if additional considerations or data suggest it's appropriate to do so. With the minimum where it's at, we'd still bet on that end of our range rather than the wheel. Given our definition of an event, we're also at least 95% confident in our maximum value (recall that we use 95% when we're evaluating our confidence in just one of the range values). We find it very difficult to believe that out of a population of 30 people with no criminal records (per HR hiring policy and background checks), there would be one or more people performing more than 100 malicious acts per year. The folks in HR agreed when we ran our thoughts past them. In fact, they suggested that the number was still too high, so we dropped it to 50. So now our range looks like this:

- Minimum: once every 20 years (0.05)
- Maximum: 50 times per year

We left that as our final range because we could not come up with rationale that allowed us to feel comfortable narrowing it further. It was an equivalent bet between the wheel and our range. It is a fairly wide range, but the simple fact is that we did not have enough data to narrow it further. This is common with privileged insider acts, where so little history exists. For our most likely estimate, we decided to simply go with what our single data point told us: 0.1. We couldn't think of any logical rationale to move up or down from that number. We'll choose a moderate confidence setting to reflect our single data point, as well as the other considerations and assumptions in play, in a combination of less-than-perfect confidence (Table 8.5):

Table 8.5 LEF Calculations

| LEF Minimum | LEF Most Likely | LEF Maximum | Confidence |
| 0.05 | 0.1 | 50 | Moderate |
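Here is the arithmetic behind that range as a short sketch (Python; variable names are ours). It reproduces the 0.67 events/year base rate from the HR data point and the roughly 0.1 events/year scaled value we used as the most likely estimate.

```python
# HR data point: 2 terminations for privilege abuse over 3 years, observed
# in the broader population of accounts with appropriate access.
base_rate = 2 / 3                 # ~0.67 abuse events per year

# Assumption (from the rationale above): accounts with inappropriate access
# abuse at roughly the same per-capita rate. They are 15% of all accounts
# (30 of 200), so we scale the observed rate accordingly.
inappropriate_share = 30 / 200    # 0.15
most_likely_lef = base_rate * inappropriate_share
print(round(most_likely_lef, 2))  # 0.1 -> about once every 10 years

# Final calibrated range, kept deliberately wide given a single data point:
lef_estimate = {"min": 0.05, "most_likely": 0.1, "max": 50,
                "confidence": "moderate"}
```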
Now that the frequency side of the equation is taken care of, it's time to think about loss magnitude, i.e., when/if this happens, how badly is it going to hurt?

Loss magnitude

The first thing we have to do is determine which of the six loss forms are likely to materialize as primary loss from this type of event.

Productivity loss. We are quite certain that there would be no operational disruption in revenue generation, nor would there be any reason for personnel to sit idle. As a result, productivity loss is not a factor in this event.

Response costs. There would certainly be time spent investigating and dealing with the event, so response costs are a factor.

Replacement costs. Likewise, we would clearly terminate a perpetrator, so there would be replacement costs associated with hiring a replacement (unless the organization decided not to backfill the position).

As with most scenarios, competitive advantage, fines and judgments, and reputation loss aren't relevant as forms of primary loss. Primary response costs for an incident like this tend to fall into three categories: (1) person-hours spent in meetings regarding the incident, (2) investigation into what transpired, and (3) dealing with law enforcement. The magnitude of these costs will depend to some degree on how much information we have about the event and how easy it is to get that information. If the perpetrator confesses and it's easy to know the extent of the compromise, the amount of effort spent in investigation will be smaller. It also helps if we have application logs that narrow down the number of records that were accessed, which can be cross-referenced against legitimate work the employee was doing to determine which customers were compromised. Law enforcement might also have information for us. If we have to bring in external forensic expertise to determine what went on, the dollars start stacking up fast. Breaking this down, then, we're going to make the following absurd starting estimate for person-hours:

- Minimum: 1 hour
- Maximum: one million hours

The "good news" here is that this type of incident has happened before, so we can draw from those experiences. After talking with people who were involved in those earlier incidents, we learned that person-hours involved in response for each incident were somewhere between 100 and 200. Nobody was tracking the number of people involved or the level of effort, so this boiled down to a best guess. Based on this information, we adjusted our estimate to the following:

- Minimum: 50 hours
- Maximum: 400 hours

The minimum value represents a case where there are almost no complicating factors. The perpetrator confesses and very little investigation is required. The maximum represents a worst case where the event is more complicated and it is not even clear who the perpetrator was. When considering best-case and worst-case conditions in this type of incident, these estimates are where we landed with an equivalent bet against the wheel. The range is broader than what was experienced in the two previous incidents because neither of those reflected what we considered to be best-case or worst-case conditions. For most likely values, we split the difference from the previous events (150 hours), which we thought was a reasonable representation of what to expect.
We did not choose high confidence for the most likely value, though, because there are too many uncertainties regarding complexity from one incident to another. We then multiplied these values by the organization's average loaded employee hourly rate of $55 to come up with the following primary response cost estimates (Table 8.6):

Table 8.6 Primary Response Estimates

| Loss Type | Minimum | Most Likely | Maximum | Confidence |
| Primary response | $2750 | $8250 | $22,000 | Moderate |

Replacement costs for a terminated employee are very easy to come by because, unfortunately, there's so much data. Our HR colleagues suggested the following values for replacement costs in our analysis (Table 8.7):

Table 8.7 Primary Response and Replacement Estimates

| Loss Type | Minimum | Most Likely | Maximum | Confidence |
| Primary response | $2750 | $8250 | $22,000 | Moderate |
| Primary replacement | $20,000 | $30,000 | $50,000 | High |

That's it for primary loss, which just leaves secondary loss.

Secondary loss

The first thing you need to do when thinking about secondary loss is identify the relevant secondary stakeholders. When dealing with customer information, two jump immediately to mind: the customers themselves, and regulators. Once you have the secondary stakeholders identified, you need to identify which forms of secondary loss you would expect to materialize. With that in mind:

Response. In an incident involving sensitive customer information, your response costs almost always include:

- The cost of notifying customers, and perhaps regulators
- The cost associated with increased volumes of customer support calls
- The cost of credit monitoring
- The cost of having people in meetings to strategize how to handle customers and regulators
- If the number of records is large enough, legal and PR costs as well

NOTE: Every time you have secondary loss of any form, you have response costs of some sort. Every time. If one person picks up a phone to answer a customer's call regarding the event, that cost is considered secondary response. If one meeting is held to strategize on how to notify or deal with the secondary stakeholders, those person-hours represent a secondary response cost.

Fines & judgments. There is always the potential for fines, judgments, or sanctions of some sort when dealing with sensitive customer information compromises.

Reputation. Reputation damage is also a potential outcome from any breach of sensitive customer information.

Productivity and competitive advantage losses do not typically result from these types of incidents. In fact, productivity loss rarely materializes as secondary loss. If you're surprised about competitive advantage being excluded, you might want to refer to the chapter on Common Problems. In that chapter, we explain why competitive advantage is so commonly misused. Secondary replacement costs usually only apply when the organization has to compensate for a secondary stakeholder's losses (e.g., replacing stolen funds from a bank account). Having established the relevant secondary loss forms, we can start estimating secondary loss event frequency (SLEF).
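Before moving to SLEF, here is the primary loss arithmetic from Tables 8.6 and 8.7 as a quick sketch (Python; names are ours): the person-hour range is converted to dollars at the $55 loaded hourly rate, and the HR-supplied replacement figures are carried alongside.

```python
LOADED_HOURLY_RATE = 55  # organization's average loaded employee rate, $/hour

# Calibrated person-hour estimates for incident response.
response_hours = {"min": 50, "most_likely": 150, "max": 400}

# Convert hours to dollars (Table 8.6).
primary_response = {k: hours * LOADED_HOURLY_RATE
                    for k, hours in response_hours.items()}
print(primary_response)  # {'min': 2750, 'most_likely': 8250, 'max': 22000}

# Replacement costs for a terminated employee, per HR (Table 8.7).
primary_replacement = {"min": 20_000, "most_likely": 30_000, "max": 50_000}
```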
Because SLEF is a percentage (i.e., the percentage of primary events that have secondary effects), your absurd starting points are always 0% and 100%. However, the good news (analytically) when dealing with sensitive customer information scenarios is that you almost always have to engage the customer if their private information has been breached (at least in the United States). This makes SLEF nearly 100%. We say nearly because there can be situations where notification is not required. Consequently, we used a minimum of 95% and a maximum of 100% for the ends of our distribution, and 98% for the most likely value. When a range is this narrow, we don't worry much about our choice of confidence level, and just leave it at moderate (Table 8.8):

Table 8.8 Secondary LEF Calculations

| SLEF Minimum | SLEF Most Likely | SLEF Maximum | Confidence |
| 95% | 98% | 100% | Moderate |

As we noted above, secondary response costs take various forms, each of which needs to be evaluated separately. In a customer information compromise analysis, you want to answer the question of the number of compromised records. Based on the way the application in this scenario functions, if you have access to one customer, you have access to them all. Consequently, the best-case breach involves just one customer and the worst case involves all 500,000. We can use those numbers as the minimum and maximum values. The most likely value was estimated to be toward the low end of the range for a couple of reasons. Both of the previous incidents involved relatively low numbers of affected customers (6 and 32), and the expectation is that most insiders are intent on minimizing their chances of being detected and don't want to try to fence half a million records. They're also more likely to be trying to satisfy a relatively marginal financial need. As a result, the most likely value was set at 20 records. The confidence level was set to moderate because of the inherent uncertainty regarding how many records a perpetrator might compromise.

- Compromised records minimum: 1
- Compromised records most likely: 20
- Compromised records maximum: 500,000
- Confidence level: Moderate

Customer notification costs are pretty easy to pin down, particularly if an organization has had to do it before, which most have (at least any that have been around a while and have a large number of customers). You'll want to get these numbers from your privacy office (if your organization has one) or whoever is responsible for notification. In this organization, notification costs were pretty well established for customers at $3 each. The organization has a boilerplate notification document already created, so there's very little effort required in customizing it. Note that this is the cost per notified customer and thus has to be multiplied by the number of affected customers.

- Customer notification cost minimum: $3 × 1 = $3
- Customer notification cost most likely: $3 × 20 = $60
- Customer notification cost maximum: $3 × 500,000 = $1,500,000
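These ranges (LEF, SLEF, record counts, per-record cost) are exactly the inputs a FAIR tool would feed into a Monte Carlo simulation. The sketch below (Python) reproduces the notification cost endpoints and then shows one way the ranges combine. The modified-PERT draw is our own modeling choice for illustration, since the text doesn't prescribe a distribution, and using one record-count draw per simulated year is a simplification.

```python
import random

NOTIFICATION_COST_PER_RECORD = 3  # established per-customer notification cost

# Deterministic endpoints, matching the bullets above.
for label, n in {"min": 1, "most_likely": 20, "max": 500_000}.items():
    print(f"notification cost {label}: ${n * NOTIFICATION_COST_PER_RECORD:,}")

def pert(low, mode, high, lam=4.0):
    """Modified-PERT draw: a scaled beta distribution that concentrates
    probability mass near the most likely value."""
    a = 1 + lam * (mode - low) / (high - low)
    b = 1 + lam * (high - mode) / (high - low)
    return low + random.betavariate(a, b) * (high - low)

def simulate_year():
    # (loss events/yr) x (share with secondary effects)
    #   x (records per event) x ($ per record)
    lef = pert(0.05, 0.1, 50)
    slef = pert(0.95, 0.98, 1.00)
    records = pert(1, 20, 500_000)
    return lef * slef * records * NOTIFICATION_COST_PER_RECORD

results = sorted(simulate_year() for _ in range(100_000))
median = results[len(results) // 2]
print(f"median simulated annual notification exposure: ${median:,.0f}")
```

In a full analysis, the same pattern extends to the other secondary loss forms the text lists (support calls, credit monitoring, fines and judgments, reputation), each with its own calibrated range.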

