Cyber Security

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze the relevant US laws relating to cybersecurity and the methods they allow for monitoring, apprehending, and prosecuting cybercriminals. The discussion also examines the problems that exist in apprehending and prosecuting a cybercriminal who resides in another country, and what can be done to improve international cooperation against cybercrime.

Cybercrime

Cybercrime is described by (McAfee, 2017) as relentless, undiminished, and unlikely to stop.  It is simply too easy and too rewarding, and the chance of being caught and punished is perceived as too low. Cybercriminals at the high end are as technologically sophisticated as the most advanced IT companies and have moved quickly to adopt cloud computing, artificial intelligence, Software-as-a-Service (SaaS), and encryption.  Table 1 summarizes the estimated daily cybercrime activity to illustrate the magnitude of cybercrime (McAfee, 2017).

Table 1.  Estimated Cybercrime Daily Activity (McAfee, 2017).

Cybercrime remains too easy because many technology users fail to take the most basic protective measures, and many technology products lack adequate defenses, while cybercriminals use both simple and advanced technology to identify targets, automate software creation and delivery, and monetize what they steal.  The monetization of stolen data, which has long been a problem for cybercriminals, has become less complicated because of improvements in cybercrime black markets and the use of digital currencies.  An example of a serious series of cyber attacks is the WannaCry ransomware of May 2017, a type of malware that encrypts the user's data and only releases it when a ransom has been paid.  This incident affected hundreds of thousands of computers across the globe.  The total cost of the WannaCry attacks, which the United States, the United Kingdom, and others attribute to the North Korean government, was estimated to exceed $1 billion.  WannaCry was soon followed by NotPetya/Petya, a destructive wiper-malware attack that wipes computers outright, destroying records on targeted systems without collecting a ransom.  These examples exemplify the serious global impact of cybercrime (Chernenko, Demidov, & Lukyanov, 2018).

The Impact of Cybercrime

The current estimated cost of cybercrime to the world has reached almost $600 billion, or 0.8% of global GDP, according to a new report by the Center for Strategic and International Studies (CSIS) and McAfee.  Relative to the worldwide internet economy, valued at $4.2 trillion in 2016, cybercrime can be viewed as a 14% tax on growth (McAfee, 2017).  Cybercrime technologies have become much more sophisticated, and governments need to implement new and more powerful technologies to fight this new breed of criminals (Janczewski, 2007).  There is a severe need to improve the current laws and regulations and international cooperation against cybercrime (Chernenko et al., 2018).

Laws and Legal Actions

Unlike the European Union, the US has no single federal law which regulates information security/cybersecurity and privacy throughout the country.  Several states have their own cybersecurity laws in addition to their data breach notification laws.  These areas are currently regulated by a patchwork of industry-specific federal laws and state legislation whose scope and jurisdiction vary.  The challenge of compliance for organizations which conduct business across all fifty states, and potentially across the world, is considerable. A summary of applicability, penalties, and compliance requirements is provided in (itgovernanceusa.com, 2018).

The International Organization for Standardization (ISO), often referred to as the International Standards Organization, joined with the International Electrotechnical Commission (IEC) to convert the British Standard 7799 (BS7799) into a new global standard, now referred to as the ISO/IEC 27000 series.  ISO 27000 is a security program development standard describing how to develop and maintain an information security management system (ISMS).  It involves a series of standards, each of which addresses a particular aspect of the ISMS.  ISO 27032 is a published cybersecurity guideline (Abernathy & McMillan, 2016).  Moreover, many enacted statutes address various aspects of cybersecurity; some of the notable provisions are addressed in (Fischer, 2014).

Moreover, recent legislation such as the European Parliament's 2016 directive on the security of network and information systems has taken cybercrime into account. This legislation focuses on threats to critical infrastructure and aims to improve cybersecurity measures to safeguard so-called essential services such as online marketplaces, search engines, and cloud computing services vital to businesses, governments, and individuals (Harris, 2018).

In the computer world, evidence of cybercrime can be difficult to properly obtain and preserve so that it will be admissible in a court of law.  Due to the nature of cybercrimes, most computer crime evidence is electronic, which can quickly be erased, modified, or tampered with.  After a computer crime such as a server attack is committed, an initial investigation by the network administrator can quickly ruin evidence the attacker left behind.  Thus, special procedures are required when acquiring and preserving evidence of a computer crime.  These procedures include preserving the incident environment, collecting evidence, accounting for data volatility, and retaining the chain of custody of the evidence.  Evidence collection is a very critical aspect of incident response. There could be physical evidence, logs, system images, screen captures, and camera video, depending on the type of cybercrime. Each piece of evidence needs to be carefully collected, preserved, and protected from tampering (Harris, 2018).
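
To illustrate one common preservation step, the following minimal Python sketch computes a SHA-256 hash of a collected evidence file so that its integrity can later be verified as part of the chain of custody.  The file path and examiner name are hypothetical placeholders, and this is only an illustrative sketch, not a substitute for a full forensic tool.

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_evidence_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of an evidence file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody_entry(path: str, examiner: str) -> dict:
    """Create a simple chain-of-custody record for the collected item."""
    return {
        "item": path,
        "sha256": hash_evidence_file(path),
        "collected_by": examiner,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Hypothetical evidence item; re-hashing the file later should produce the same digest.
    entry = record_custody_entry("server_access.log", "J. Doe")
    print(json.dumps(entry, indent=2))
```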

When evidence is collected and required for investigation and litigation, a legal hold is typically initiated.  A legal hold halts the backup and disposition processes and immediately places the personnel of the organization into data protection mode. Organizations must follow this procedure; otherwise, they risk losing data required to protect their legal position.  Organizations are responsible for acting as soon as possible to protect data which might become evidence. Thus, organizations should work with legal counsel to better understand legal holds and how to act appropriately to avoid any fines or sanctions (Harris, 2018).

International Cooperation against Cybercrime

Urgent measures which are needed to preserve data at the national level are also required within the framework of international cooperation.  Chapter III of the Convention on Cybercrime provides a legal framework for international cooperation with general and specific measures, including the obligation of countries to cooperate to the widest extent possible, urgent measures to preserve data and effective mutual legal assistance (Council-of-Europe, 2018).

There are three principles for international cooperation as provided for in Chapter III of the Convention on Cybercrime.  First, international cooperation is to be provided among parties to the broadest extent possible. This principle requires parties to provide extensive cooperation to each other and to minimize impediments to the smooth and rapid flow of information and evidence internationally.  The second principle extends the cooperation to all criminal offenses related to computer systems and data, as well as to the collection of evidence in electronic form related to any criminal offense.  The third principle states that the cooperation is to be carried out both through the provisions of Chapter III and through the application of relevant international agreements on international cooperation in criminal matters, arrangements agreed to on the basis of uniform or reciprocal legislation, and domestic laws (Council-of-Europe, 2018).

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide. Pearson IT Certification.

Chernenko, E., Demidov, O., & Lukyanov, F. (2018). Increasing International Cooperation in Cybersecurity and Adapting Cyber Norms. Council on Foreign Relations. Retrieved from https://www.cfr.org/report/increasing-international-cooperation-cybersecurity-and-adapting-cyber-norms

Council-of-Europe. (2018). International Cooperation Against Cybercrime. Retrieved from https://www.coe.int/en/web/cybercrime/international-cooperation

Fischer, E. A. (2014). Federal Laws Relating to Cybersecurity: Overview of Major Issues, Current Laws, and Proposed Legislation.

Harris, S. (2018). Mike Meyers’ CISSP Certification Passport.

itgovernanceusa.com. (2018). Federal Cybersecurity and Privacy Laws Directory. Retrieved from https://www.itgovernanceusa.com/federal-cybersecurity-and-privacy-laws.

Janczewski, L. (2007). Cyber warfare and cyber terrorism. IGI Global.

McAfee. (2017). The Economic Impact of Cybercrime – No Slowing Down. Retrieved from https://www.mcafee.com/us/resources/reports/rp-economic-impact-cybercrime-summary.pdf.

Proposal: Enterprise Security Plan

Dr. Aly, O.
Computer Science

Table of Contents

Phase 1:  Organization Description
Phase 1:  Risk Assessment
1.1  Risk Assessment Overview
1.2  Quantitative Risk Assessment and Analysis
1.2.1  Asset Valuation (AV)
1.2.2  Exposure Factor (EF): Loss Potential
1.2.3  Single Loss Expectancy (SLE)
1.2.4  Annualized Rate of Occurrence (ARO)
1.2.5  Annualized Loss Expectancy (ALE)
1.2.6  Calculating Annualized Loss Expectancy with a Safeguard
1.3  Qualitative Risk Analysis
1.4  Key Assets
1.5  Risk Assessment Tools
1.6  Cloud Computing Technology:  Virtual Private Cloud (VPC)
Phase 2:  Security Policy
2.1  CIA Triad
2.2  Additional Security Concepts
2.3  Building and Internal Security
2.4  Environmental Security
2.5  Equipment Security
2.6  Information Security
2.7  Protection Techniques
Phase 3:  Network Security
3.1  OSI Model vs. TCP/IP Model
3.2  Vulnerabilities of TCP/IP
3.3  Network and Protocol Security Techniques
3.3.1  Communication Protocol Security
3.3.2  Secure Network Components
3.3.2.1  Network Segmentation
3.3.2.2  Network Access Control
3.3.2.3  Firewalls
3.3.2.4  Endpoint Security
3.3.2.5  Other Network Devices
3.4  Healthcare Network Security Approach
Phase 4:  Incident Response, Business Continuity, and Disaster Recovery
4.1  Business Continuity Planning (BCP)
4.1.1  BCP Steps
4.1.1.1  Project Scope and Planning
4.1.1.2  Business Impact Assessment
4.1.1.3  Continuity Planning
4.1.1.4  BCP Documentation
4.2  Disaster Recovery Planning
4.2.1  Fault Tolerance and System Resilience
4.2.1.1  Hard Drives Protection Strategy
4.2.1.2  Servers Protection
4.2.1.3  Power Sources Protection Using Uninterruptible Power Supply
4.2.1.4  Trusted and Secure Recovery
4.2.1.5  Quality of Service (QoS)
4.2.1.6  Backup Plan
Phase 5:  System and Application Security
5.1  The Implementation of Secure System Design
5.1.1  Access Control Implementation
5.1.2  Closed and Open System Implementation
5.1.3  Techniques for Ensuring CIA Triad
5.1.3.1  Process Confinement Implementation
5.1.3.2  Physical vs. Logical Process Bounds Implementation
5.1.3.3  Isolation Implementation
5.1.4  Trusted System and Assurance
5.1.5  System Security Model
5.1.6  Control and Countermeasures Based on System Security Evaluation Models
5.1.7  Security Capabilities of Information Systems
5.1.8  Vulnerabilities Evaluation and Mitigation
5.1.8.1  Client-Based Vulnerabilities Evaluation and Mitigation
5.1.8.2  Server-Based Vulnerabilities Evaluation and Mitigation
5.1.8.3  Database Security Vulnerabilities Evaluation and Mitigation
5.1.8.4  Industrial Control Systems (ICS) Vulnerabilities Evaluation and Mitigation
5.1.8.5  Web-Based Systems Vulnerabilities Evaluation and Mitigation
5.1.8.6  Mobile Systems Vulnerabilities Evaluation and Mitigation
5.1.8.7  Embedded Devices and Cyber-Physical Systems Vulnerabilities Evaluation and Mitigation
5.2  System and Application Security Best Practice: Six Pillars
References

Phase 1:  Organization Description

            The organization used for this project is a healthcare organization called "ACME," with a headquarters facility located at 101 Hidden Street, Hidden State, 12345.  The headquarters is located on the north side of the state and has a total of 1,100 employees: 500 staff in areas such as IT and HR, 200 physicians across different medical fields such as pediatrics, and 400 nurses in various medical fields.   The headquarters facility of ACME serves more than 5,000 patients and has 100 physician rooms, which require 100 client PCs.

The organization is expanding and has started to establish two more healthcare facilities in the same state: one on the west side of the state and the other on the south side.  The new facilities will be limited to providing urgent care and medical services to patients, with a few staff employees for administration.  The new facility buildings are identical but in two different locations.  They are both identical to the headquarters in terms of design. However, the section for staff and administration, including HR and IT, in these new local facilities is smaller than the corresponding headquarters section, because the administration, IT, and HR of a local facility serve only that facility, while the administration and other departments such as IT at the headquarters are intended to serve the organization as a whole, including all its extensions.  Each building is divided into three sections: visitors, patients, and administration with staff employees. The three sections are connected to form one healthcare facility.  Each facility is expected to serve 5,000 patients with 100 offices for 200 physicians, with each physician room serving two physicians.

Security and privacy play a significant role in protecting patients' data from any intruder or unauthorized user.  The development of an Enterprise Security Plan is critical to ensure such protection for the patients, stakeholders, and employees.  Thus, this project is intended to develop a security plan not only for the new facilities but also for the headquarters, to make sure the same security measures and compliance requirements are adhered to and applied in all facilities.

Last year, the headquarters migrated its healthcare records to cloud computing using the Virtual Private Cloud (VPC) model, which is regarded as the most dominant approach to Trusted Computing Technology (Abdul, Jena, Prasad, & Balraju, 2014).  The VPC cloud computing deployment model is described as a virtual network dedicated to an Amazon Web Services (AWS) account.  The VPC is logically isolated from other virtual networks in the AWS cloud.  When employing the VPC, specific security configurations such as the selection of IP ranges and the creation of subnets, route tables, network gateways, and security groups must be implemented (Abdul et al., 2014).  When employing the VPC model, the data is placed in a private subnet for protection (Abdul et al., 2014).  The details of deploying the VPC as a cloud computing deployment model are discussed later in this project.
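
As a minimal illustration of these configuration steps, the following Python sketch uses the AWS boto3 SDK to create a VPC with a private subnet and a dedicated route table.  The CIDR blocks, region, and group names are hypothetical placeholders, and the sketch assumes valid AWS credentials; it is only an outline of the kind of configuration described above, not ACME's actual deployment.

```python
import boto3

# Assumed region and address ranges; replace with the organization's own values.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the logically isolated VPC.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# 2. Create a private subnet that will hold protected data stores.
private_subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24"
)["Subnet"]["SubnetId"]

# 3. Create a route table for the private subnet; because no route to an internet
#    gateway is added, instances in this subnet are not directly reachable from the internet.
route_table_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.associate_route_table(RouteTableId=route_table_id, SubnetId=private_subnet_id)

# 4. Create a security group that only allows HTTPS traffic from inside the VPC.
sg_id = ec2.create_security_group(
    GroupName="phi-data-tier",
    Description="Restrict access to the protected data tier",
    VpcId=vpc_id,
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
    }],
)

print(f"VPC {vpc_id} with private subnet {private_subnet_id} created")
```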

The organization is required to comply with and apply all security measures and to employ the VPC model in all locations, both the headquarters and the new facilities.  Moreover, policies such as HIPAA must be applied and adhered to in all facilities, in addition to the physical security measures needed to mitigate any risks and threats.  This project provides a comprehensive plan which discusses and analyzes the following major components of the Enterprise Security Plan:

Phase 1:  Risk Assessment

            Risk Assessments are not only required under HIPAA regulations but can also be a crucial tool for organizations as they develop stronger and more robust data security measures (Snell, 2016).   The Risk Assessment assists in ensuring all involved entities and people are compliant with HIPAA requirements in terms of physical, technical, and administrative safeguards (Snell, 2016).  Moreover, the Risk Assessment also assists in exposing the potential areas where organizations might be putting protected health information (PHI) at risk. The PHI and electronic PHI (ePHI) which any healthcare facility creates, receives, maintains, or transmits must be protected.  The Risk Assessment plays a significant role in this protection process (Snell, 2016).

1.1              Risk Assessment Overview

The purpose of this Risk Assessment is to provide risk analysis and risk management. The Risk Assessment includes systematic processes to identify risks and determine the consequences of such risks, and the techniques to deal with them (Aagedal et al., 2002).  The Risk Assessment goes through a process of context identification, risk identification, risk analysis, risk evaluation, determination of the risk acceptance level, and risk handling.  Figure 1 illustrates the general overview of the Risk Assessment process, adapted from (Aagedal et al., 2002).

Figure 1.  Risk Assessment Overview. Adapted from (Aagedal et al., 2002).

ACME formed a team whose role is to perform Risk Assessment and Analysis.  The team members are drawn from various departments within the ACME organization (Stewart, Chapple, & Gibson, 2015).  Risk management and analysis are ultimately exercises for upper management (Stewart et al., 2015).  All Risk Assessment results, decisions, and outcomes must be communicated to, comprehended by, and approved by upper management (Stewart et al., 2015).

ACME can use the Risk Assessment in both reactive and proactive modes.   The Risk Assessment must include every aspect of the organization, such as physical facilities, employees, computer configuration and setup, and IT technologies.  Since healthcare organizations are placing more resources toward information security (Cash & Branzell, 2017), the ACME organization plans to follow the same path and provide more support for leadership, program development, breach readiness, and security-program funding.

There is an increasing trend in the dedication of leaders toward security (Cash & Branzell, 2017).  For instance, more than 95% of organizations stated that there is a designated individual to oversee their security program (Cash & Branzell, 2017).   The need for dedicated leadership is becoming the norm for most healthcare organizations to ensure focused efforts to maintain a safe and secure data environment (Cash & Branzell, 2017).

Moreover, organizations with strong security planning, including risk assessments, have a focused education program for healthcare information users (Cash & Branzell, 2017).   There is a critical need for adequate education for employees (Cash & Branzell, 2017).  This need is demonstrated by the following statement: "The biggest exposure is the employees that have access to data.  There is a small percentage of people who could misuse their access or have malicious intentions. However, educating the workforce is probably the best thing we could do to ensure security procedures are being followed by our employees" (Cash & Branzell, 2017).   The majority of healthcare organizations follow the National Institute of Standards and Technology (NIST) framework for cybersecurity (Cash & Branzell, 2017).  As indicated in (Cash & Branzell, 2017), several provider organizations in healthcare use more than one framework, such as the HITRUST framework, to create a safe information-sharing environment.  HITRUST is a scalable, prescriptive, and certifiable framework (HITRUST, 2018).  It offers a tool called "MyCSF" which provides organizations of all types and sizes with a secure, web-based solution for accessing the HITRUST CSF, performing assessments, and managing compliance (HITRUST). Moreover, the HITRUST Cyber Threat Xchange (CTX) is described as analyzing cyber threats and distributing actionable indicators of compromise (IOCs) which organizations of various sizes and cybersecurity maturity can utilize to improve their cyber defenses (HITRUST, 2018).

            There is no 100% risk-free environment.  Even with the best security program and application, with prevention as the focus, there is still a chance of being breached (Cash & Branzell, 2017).   Organizations must have adequate, reliable detection systems with plans to respond to identified breaches.  Risk Assessment, Risk Analysis, and Risk Management are techniques that involve the implementation of process and policy reviews to ensure the organization complies with the required policies and regulations and maintains a secure environment.

Thus, this Risk Assessment and Analysis is intended to mitigate risks and threats.  It is the responsibility of upper management to determine what constitutes acceptable and unacceptable risk. To determine the acceptable risks, detailed and complex asset and risk assessments are required.  There are two main approaches to Risk Assessment and Analysis: one is quantitative and the other is qualitative.  The quantitative approach assigns real dollar figures to the loss of an asset, while the qualitative approach assigns subjective and intangible values to the loss of an asset.  Both methods are required for a complete Risk Assessment and Analysis.  Most organizations implement a hybrid of both risk assessment techniques to gain a balanced view of their security concerns.

1.2 Quantitative Risk Assessment and Analysis

There are six significant elements of quantitative risk analysis, as shown in Figure 2, adapted from (Stewart et al., 2015).

Figure 2.  Six Major Elements of Quantitative Risk Analysis. Adapted from (Stewart et al., 2015).

 1.2.1    Asset Valuation (AV)

            This step is an essential step in the Risk Assessment and Analysis to appraise the value of the organization's assets.  If an asset has no value, there is no need to consider it in the Enterprise Security Plan.   As indicated by (Stewart et al., 2015), the annual cost of safeguards should not exceed the expected annual cost of asset loss.

            The goal of asset valuation is to assign to each asset a specific dollar value that captures tangible costs as well as intangible ones.  The values in the following asset valuation are estimates.  Table 1 summarizes these assets of servers, clients, and printers and their estimated values.

Table 1:  Initial Inventory List of Tangible Assets.

The primary servers are located at the headquarters, and they will be configured to communicate with the cloud, as the PHI and ePHI will be stored in a protected storage area in the cloud using the Virtual Private Cloud deployment model.  Moreover, AWS storage gateway techniques will be implemented to provide seamless integration, with data security features, between the on-premise IT environment and the AWS storage infrastructure.  Details of the storage gateway technique are discussed later in this project under Phase 4.   Each server will be dedicated to a department.  For instance, the server for the HR department will be configured to access HR information from the cloud, with no access to other information such as PHI.  The client PCs for the physicians will be configured to access specific PHI, which will not be accessible by other clients such as the PCs for administration.  The IT servers will be configured to serve all departments, including the physicians, HR, PHI, and so forth, with very strict access limitations restricted to a very small number of IT personnel.

The network plays a significant role in protecting the PHI and ePHI from risks and threats.  Thus, the data center with the servers must have a network with protected IP addresses, using security measures for data at rest and data in transit.   The network between the data center and the cloud must be secured as well.  There are various networking techniques; however, the protection of the data in the cloud plays a significant role in protecting the data, and transferring the data back and forth between the data center and the cloud must be taken into consideration.    The details of the network are discussed and analyzed under Phase 3, Network Security.

            The intangible asset in ACME as a healthcare organization is the patients' information.  The PHI and ePHI are the most important and critical information, which must be protected and sealed off from unauthorized users.  Thus, to comply with HIPAA regulations and rules, a specific and articulated architecture for HIPAA in the cloud using AWS must be implemented.   In order to use AWS for HIPAA applications, the following general strategies must be implemented.  Details of the architecture for HIPAA applications are discussed later in Phase 5, Systems and Application Security.

  • Protected Data must be decoupled from processing/orchestration.
  • Data flows using automation must be tracked.
  • Logical boundaries between protected and general workflows must be implemented (AWS, 2018).

1.2.2    Exposure Factor (EF):  Loss Potential

The Exposure Factor reflects the percentage of loss which the organization would experience if a specific asset were violated by a realized risk.  In most cases, a realized risk does not result in a total loss of an asset.  The EF indicates the expected overall asset value lost because of a single realized risk.  The EF is usually small for assets which are easily replaceable, such as hardware.  Thus, the printers, client PCs, and servers, which can be replaced, will have a small EF.  However, the EF can be very large for assets which cannot be replaced or which are proprietary, such as product designs or a database of customers (Stewart et al., 2015).  In ACME's case, the patients' information cannot be replaced, and the sensitive nature of that information poses a high EF.   Since the EF is expressed as a percentage, the EF for replaceable assets can be set at 50%, and the EF for the patients' information at 100%.

1.2.3    Single Loss Expectancy (SLE)

            The EF is required to calculate the single loss expectancy (SLE), which is the cost associated with a single realized risk against a specific asset.  The SLE indicates the amount of loss ACME would experience if an asset were harmed by a specific threat occurring.  The SLE is calculated using the following formula:  SLE = AV * EF.  The SLE is expressed as a dollar value (Stewart et al., 2015).  Thus, the SLE for the tangible assets listed in Table 1 is calculated based on an EF of 50% and the AV of each asset, as illustrated in Table 2.  The intangible assets, reflected in the patients' information and all private and confidential information, have an EF of 100%.

Table 2.  Calculated SLE of the Threat for the Tangible Assets.

 1.2.4   Annualized Rate of Occurrence (ARO)

            The annualized rate of occurrence (ARO) is the expected frequency with which a specific threat or risk will occur within a single year.  The ARO can range from 0.0, indicating that the threat or risk will never be realized, to a very large number, indicating the threat or risk occurs often (Stewart et al., 2015).  The calculation of the ARO can be complicated, as it can be derived from historical records, statistical analysis, or guesswork (Stewart et al., 2015).  The ARO is also known as the "probability determination."  The ARO for some threats or risks is calculated by multiplying the likelihood of a single occurrence by the number of users who could initiate the threat (Stewart et al., 2015).  Thus, the ARO for the tangible assets can be set at 0.001, while the threats to the intangible assets such as patients' information are very likely, with an ARO approaching 1.0 (one expected occurrence per year).  Thus, the protection of patients' information has the highest priority in ACME as a healthcare organization.

1.2.5    Annualized Loss Expectancy (ALE)

            The annualized loss expectancy (ALE) is the possible yearly cost of all instances of a specific realized threat against a specific asset (Stewart et al., 2015).  The ALE is calculated using the following formula:  ALE = SLE * ARO (Stewart et al., 2015).   For instance, if the SLE for the tangible assets is $212,500, the ALE is calculated as $212,500 * 0.001, resulting in an ALE of $212.50.  Table 3 summarizes the ALE for the tangible assets; for the intangible asset of the patients' information, the exposure remains at 100% and carries the highest priority of protection.

Table 3.  Calculated ALE for the Tangible Assets. 

            These calculations of EF, SLE, ARO, and ALE for every asset and every threat/risk are required when using the quantitative risk assessment approach. However, there are quantitative risk assessment software tools which can simplify and automate much of this process.  These tools produce an asset inventory with valuations and, using predefined AROs along with some customization options, produce risk analysis reports. For this project, no such tool was available, so these calculations were developed manually.  However, for ACME as a healthcare organization, one of these tools should be used to perform these calculations and develop the risk analysis report.
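
Because the calculations in this project were performed manually, the following minimal Python sketch shows how the SLE = AV * EF and ALE = SLE * ARO formulas described above could be applied to a small asset list.  The asset names, values, EFs, and AROs are illustrative placeholders, not ACME's actual figures.

```python
# Hypothetical asset register; AV in dollars, EF as a fraction, ARO as expected occurrences per year.
assets = [
    {"name": "Client PCs", "av": 200_000, "ef": 0.50, "aro": 0.001},
    {"name": "Servers",    "av": 200_000, "ef": 0.50, "aro": 0.001},
    {"name": "Printers",   "av": 25_000,  "ef": 0.50, "aro": 0.001},
]

def single_loss_expectancy(av: float, ef: float) -> float:
    """SLE = AV * EF (dollar loss from one realized threat)."""
    return av * ef

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE * ARO (expected yearly dollar loss)."""
    return sle * aro

for asset in assets:
    sle = single_loss_expectancy(asset["av"], asset["ef"])
    ale = annualized_loss_expectancy(sle, asset["aro"])
    print(f"{asset['name']:<12} SLE=${sle:>10,.2f}  ALE=${ale:>8,.2f}")
```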

1.2.6  Calculating Annualized Loss Expectancy with a Safeguard

            To determine whether a safeguard is cost-effective, the ALE for the asset must be recalculated with the safeguard in place, which requires a new EF and ARO specific to the safeguard.   The value of a safeguard can then be expressed as the ALE before the safeguard, minus the ALE after the safeguard, minus the annual cost of the safeguard (ACS).  Since the hardware and other tangible assets can be replaced, their safeguard costs are expected to be less than the safeguard costs for the patients' information.  ACME started the migration of its data and applications to cloud computing using the VPC, which is regarded as the most dominant trusted cloud environment.  Thus, most of the safeguard cost is expected to relate to the patients' information as well as to developing the HIPAA architecture in the cloud.
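
As a small illustration of this cost/benefit logic, the sketch below computes the annual value of a hypothetical safeguard from the pre-safeguard ALE, the post-safeguard ALE, and the annual cost of the safeguard (ACS); all figures are invented for illustration only.

```python
def safeguard_value(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """Annual value of a safeguard = (ALE before) - (ALE after) - annual cost of safeguard."""
    return ale_before - ale_after - annual_cost

# Hypothetical figures: a control that reduces the expected yearly loss on patient data.
ale_before = 250_000.00   # expected annual loss without the safeguard
ale_after = 25_000.00     # expected annual loss with the safeguard in place
annual_cost = 60_000.00   # yearly cost of operating the safeguard

value = safeguard_value(ale_before, ale_after, annual_cost)
print(f"Annual value of the safeguard: ${value:,.2f}")  # a positive value suggests it is cost-effective
```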

1.3   Qualitative Risk Analysis

Since the ACME organization will employ both the quantitative and qualitative risk assessment and analysis approaches as a hybrid assessment, this section discusses and analyzes the requirements for the qualitative technique.  A balance of quantitative and qualitative approaches is highly recommended (Stewart et al., 2015).  The qualitative analysis requires ranking threats on a scale to evaluate their risks, costs, and effects.  The qualitative risk assessment and analysis involves judgment, intuition, and experience.  Techniques to perform qualitative risk assessment and analysis include brainstorming, the Delphi technique, focus groups, surveys, and checklists (Stewart et al., 2015).  The ACME organization plans to use the Delphi technique for its qualitative risk assessment and analysis.

1.4  Key Assets

Among these assets, there are vital assets which need to have higher priority than others.  These critical assets include all the servers at the headquarters and the devices which connect the clients to the servers.  These devices are connected using protected and encrypted links so that the PHI and other protected information in the cloud can be accessed securely.  Access to these devices should be limited to employees only.  Moreover, employees have various roles, and access to these devices should be limited according to each employee's role.  Badge access to the building is a very high priority.  Thus, when an employee leaves the organization or is terminated, there must be a technique to immediately revoke that employee's privileges to access the building and any device which is connected to the cloud environment.

1.5  Risk Assessment Tools

There are various Risk Assessment tools available which the ACME organization can utilize to create a more accurate and more detailed Risk Assessment plan.  These tools include Risk Vision, Onspring, Resolver, ETQ, and TrackTik.  The Risk Vision tool assists in protecting organizations from IT risks.  The Onspring tool assists in identifying, evaluating, and managing risk.  The Resolver tool is described as easy-to-use ERM software which engages risk owners, enables informed decision-making, and helps the organization achieve its objectives.   ETQ is a quantitative risk management technology which helps organizations make better decisions.

1.6 Cloud Computing Technology:  Virtual Private Cloud (VPC)

Cloud computing offers many advantages to consumers, such as increased utilization of hardware resources, reduced costs, easy deployment, and scalability up or down as needed (Kazim & Zhu, 2015).  Users can lease multiple resources based on their requirements and pay only for the services used (Kazim & Zhu, 2015).   Cloud computing enables the sharing of resources such as networks, storage, applications, and software through the internet (Adebisi, Adekanmi, & Oluwatobi, 2014; Avram, 2014; Aziz & Osman, 2016).  Cloud computing is an internet-based technology that has evolved in the IT field over the past few years and is increasingly gaining attention in the IT industry.  A survey conducted by the International Data Corporation (IDC) indicates that cloud computing is best suited for users and organizations who are seeking a quick solution for startups, such as developers, research projects, and even e-commerce entrepreneurs (Ramgovind, Eloff, & Smith, 2010).

Because of the security and privacy concerns involved in adopting cloud computing, the Virtual Private Cloud (VPC) was introduced to address issues related to the Public and Private Clouds (Botta, de Donato, Persico, & Pescapé, 2016; Sultan, 2010; Venkatesan, 2012).  The VPC takes advantage of technologies such as the Virtual Private Network (VPN), allowing the organization to set up its required network settings, including security (Botta et al., 2016; Sultan, 2010; Venkatesan, 2012; Zhang, Cheng, & Boutaba, 2010).  The VPC deployment model has dedicated resources and uses the VPN to provide isolation (Chen, Paxson, & Katz, 2010).

This deployment model is said to provide a secure and seamless bridge between an organization's IT infrastructure and the AWS Public Cloud (Dillon, Wu, & Chang, 2010).    The VPC is a combination of the Private Cloud and the Public Cloud (Dillon et al., 2010).  The service provider can only specify the security settings and the requirements to secure the environment; even for the VPC, the provider would not know whether the organization fully implemented these security measures (Nazir, Bhardwaj, Chawda, & Mishra, 2015; Padhy, Patra, & Satapathy, 2011).  The VPC is also described as a semi-private cloud with fewer resources and as an on-demand configurable pool of shared resources allocated within the cloud (Singh, Jeong, & Park, 2016).  As indicated in (Dillon et al., 2010), the VPC is worth investigating as a new cloud computing deployment model introduced by Amazon Web Services (AWS) that provides security to organizations.  It is logically isolated from other virtual networks in the AWS cloud.   Trust is becoming the primary concern of cloud computing consumers (Abdul et al., 2014).  The VPC is regarded as the most prominent approach to Trusted Computing Technology (Abdul et al., 2014).

Thus, the ACME organization migrated to the cloud computing VPC deployment model last year and is planning to use the same technology for the two new facilities.   The VPC deployment model offers security measures and a HIPAA architecture which can ensure more robust security compliance and application than a local data center.   As indicated in (Regola & Chawla, 2013), cloud computing can enable rapid provisioning of resources which can be configured to suit a variety of security requirements.  The VPC offers one possibility to quickly provision sensitive information with security measures to protect such data (Regola & Chawla, 2013).  Moreover, AWS also offers architecture strategies for HIPAA in the cloud.

 Phase 2:  Security Policy

            The Security Policy is a document which defines the scope of security needed by the organization.  It discusses the assets which require protection and the extent to which security solutions should go to provide the necessary protection (Stewart et al., 2015).  The Security Policy is an overview of the security requirements of the organization.  It should identify the major functional areas of data processing and clarify and define all relevant terminology.  It outlines the overall security strategy for the organization.  There are several types of Security Policies.  An issue-specific Security Policy focuses on a specific network service, department, function, or other aspect which is distinct from the organization as a whole. A system-specific Security Policy focuses on individual systems or types of systems and prescribes approved hardware and software, outlines methods for locking down a system, and may even mandate firewalls or other specific security controls.   Moreover, there are three categories of Security Policies: Regulatory, Advisory, and Informative.  The Regulatory Policy is required whenever industry or legal standards are applicable to the organization.  This policy discusses the regulations which must be followed and outlines the procedures that should be used to elicit compliance.  The Advisory Policy discusses behaviors and activities which are acceptable and defines the consequences of violations.  Most policies are of the advisory type.  The Informative Policy is designed to provide information or knowledge about a specific subject, such as company goals, mission statements, or how the organization interacts with partners and customers.  While Security Policies are broad overviews, the accompanying standards, baselines, guidelines, and procedures include more specific, detailed information on the actual security solution (Stewart et al., 2015).

The Security Policy should contain the security management concepts and principles with which the organization will comply.  The primary objectives of security are contained within the security principles reflected in the CIA Triad of Confidentiality, Integrity, and Availability.  These three security principles are the most critical elements within the realm of security.  However, the importance of each element in the CIA Triad depends on the requirements and the security goals and objectives of the organization.   The security policies must consider these security principles. Moreover, the Security Policy should also contain additional security concepts such as Identification, Authentication, Authorization, Auditing, and Accountability (Stewart et al., 2015).

2.1 CIA Triad

Confidentiality is the first security principle.  Confidentiality provides a high level of assurance that objects, data, or resources are restricted from unauthorized users.  For confidentiality to be maintained on the network, data must be protected from unauthorized access, use, or disclosure while in storage, in transit, and in process.  Numerous attacks focus on the violation of Confidentiality.  These attacks include capturing network traffic and stealing password files, as well as social engineering, port scanning, shoulder surfing, eavesdropping, sniffing, and so forth.  A violation of the Confidentiality principle can result from the actions of a system administrator or end user, an oversight in a security policy, or a misconfigured security control.  Numerous countermeasures can be implemented to reduce violations of the Confidentiality principle and ensure confidentiality against possible threats.  These security measures include encryption, network traffic padding, strict access control, rigorous authentication procedures, data classification, and extensive personnel training.

            Integrity is the second security principle: objects must retain their veracity and be intentionally modified only by authorized users.  The Integrity principle provides a high level of assurance that objects, data, and resources are unaltered from their original protected state.  Unauthorized modification should not occur to data in storage, in transit, or in processing.  The Integrity principle can be examined from three perspectives. The first is preventing unauthorized users from making any modification.  The second is preventing authorized users from making unauthorized modifications, such as mistakes.  The third is maintaining the internal and external consistency of objects so that the data is a correct and accurate reflection of the real world and any relationship, such as child, peer, or parent, is validated and verified.  Numerous attacks focus on the violation of the Integrity principle, using viruses, logic bombs, unauthorized access, errors in coding and applications, malicious modification, intentional replacement, and system backdoors.  A violation of Integrity can result from an oversight in a security policy or a misconfigured security control.  Numerous countermeasures can be implemented to enforce the Integrity principle and ensure integrity against possible threats. These security measures include strict access control, rigorous authentication procedures, intrusion detection systems, encryption, complete hash verification, interface restrictions, function/input checks, and extensive personnel training (Stewart et al., 2015).

            Availability is the third security principle, which grants timely and uninterrupted access to objects.  Availability provides a high level of assurance that objects, data, and resources are accessible to authorized users.   It includes efficient, uninterrupted access to objects and prevention of Denial-of-Service (DoS) attacks.  The Availability principle also means that the supporting infrastructure, such as communications, access control, and network services, is functional and allows authorized users to gain authorized access.  Numerous threats to Availability include device failure, software errors, and environmental issues such as flooding, power loss, and so forth.  They also include attacks such as DoS attacks, object destruction, and communication interruptions.  A violation of the Availability principle can occur as a result of the actions of any user, including administrators, an oversight in a security policy, or a misconfigured security control.  Numerous countermeasures can be implemented to ensure the Availability principle against possible threats.  These security measures include designing intermediary delivery systems properly, using access controls effectively, monitoring performance and network traffic, and using routers and firewalls to prevent DoS attacks.  Additional countermeasures for the Availability principle include implementing redundancy for critical systems and maintaining and testing backup systems.   Most security policies and Business Continuity Planning (BCP) focus on the use of fault tolerance features at the various levels of access, storage, and security, aiming to eliminate single points of failure to maintain the availability of critical systems (Stewart et al., 2015).  Figure 3 illustrates the CIA Triad which must be implemented in the Security Policy.

Figure 3.  Three Security Principles for the Security Policy.

            These three security principles drive the Security Policies of organizations. Some organizations, such as military and government organizations, tend to prioritize Confidentiality above Integrity, while private organizations tend to prioritize Availability above Confidentiality and Integrity.  However, this prioritization does not imply that the other principles are ignored or improperly addressed (Stewart et al., 2015).

2.2 Additional Security Concepts           

Additional security concepts must be considered in the Security Policy of the organization.  These security concepts are called the Five Elements of AAA Services.  They include Identification, Authentication, Authorization, Auditing, and Accountability.  Figure 4 illustrates these additional security concepts as part of the Security Policy, in addition to the CIA Triad.

Figure 4.  Five AAA Services Concepts for the Security Policy.

            Identification can include a username, swiping a smart card, waving a proximity device, speaking a phrase, or positioning a hand, face, or finger for a camera or scanning device.  The Identification concept is fundamental, as it ties access to the secured building or data to a specific claimed identity.   Authentication requires additional information from the users; its standard forms are passwords, PINs, passphrases, or security questions.  However, the fact that a user is authenticated does not mean the user is authorized.

The Authorization concept reflects the privileges which are assigned to the authenticated user.  The access control matrix is evaluated to determine whether the user is authorized to access specific data or objects.  The Authorization concept is implemented using access control models such as discretionary access control (DAC), mandatory access control (MAC), or role-based access control (RBAC) (Stewart et al., 2015).  Access Control is discussed in detail in the System and Application Security section of Phase 5 below.
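
To make the Authorization concept concrete, the following minimal Python sketch evaluates a simple role-based access control (RBAC) matrix of the kind described above.  The roles, resources, and permissions are hypothetical examples for a healthcare setting, not ACME's actual policy.

```python
# Hypothetical RBAC matrix: role -> resource -> set of allowed actions.
ACCESS_MATRIX = {
    "physician": {"ephi_records": {"read", "write"}},
    "nurse":     {"ephi_records": {"read"}},
    "hr_staff":  {"hr_records":   {"read", "write"}},
    "help_desk": {},  # no access to clinical or HR data
}

def is_authorized(role: str, resource: str, action: str) -> bool:
    """Return True only if the role has been explicitly granted the action on the resource."""
    return action in ACCESS_MATRIX.get(role, {}).get(resource, set())

# Usage: an authenticated nurse may read but not modify ePHI; the help desk has no access.
print(is_authorized("nurse", "ephi_records", "read"))      # True
print(is_authorized("nurse", "ephi_records", "write"))     # False
print(is_authorized("help_desk", "ephi_records", "read"))  # False
```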

            The Auditing (or Monitoring) concept is the programmatic technique through which a user's actions are tracked and recorded, while the user is authenticated on a system, so that the user can be held accountable for those actions.   Abnormal activities in a system are detected using Auditing and Monitoring.   The Auditing and Monitoring concept is required to detect malicious actions by users, attempted intrusions, and system failures, and to reconstruct events. It also provides evidence for prosecution and produces problem reports and analyses (Stewart et al., 2015).

            The Accountability concept is the last security concept which must be addressed in the Security Policy.  Accountability means that security can be maintained only if users are held accountable for their actions.  The Accountability concept is implemented by linking a human to the activities of an online identity through the security services of Auditing and Monitoring, Authorization, Authentication, and Identification.  Thus, Accountability is based on the strength of the Authentication process. Without a robust Authentication process and techniques, there will be doubt about Accountability.  For instance, if Authentication uses only a password, there is significant room for doubt, especially with weak passwords.  However, if the password is combined with multi-factor authentication, such as a smartcard or fingerprint scan, there is very little room for doubt (Stewart et al., 2015).

2.3 Building and Internal Security

            The Security Policy must address security access from outside to the building and from inside within the building.  Some employees are authorized to enter one part of the building but not others.  Thus, the Security Policy must identify the techniques and methods which will be used to ensure access only for authorized users.  Based on the design of the building discussed earlier, the employees will use the main entrance; there is no back-door entrance for employees.   A badge will be used to enter the building and also to enter the area each employee is authorized for.  Because this is a healthcare organization, visitors will have a separate entrance.  Visitors and patients will be checked in by the help desk and directed to the right place, such as pediatrics or the emergency department.  Thus, there will be two main entrances, one for employees and another for visitors and patients.   All equipment rooms must be locked at all times, and access to these equipment rooms must be controlled.  A strict inventory of all equipment must be kept so that any theft can be discovered.  Access to the data centers and server rooms must have more restrictions and stronger security than normal equipment rooms.  The data center must be secured physically with lock systems and should not have drop ceilings.  The work areas should be divided into sections based on the security access of employees.  For instance, help desk employees will not have access to the data center or server rooms.  Work areas should be restricted to employees based on their security access roles and privileges.  Any access violation will result in up to three warnings, after which a violation action will be taken against the employee, which can lead to separating the employee from the organization (Abernathy & McMillan, 2016).

2.4 Environmental Security

            Most considerations concerning security revolve around preventing mischief.  However, the security team is also responsible for preventing damage to data and equipment from environmental conditions, because this is part of the Availability principle of the CIA Triad.  The Security Plan should address fire protection, fire detection, and fire suppression, and all measures for fire protection, detection, and suppression must be in place; for example, no hazardous materials should be used.  Concerning the power supply, there are common power issues such as prolonged high voltage and power outages. Preventive measures to keep static electricity from damaging components should be observed.  Some of these measures include anti-static sprays, proper humidity levels, anti-static mats, and wristbands.  HVAC should be considered, not only for the comfort of the employees but also for the computer rooms, data centers, and server rooms.  Water leakage and flooding should be examined, and security measures such as water detectors should be in place.  Additional environmental alarms should be in place to protect the building from any environmental events that can damage the data center or server rooms.  The organization will comply with these environmental measures (Abernathy & McMillan, 2016).

2.5 Equipment Security

            The organization must follow procedures concerning equipment and media and the use of safes and vaults for protecting other valuable physical assets.  These procedures involve security measures such as tamper protection.  Tampering includes defacing, damaging, or changing the configuration of a device. Integrity verification measures should be used to look for evidence of data tampering, errors, and omissions.  Moreover, sensitive data should be encrypted to prevent the exposure of data in the event of theft (Abernathy & McMillan, 2016).

            An inventory of all assets should be performed, and the resulting list should be maintained and updated regularly.  The physical protection of security devices includes firewalls, NAT devices, and intrusion detection and prevention systems.  Tracking devices can be used to track a device that holds critical information.  With respect to protecting physical assets such as smartphones, laptops, and tablets, locking the devices is a good security technique (Abernathy & McMillan, 2016).

2.6 Information Security

With respect to Information Security, there are seven main pillars.  Figure 5 summarizes these pillars of Information Security for the healthcare organization.

  • Complete Confidentiality.
  • Available Information.
  • Traceability.
  • Reliable Information.
  • Standardized Information.
  • Follow Information Security Laws, Rules, and Standards.
  • Informed Patients and Family with Permission.

Figure 5.  Information Security Seven Pillars.

Complete Confidentiality is to ensure that only authorized people can access sensitive information about the patients.  Confidentiality is the first principle of the CIA Triad. The confidentiality of Information Security relates to information handled by the computer system, manual information handling, and communications among employees. The ultimate goal of confidentiality is to protect patients' information from unauthorized users.  Available Information means that healthcare professionals should have access to patients' information when needed.  This security feature is very critical in healthcare cases.  The healthcare organization should keep medical records, and the systems which store these records should be trustworthy.  The information should be available regardless of place, person, or time.   Traceability means that actions and decisions concerning the flow of information in the information system should be traceable through logging and documentation.  Traceability can be ensured by logging, supervision of the networks, and use of digital signatures.  The Auditing and Monitoring concept discussed earlier can enforce the Traceability goal.  Reliable Information means that the information is correct.  Having access to reliable information is very important in a healthcare organization; thus, preventing unauthorized users from accessing the information helps enforce its reliability.  Standardized Information reflects the importance of using the same structure and concepts when recording information.  The healthcare organization should comply with all standards and policies, including HIPAA, to protect patients' information.  Informed Patients and Family is essential to make sure they are aware of the patient's health status; the patient has to approve before any medical records are passed to any relatives (Kolkowska, Hedström, & Karlsson, 2009).
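
As an illustration of the Traceability pillar, the following minimal Python sketch creates audit-log entries whose integrity is protected with an HMAC signature, so that tampering with the recorded actions can be detected.  The signing key, user names, and actions are hypothetical; in practice the key would be stored in a protected key store rather than in code.

```python
import hmac
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical signing key; in production this would come from a protected key store.
AUDIT_KEY = b"replace-with-a-protected-secret"

def signed_audit_entry(user: str, action: str, resource: str) -> dict:
    """Build an audit-log entry and sign it with HMAC-SHA256 for tamper detection."""
    entry = {
        "user": user,
        "action": action,
        "resource": resource,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify_entry(entry: dict) -> bool:
    """Recompute the HMAC over the entry fields and compare it with the stored signature."""
    payload = json.dumps(
        {k: v for k, v in entry.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["signature"])

# Usage: record a read of a patient chart and later verify the entry has not been altered.
entry = signed_audit_entry("nurse_jdoe", "read", "patient/12345/chart")
print(verify_entry(entry))  # True unless the entry was tampered with
```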

 2.7 Protection Techniques

            The Security Policy should cover protection techniques and mechanisms for security control.  These protection techniques include multiple layers or levels of access, abstraction, data hiding, and encryption.  The multi-level technique is known as defense in depth, providing multiple controls in series.  This technique allows numerous and different controls to guard against threats.  When organizations apply the multi-level technique, most threats are mitigated, eliminated, or thwarted.  Thus, this multi-level technique should be applied in the healthcare organization.  For instance, a single entrance is provided which has several gateways or checkpoints that must be passed in sequential order to gain entry into active areas of the building.  The same concept of multi-layering can be applied to networks.  Single sign-on should not be used for all employees at all levels for all applications, especially in a healthcare organization; serious consideration must be given when implementing single sign-on because it eliminates the multi-layer security technique (Stewart et al., 2015).

            The abstraction technique is used for efficiency.   Elements that are similar should be classified and put into groups, classes, or roles that are assigned security controls, restrictions, or permissions as a collective.  Thus, the abstraction concept is used to define the types of data an object can contain and the types of functions that can be performed on it.  It simplifies security by assigning security controls to groups of objects collected by type or function (Stewart et al., 2015).

            Data hiding is another protection technique, used to prevent data from being discovered or accessed by unauthorized users.  Data hiding techniques include keeping a database from being accessed by unauthorized users and restricting users at a lower classification level from accessing data at a higher classification level.   Another form of data hiding is preventing an application from accessing hardware directly.  Data hiding is a critical element in security controls and programming (Stewart et al., 2015).

            Encryption is another protection technique, used to hide the meaning or intent of a communication from unintended recipients.  Encryption can take many forms and can be applied to every type of electronic communication, including text, audio, and video files, and applications.   Encryption is an essential element of security control, especially for data in transit. Encryption comes in various types and strengths, and each type is used for a specific purpose.   Examples include PKI and cryptographic applications, and cryptography with symmetric key algorithms (Stewart et al., 2015).
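
As a small illustration of symmetric encryption for protecting stored patient data, the following Python sketch uses the third-party cryptography library's Fernet construction (AES-based symmetric encryption) to encrypt and decrypt a record.  The key handling here is deliberately simplified and hypothetical; in practice the key would be managed by a key management service rather than generated and held in the application.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Hypothetical key handling: generate a symmetric key once and keep it in a key management system.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a (hypothetical) patient record before writing it to storage.
plaintext = b"patient_id=12345; diagnosis=..."
ciphertext = cipher.encrypt(plaintext)

# Only a holder of the same key can decrypt the record.
recovered = cipher.decrypt(ciphertext)
assert recovered == plaintext
print("record encrypted and decrypted successfully")
```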

Phase 3:  Network Security

            Computers and networks emerge from the integration of communication devices, processing devices, security devices, input and output devices, storage devices, operating systems, software, services, data, and people.   A good understanding of these elements plays a significant role in the implementation and maintenance of network security.  Two reference models describe network communication: the Open Systems Interconnection (OSI) model and the TCP/IP model.  Both models describe the process called packet creation, or encapsulation; a packet cannot be sent on the transmission medium until it has been created to hold the data.   The most widely used protocol suite is TCP/IP, which is based on the DARPA model.  The OSI model serves as an abstract framework or theoretical model for how protocols should function in an ideal world on ideal hardware (Stewart et al., 2015).

3.1 OSI Model vs. TCP/IP Model

            The International Organization for Standardization (ISO) created the OSI model to develop a set of protocols to be used as standards by all vendors.  The OSI model breaks the communication process into seven layers: application, presentation, session, transport, network, data link, and physical.  Each layer communicates with the layers directly above and below it; a layer does not change the data received from the layer above it, but it adds its own information as the packet is built.  The OSI model is an open network architecture guide providing a common foundation for the development of new protocols, networking services, and hardware devices (Armstrong, 2016; Stewart et al., 2015).  Figure 6 illustrates these layers and their protocols.

Figure 6.  OSI 7-Layer Model. Adapted from (Armstrong, 2016; Stewart et al., 2015)

            The TCP/IP model, also called the DARPA or DoD model, consists of only four layers as opposed to the seven layers of the OSI model.  These four layers are the application layer, transport layer, internet layer, and link layer.  Figure 7 compares the TCP/IP model with the OSI model (Stewart et al., 2015).

Figure 7.  Comparison between OSI Model and TCP/IP Model (Stewart et al., 2015).

3.2 Vulnerabilities of TCP/IP

            There are numerous vulnerabilities in TCP/IP, many due to improper implementations of TCP/IP stacks.  These vulnerabilities include buffer overflows, SYN flood attacks, various DoS attacks, fragment attacks, oversized packet attacks, spoofing attacks, man-in-the-middle attacks, hijack attacks, and coding error attacks.  Moreover, TCP/IP is also subject to passive attacks through monitoring or sniffing.  Network monitoring is the act of observing traffic patterns to obtain information about a network.  Packet sniffing is the act of capturing packets from the network to extract usernames, passwords, email addresses, encryption keys, credit card numbers, IP addresses, system names, and so forth.  Security measures must be implemented to mitigate and alleviate such vulnerabilities (Stewart et al., 2015).
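
To illustrate how little effort passive monitoring requires on an unprotected segment, the following minimal sketch assumes the third-party scapy package is installed and that the script runs with sufficient privileges; it only prints one-line packet summaries, but the same capture on an unencrypted segment could expose credentials, which is why encryption in transit matters.

    # Passive-monitoring sketch using the (assumed installed) scapy package; requires privileges.
    from scapy.all import sniff

    def show(packet):
        print(packet.summary())      # one-line description of each captured packet

    sniff(count=10, prn=show)        # capture ten packets from the default interface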

3.3 Network and Protocol Security Techniques

            Although TCP/IP is the primary protocol model used on most networks and on the Internet, it has numerous security deficiencies which need to be considered when developing network security.  Many sub-protocols, techniques, and applications have been developed to improve the security of TCP/IP and to protect data confidentiality, integrity, and availability. 

3.3.1 Communication Protocol Security

            Secure communication protocols are protocols which provide security services for application-specific communication channels.  Examples include Secure Sockets Layer (SSL), Simple Key Management for Internet Protocol (SKIP), Secure Remote Procedure Call (S-RPC), Software IP Encryption (swIPe), and Transport Layer Security (TLS).  SSL can be used to secure web, email, FTP, or Telnet traffic.  SSL is a session-oriented protocol which provides confidentiality and integrity and was deployed using 40-bit or 128-bit keys; it has been superseded by TLS.  SSL and TLS support secure client/server communication across an insecure network while preventing tampering, spoofing, and eavesdropping.  They also support one-way and two-way authentication using digital certificates.  They can be implemented at lower layers, such as layer 3 (the Network layer), to operate as a VPN; this implementation is known as OpenVPN.  TLS can also be used to encrypt UDP and Session Initiation Protocol (SIP) connections; SIP is a protocol associated with VoIP.  The Secure Electronic Transaction (SET) protocol was designed to secure the transmission of transactions over the Internet and had the support of major credit card companies such as Visa and MasterCard.  However, SET was not widely accepted on the Internet; instead, SSL/TLS-encrypted sessions are the preferred technique for secure e-commerce (Stewart et al., 2015).
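
As a small illustration of TLS in practice, the following sketch uses Python’s standard ssl and socket modules to open a TLS-protected connection and report the negotiated protocol version and cipher suite; the host name is only an example.

    # Minimal TLS client sketch using the Python standard library; the host is illustrative.
    import socket
    import ssl

    context = ssl.create_default_context()    # default settings: certificate validation, modern protocols

    with socket.create_connection(("www.example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
            print("Negotiated protocol:", tls_sock.version())   # e.g., "TLSv1.3"
            print("Cipher suite:", tls_sock.cipher())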

3.3.2 Secure Network Components

            While the Internet can be used for many information services and applications, including the Web, email, FTP, Telnet, chat, and so forth, it can also be used for malicious attacks.  Two related network segments are the intranet and the extranet.  The intranet is a private network designed to host the same information services found on the Internet, but privately.  The extranet is a cross between the Internet and an intranet: it is a section of the organization’s network which acts as an intranet for the private network but also serves information to the public Internet.  An extranet for public consumption is typically labeled a demilitarized zone (DMZ) or perimeter network (Stewart et al., 2015). 

3.3.2.1 Network Segmentation

The network is typically configured as segments, sub-divided into smaller units.  These units, segments, or subnetworks (subnets) can be used to improve various aspects of the network.  For instance, network segmentation can improve performance by placing systems which often communicate with each other in the same segment.  Moreover, network segmentation reduces congestion and contains communication problems such as broadcast storms.  Network segmentation can also improve security by isolating traffic and restricting user access to those segments which require specific authorization.  Network segments can be created using switch-based VLANs, routers, or firewalls.  Types of network segments include the private LAN or intranet, the DMZ, and the extranet.  Thus, during the design of the network security, several networking devices must be evaluated.  While not all devices are required for a secure network, the network devices can have a significant impact on network security (Stewart et al., 2015). 
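
A small sketch of how segmentation can be planned at the addressing level follows, using Python’s standard ipaddress module; the address block and the department names are assumptions for illustration only.

    # Sketch: carving a private address block into per-department subnets.
    # The 10.10.0.0/16 block and the department list are illustrative assumptions.
    import ipaddress

    block = ipaddress.ip_network("10.10.0.0/16")
    departments = ["Emergency Room", "Outpatient", "Radiology", "DMZ (public services)"]

    # One /24 subnet per department, so traffic can be isolated by VLANs, routers, or firewalls.
    for name, subnet in zip(departments, block.subnets(new_prefix=24)):
        print(f"{name:<25} {subnet}  (usable hosts: {subnet.num_addresses - 2})")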

3.3.2.2 Network Access Control

            The underlying concept of network access control (NAC) is to control access to an environment through strict enforcement of the security policy.  NAC is used to prevent or reduce malicious attacks, enforce the security policy throughout the network, and use identities to perform access control.  NAC acts as an automated detection and response system which can react in real time to stop threats as they occur and before they cause damage or a breach.  NAC can be implemented using two different techniques: pre-admission or post-admission.  The pre-admission technique requires a system to meet all current security requirements, such as patch application and anti-virus updates, before it is allowed to communicate with the network.  The post-admission technique allows or denies access based on user activity against a pre-defined authorization matrix (Stewart et al., 2015).
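
A minimal sketch of a pre-admission posture check follows; the thresholds (maximum patch age and anti-virus signature age), the endpoint name, and the dates are hypothetical assumptions, and real NAC products evaluate many more attributes.

    # Sketch of a pre-admission NAC posture check with assumed (hypothetical) thresholds.
    from dataclasses import dataclass
    from datetime import date, timedelta

    MAX_PATCH_AGE = timedelta(days=30)        # assumed policy: patched within the last 30 days
    MAX_AV_SIGNATURE_AGE = timedelta(days=7)  # assumed policy: AV signatures under a week old

    @dataclass
    class Endpoint:
        name: str
        last_patched: date
        av_signatures_updated: date

    def admit(endpoint: Endpoint, today: date) -> bool:
        """Return True only if the endpoint meets the assumed pre-admission policy."""
        patched_recently = today - endpoint.last_patched <= MAX_PATCH_AGE
        av_current = today - endpoint.av_signatures_updated <= MAX_AV_SIGNATURE_AGE
        return patched_recently and av_current

    laptop = Endpoint("nurse-laptop-07", date(2018, 5, 1), date(2018, 5, 20))
    print(admit(laptop, date(2018, 5, 22)))   # True only if both checks pass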

3.3.2.3 Firewalls

            Firewalls play a significant role in controlling network traffic.  A firewall is a network device used to filter traffic.  It is typically deployed between the private network and the link to the Internet, but it can also be deployed between departments within an organization.   The firewall filters traffic based on a defined set of rules, also called filters or access control lists, which distinguish authorized traffic from unauthorized traffic, including malicious attacks.   The firewall provides a barrier which blocks all unauthorized traffic; only authorized traffic is allowed across it.  The firewall can hide the structure and addressing scheme of a private network from the public.  Moreover, most firewalls offer extensive logging, auditing, and monitoring capabilities, alarms, and basic intrusion detection system (IDS) functions.  However, firewalls cannot block viruses or malicious code because they do not scan traffic as an anti-virus scanner does; companion products such as anti-virus scanners and IDS tools can be added, and there are firewall appliances which are pre-configured to perform most of these add-on functions natively.   Firewalls are only one part of an overall security solution, as they offer no protection against traffic within a subnet which resides behind the firewall, and firewall failures are mostly caused by human error or misconfiguration.  There are four types of firewalls: static packet-filtering firewalls, application-level gateway firewalls, circuit-level gateway firewalls, and stateful inspection firewalls.  Multi-level firewalls provide greater control over filtering traffic (Stewart et al., 2015). 

            Some firewall systems have more than one interface; a multihomed firewall, also known as a dual-homed firewall, must have at least two interfaces to filter traffic.  IP forwarding, which automatically passes traffic from one interface to another, is typically disabled on a multihomed firewall so that the filtering rules, rather than automatic routing, control the traffic.  A bastion host or screened host is a firewall system logically positioned between the private network and the Internet or other untrusted network.  Inbound traffic is routed to the bastion host, which acts as a proxy for all the trusted systems within the private network.  The bastion host is responsible for filtering traffic coming into the private network and for protecting the identity of the internal clients (Armstrong, 2016; Stewart et al., 2015).

            Firewall deployment architectures can be single-tier, two-tier, or three-tier.  The single-tier firewall deployment protects against generic attacks only and offers minimal protection.  The two-tier architecture has two different designs: one design uses a single firewall with three or more interfaces, while the other uses two firewalls in series, which allows for a DMZ or publicly accessible extranet.  Figure 8 illustrates the single-tier and two-tier firewall architectures.

Figure 8: Single-Tier and Two-Tier Firewall Architecture (Stewart et al., 2015).

            The three-tier firewall architecture deploys multiple subnets between the private network and the Internet, separated by firewalls.  Each successive firewall has stricter filtering rules to restrict traffic to only trusted sources.  The outermost subnet is the DMZ; the middle subnet can serve as a transaction subnet hosting the systems needed to support the complex web applications in the DMZ; and the third, back-end subnet supports the private network.  The three-tier architecture is the most secure, but it is also the most complicated to design, implement, and manage.  Figure 9 illustrates this three-tier firewall architecture, adapted from (Stewart et al., 2015).

Figure 9.  Three-Tier Firewall Architecture (Stewart et al., 2015).

3.3.2.4 Endpoint Security

            Each device must maintain local security whether or not its network channels also provide security; this is the concept of endpoint security, where the end device is responsible for its own security.  Lack of internal security causes more issues when remote access services, including wireless and VPN access, allow an external entity to reach the private network without having to go through the border security.  Endpoint security should therefore provide sufficient security on each host and device.  Every system should have an appropriate combination of a local host firewall, anti-malware scanners, authentication, authorization, auditing, spam filters, and IDS/IPS services.

3.3.2.5 Other Network Devices

            Several hardware devices are used when constructing a network.  Understanding these network components can assist in the design of network security, helping to avoid single points of failure and providing strong support for availability.   Repeaters, concentrators, and amplifiers operate at layer 1 of the OSI model; they are used to extend the maximum length of a specific cable type by deploying one or more of them along a lengthy cable run.  Hubs, which also operate at layer 1, connect multiple systems and network segments which use the same protocol.  Modems are traditional land-line communication devices which convert between an analog carrier signal and digital information to support computer communications over public switched telephone network lines.  Bridges, which operate at layer 2 of the OSI model, connect two networks, even networks of different topologies, cabling types, and speeds, provided the segments use the same protocol.  Switches know the addresses of the systems connected to each outbound port; they offer greater efficiency for traffic delivery, create separate collision domains, and improve the overall throughput of data, so switches or intelligent hubs should be used instead of regular hubs.  Routers control traffic flow on networks and are often used to connect similar networks and control traffic between them.   A brouter is a combination of a bridge and a router and operates primarily at layer 3 of the OSI model.  Gateways connect networks which use different network protocols and operate at layer 7 of the OSI model.  A proxy is a form of gateway which does not translate across protocols but serves as a mediator, filter, caching server, and even NAT/PAT server for a network (Armstrong, 2016; Stewart et al., 2015). 

3.4 Healthcare Network Security Approach

            The network security approach for the healthcare organization in this project is layered.  The layered approach has proven to be the most effective approach for protecting the network from intruders and malicious attacks (Sans, 2013).  A three-tier firewall architecture will be implemented to add more protection to the network security of the healthcare organization, and firewalls will also be placed between departments; thus, the outpatient department accesses patient data from the emergency room through a firewall.  The firewalls will use explicit permit and implicit deny rules in their ACLs.  They will also be configured to log all denied attempts and provide audit reports (Sans, 2013).
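
As a simplified illustration of how explicit permit and implicit deny rules behave, the following sketch evaluates packets against an ordered ACL and logs every denied attempt; the rule values and addresses are assumptions, not the organization’s actual policy.

    # Sketch: ordered ACL evaluation with explicit permits, implicit deny, and denial logging.
    # The rule list below is an illustrative assumption, not a real policy.
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("firewall")

    # Each rule: (source prefix, destination prefix, destination port, action)
    ACL = [
        ("10.10.2.", "10.10.1.", 443, "permit"),   # outpatient -> emergency room, HTTPS only
        ("10.10.3.", "10.10.1.", 443, "permit"),   # radiology  -> emergency room, HTTPS only
    ]

    def filter_packet(src_ip: str, dst_ip: str, dst_port: int) -> bool:
        for src_prefix, dst_prefix, port, action in ACL:
            if src_ip.startswith(src_prefix) and dst_ip.startswith(dst_prefix) and dst_port == port:
                return action == "permit"          # explicit permit (or explicit deny) rule matched
        log.info("DENY %s -> %s:%d (implicit deny)", src_ip, dst_ip, dst_port)   # audit trail
        return False                               # nothing matched: implicit deny

    print(filter_packet("10.10.2.15", "10.10.1.5", 443))   # True  (explicit permit)
    print(filter_packet("10.10.9.7",  "10.10.1.5", 22))    # False (implicitly denied and logged)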

            The threats to the network of a healthcare organization are real.  Thus, security measures must be implemented not only at the firewall level but at every endpoint component.  Switches and routers must be configured to support secure connectivity, perimeter security, intrusion protection, identity services, and security management.  Anti-virus packages must be updated regularly and maintained correctly.  Encryption must be implemented to ensure that data cannot be intercepted, tampered with, or read by anyone other than the authorized users.  VPNs must be implemented to enable physicians to connect securely to the network from virtually anywhere, such as a remote hospital or clinic.   Figure 10 illustrates the network architecture for the healthcare organization, adapted from (IBM, 2005).

Figure 10.  Healthcare Collaborative Network Architecture. Adapted from (IBM, 2005).

Phase 4: Incident Response, Business Continuity, and Disaster Recovery

Organizations and businesses are confronted with disasters, whether caused by nature, such as a hurricane or earthquake, or by human-made calamities, such as fire or burst water pipes.  Thus, organizations and businesses must be prepared for such disasters in order to recover and ensure business continuity in the midst of these sudden damages.  The critical importance of planning for business continuity and disaster recovery has led the International Information System Security Certification Consortium (ISC)² to include the two processes of Business Continuity and Disaster Recovery in the Common Body of Knowledge for the CISSP program (Abernathy & McMillan, 2016; Stewart et al., 2015).

4.1 Business Continuity Planning (BCP)

Business Continuity Planning (BCP) involves the assessment of the risks to organizational processes and the development of policies, plans, and processes to minimize the impact of those risks if they occur.   Organizations must implement BCP to maintain the continuous operation of the business if a disaster occurs.  BCP emphasizes keeping and maintaining business operations with reduced or restricted infrastructure capabilities or resources.  BCP can be used to manage and restore the environment; if the continuity of the business is broken, the business processes have ceased and the organization is in disaster mode, which is addressed by Disaster Recovery Planning (DRP).  The top priority of both BCP and DRP is always people: the main concern is to get people out of harm’s way, and then the organization can address IT recovery and restoration issues (Abernathy & McMillan, 2016; Stewart et al., 2015).

4.1.1 BCP Steps

As indicated in (Abernathy & McMillan, 2016), NIST Special Publication (SP) 800-34 Revision 1 (R1) describes seven steps.  The first step is the development of the contingency planning policy.  The second step is the Business Impact Analysis.  The third step is the identification of preventive controls.  The fourth step is the development of recovery strategies.  The fifth step is the development of the plan itself.  The sixth step covers testing, training, and exercises.  The last step is to maintain the plan.  Figure 11 summarizes these seven steps identified by NIST. 

Figure 11.  Summary of the Business Continuity Steps (Abernathy & McMillan, 2016).

In (Stewart et al., 2015), the BCP process involves four main steps to provide a quick, calm, and efficient response in the event of an emergency and to enhance the ability of the organization to recover from a disruptive event in a timely fashion.  These four steps are (1) Project Scope and Planning, (2) Business Impact Assessment, (3) Continuity Planning, and (4) Documentation and Approval (Stewart et al., 2015).  Figure 12 illustrates these four major elements of the BCP.

Figure 12.  Four Major Elements of the Business Continuity Planning (BCP).

4.1.1.1 Project Scope and Planning

Project Scope and Planning is the first step in the BCP and requires the use of a proven methodology.   This step should cover a structured analysis of the organization from a crisis planning perspective, the creation of the BCP team with the approval of senior management, an assessment of the resources available to participate in business continuity activities, and an analysis of the legal and regulatory landscape which governs the organization’s response to a catastrophic event.  It is very important to analyze the organization to identify all departments and individuals who have a stake in the BCP process; this analysis covers areas such as the operational departments, critical support services such as the IT department, and senior executives.  The identification of all stakeholders is critical for two reasons: it provides the groundwork required to help identify potential members of the BCP team, and it provides the foundation for the remainder of the BCP process.  This first step should also address BCP team selection and resource requirements, as the BCP team will need resources to perform the four major elements of the BCP process, including testing, training, and maintenance of the plan, and a full-scale implementation of the BCP when a disaster occurs.  Moreover, there may be legal and regulatory requirements which the organization must meet; if the business has service-level agreements (SLAs), the business continuity plan must allow the organization to comply with them as well (Stewart et al., 2015). 

4.1.1.2 Business Impact Assessment

The second element of the BCP is the Business Impact Assessment (BIA).  The BIA identifies the resources which are critical to the ongoing viability of the organization and the threats posed to those resources.  The assessment should include the likelihood that each threat will occur and the impact of such an occurrence on the business.   The result of the BIA should provide both quantitative measures and qualitative analysis.  The quantitative measures assist in prioritizing the commitment of business continuity resources to the various local, regional, and global risk exposures facing the organization.  Quantitative decision making involves the use of numbers and formulas to reach a decision, expressed in dollar value to the business; it begins by assigning an asset value (AV) in monetary terms to each asset.  A second quantitative measure is the maximum tolerable downtime (MTD), also known as the maximum tolerable outage (MTO), and the recovery time objective (RTO) is another metric defined for each business function.  A goal of the BCP process is to ensure that the RTOs are less than the MTDs.  Qualitative decision making takes non-numerical factors, such as emotions, the confidence of investors and customers, and the stability of the workforce, into account; its results are categories of prioritization such as high, medium, and low.  Both quantitative and qualitative analyses play a key role in the BCP process, as each adds distinct value to the business.  During this second step, the organization and the BCP team must identify not only the priorities of the business but also the risks, which involves the qualitative approach, and must evaluate the likelihood of the identified risks and their impact if they occur (Stewart et al., 2015). 
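
A minimal sketch of the quantitative side of a BIA follows, assuming made-up asset values, MTDs, and RTOs for a few business functions; it simply flags any function whose RTO exceeds its MTD and ranks the functions by asset value.

    # Sketch of quantitative BIA prioritization; all figures below are illustrative assumptions.
    # AV = asset value (USD), MTD = maximum tolerable downtime (hours), RTO = recovery time objective (hours).
    functions = [
        {"name": "Electronic health records", "av": 5_000_000, "mtd": 4,  "rto": 2},
        {"name": "Patient billing",           "av": 1_200_000, "mtd": 48, "rto": 24},
        {"name": "Appointment scheduling",    "av":   300_000, "mtd": 24, "rto": 36},  # RTO > MTD: gap
    ]

    # Rank by asset value so continuity resources go to the most valuable assets first,
    # and flag any function whose recovery objective cannot meet its tolerable downtime.
    for f in sorted(functions, key=lambda f: f["av"], reverse=True):
        status = "OK" if f["rto"] <= f["mtd"] else "GAP: RTO exceeds MTD"
        print(f'{f["name"]:<28} AV=${f["av"]:,}  RTO={f["rto"]}h  MTD={f["mtd"]}h  {status}')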

4.1.1.3 Continuity Planning

The third element of the BCP is Continuity Planning, which focuses on the development and implementation of a continuity strategy to minimize the impact of a risk, when it materializes, on the protected assets of the business.  This phase involves the development of the continuity strategy, the provisions and processes, plan approval, plan implementation, and training and education.   The strategy development bridges the gap between the business impact assessment and the continuity planning phases of BCP development; during it, the BCP team determines which risks will be addressed by the business continuity plan, based on the prioritized list of concerns raised by the quantitative and qualitative analyses.  The provisions and processes of the continuity planning phase involve the design of the specific procedures and techniques which will mitigate the risks deemed unacceptable during the strategy development stage.  People are the top priority and should always be in a safe place.  Buildings and facilities should be addressed in the continuity plan, including hardening provisions and alternate sites.  The infrastructure is the backbone of the business and consists of servers, workstations, and the critical communication links between sites; the BCP must address how these systems will be protected against the risks identified during the strategy development phase, including physically hardened systems and alternative systems.  After the completion of these steps, the continuity plan should be documented and approved.  After approval, the organization should implement the continuity plan to validate it and provide training to all personnel (Stewart et al., 2015).

4.1.1.4 BCP Documentation

The last phase of the BCP is documentation.  Documentation is a critical step to ensure that BCP personnel have a written continuity document to reference in the event of an emergency, to provide a historical record of the BCP process, and to force the team members to commit their thoughts to paper.  The major components of the BCP documentation include the continuity planning goals, the statement of importance, the statement of priorities, the statement of organizational responsibility, the statement of urgency and timing, the risk assessment, risk acceptance and mitigation, the vital records program, the emergency-response guidelines, and maintenance and training (Stewart et al., 2015).  Figure 13 summarizes these elements of the BCP documentation.

Figure 13.  A Summary of the Elements to be Documented in the BCP.

4.2 Disaster Recovery Planning

            In case a disaster occurs, the organization must have in place a strategy and plan to recover from it.  Organizations and businesses are exposed to various types of disasters, which are generally categorized as either natural or human-made.  Natural disasters include earthquakes, floods, storms, hurricanes, volcanoes, and fires.  Human-made disasters include intentionally set fires, acts of terrorism, explosions, and power outages.  Other disasters can be caused by hardware and software failures, strikes and picketing, and theft and vandalism.  Thus, the organization must be prepared and ready to recover from any disaster.  Moreover, the organization must document the Disaster Recovery Plan and provide training to its personnel (Stewart et al., 2015).   

4.2.1 Fault Tolerance and System Resilience

            The security CIA Triad involves Confidentiality, Integrity, and Availability; fault tolerance and system resilience directly affect the Availability element of the Triad.  The underlying concept behind fault tolerance and system resilience is to eliminate single points of failure.   A single point of failure is any component whose failure can cause the entire system to fail.  For instance, if a computer has its data on a single disk, the failure of that disk can cause the computer to fail, so the disk is a single point of failure.  Similarly, when a single database serves multiple web servers, the database becomes a single point of failure (Stewart et al., 2015).  

            Fault tolerance is the ability of a system to suffer a fault but continue to operate.  Fault tolerance is implemented by adding redundant components, such as additional disks within a redundant array of independent (sometimes inexpensive) disks (RAID), or additional servers within a failover clustered configuration.  System resilience is the ability of a system to maintain an acceptable level of service during an adverse event and to return to its previous state.  For instance, if a primary server in a failover cluster fails, fault tolerance ensures failover to another system; system resilience means that the cluster can fail back to the original server after it has been repaired (Stewart et al., 2015).

4.2.1.1 Hard Drives Protection Strategy

            The organization must have a plan and strategy to protect the hard drives from becoming single points of failure and to provide fault tolerance and system resilience.  The standard technique is to add a redundant array of independent disks (a RAID array).  A RAID array includes two or more disks, and most RAID configurations will continue to operate even after one of the disks fails.  There are various types of arrays, and the organization must select the proper RAID level for the required fault tolerance and system resilience.    Figure 14 summarizes some of the conventional RAID configurations. 

Figure 14. A Summary of the Common RAID Configurations.
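
To make the trade-off between the common RAID levels concrete, the following sketch computes usable capacity for equal-sized disks under the widely documented RAID-0, RAID-1, RAID-5, and RAID-10 layouts; the disk count and disk size are assumptions.

    # Sketch: usable capacity of common RAID levels for n equal-sized disks (assumed values).
    def usable_capacity(level: str, disks: int, size_tb: float) -> float:
        if level == "RAID-0":                 # striping only: no redundancy, full raw capacity
            return disks * size_tb
        if level == "RAID-1":                 # mirroring: capacity of a single disk
            return size_tb
        if level == "RAID-5":                 # striping with parity: one disk's worth lost to parity
            return (disks - 1) * size_tb
        if level == "RAID-10":                # mirrored stripes: half the raw capacity
            return disks * size_tb / 2
        raise ValueError(f"Unknown RAID level: {level}")

    for level in ("RAID-0", "RAID-1", "RAID-5", "RAID-10"):
        print(level, usable_capacity(level, disks=4, size_tb=2.0), "TB usable of 8.0 TB raw")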

4.2.1.2 Servers Protection

            Fault tolerance can also be added to critical servers with failover clusters.  A failover cluster includes two or more servers or nodes; if one server fails, another server in the cluster can take over its load automatically through the failover process.  Failover clusters can also provide fault tolerance for multiple devices or applications.  A typical fault-tolerant topology includes multiple web servers behind a network load balancer, multiple database servers at the back end, also load balanced, and RAID arrays for storage redundancy.  Figure 15 illustrates a simple failover cluster with network load balancing, adapted from (Stewart et al., 2015).

Figure 15.  Failover Cluster with Network Load Balancing (Stewart et al., 2015).

4.2.1.3 Power Sources Protection Using Uninterruptible Power Supply

            The organization must also protect power sources with an uninterruptible power supply (UPS), a generator, or both to ensure a fault-tolerant environment.   The UPS provides battery-supplied power for a short period, roughly 5 to 30 minutes, while a generator provides long-term power.  The goal of using a UPS is to provide power long enough to complete a logical shutdown of a system, or until a generator is powered on and provides stable power. 

4.2.1.4 Trusted and Secure Recovery

The organization must ensure that the recovered environment is secure and protected against malicious attacks.  Thus, the system administrator, together with the security professional, must ensure that the system can be trusted by its users.  A system can be designed to fail into a fail-secure state or a fail-open state: a fail-secure system defaults to a secure state in the event of a failure, blocking all access, while a fail-open system fails into an open state, granting access to all users.  In a critical healthcare environment, fail-secure should be the default configuration, and after the failure the security professional can restore access using an automated process that applies the access controls identified in the security plan (Stewart et al., 2015). 

4.2.1.5 Quality of Service (QoS)

            Quality of Service (QoS) controls protect the integrity of the data network under load.  QoS attempts to manage factors such as bandwidth, latency, the variation in latency between packets (known as jitter), packet loss, and interference.  QoS systems often prioritize certain traffic types which have a low tolerance for interference and high business requirements (Stewart et al., 2015). 

4.2.1.6 Backup Plan

            The organization must implement a disaster recovery plan which covers the details of recovering the systems and environment in case of failure.  The disaster recovery plan should be designed to allow recovery even in the absence of the DRP team, by allowing the people on the scene to begin the recovery effort until the DRP team arrives. 

            The organization must engineer and develop the DRP so that the business units and operations with the highest priority are recovered first.  Thus, the priority of business operations and units must be identified in the DRP, and all critical business operations must be given top priority and recovered first.   The organization must also account for the panic associated with a disaster: personnel must be trained to handle the disaster recovery process properly and reduce that panic.   Moreover, the organization must establish internal and external communications for use during disaster recovery so that people can communicate throughout the recovery process (Stewart et al., 2015).  

            The DRP should address the backup strategy and plan in detail.  There are three types of backups: full, incremental, and differential.  A full backup stores a complete copy of the data contained on the protected devices; it duplicates every file on the system.  An incremental backup stores only those files which have been modified since the most recent full or incremental backup.  A differential backup stores all files which have been modified since the most recent full backup.  It is very important for the healthcare organization to employ more than one type of backup; a full backup over the weekend and incremental or differential backups on a nightly basis should be implemented as part of the DRP (Stewart et al., 2015).
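
A minimal sketch of how the three backup types differ in which files they select, based only on file modification times, follows; the directory path and reference timestamps are assumptions for illustration.

    # Sketch: selecting files for full, incremental, and differential backups by modification time.
    # The directory and reference timestamps below are illustrative assumptions.
    import os
    from datetime import datetime

    def select_files(root: str, mode: str, last_full: datetime, last_backup: datetime) -> list:
        """Return the files that the given backup mode would copy."""
        selected = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                modified = datetime.fromtimestamp(os.path.getmtime(path))
                if mode == "full":                                          # everything, every time
                    selected.append(path)
                elif mode == "differential" and modified > last_full:       # changed since last full
                    selected.append(path)
                elif mode == "incremental" and modified > last_backup:      # changed since last backup of any kind
                    selected.append(path)
        return selected

    last_full = datetime(2018, 5, 20, 1, 0)      # assumed weekend full backup
    last_backup = datetime(2018, 5, 23, 1, 0)    # assumed most recent nightly backup
    print(len(select_files("/var/hospital/records", "incremental", last_full, last_backup)))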

Phase 5:  System and Application Security

            Security should be considered at every stage of the development of a system.  Moreover, developers and programmers should strive to build security into every application they develop, with higher levels of security provided to critical applications and to those that process sensitive information.  It is critical to consider the security implications of a development project from its early stages because it is much easier to build security into a system than to add security to an existing system. 

Moreover, the software application is a critical component of system security.  Software applications often have privileged access to the operating system, hardware, and other resources.  They routinely handle sensitive information, including credit card numbers, Social Security numbers, healthcare information, and proprietary business information.  Many software applications rely on databases which also contain sensitive information.  Software is the core of the modern enterprise and performs business-critical functions, and any software failure can disrupt the business with severe consequences.  Thus, software developers must integrate security into the application, and careful testing of the software is essential to meet the security requirements of the CIA Triad.   The organization must apply all appropriate security measures to these applications to ensure that they will not become a vulnerable point in the systems.   These security measures range from encryption, to Secure Sockets Layer (SSL)/TLS, to digital certificates and multi-factor authentication techniques.

System security entails various components, such as the implementation of secure system design, the implementation of the appropriate system security model, the implementation of controls and countermeasures based on system security evaluation models, and the implementation of the security capabilities of information systems. 

5.1 The Implementation of Secure System Design

There are essential security design principles which should be implemented and managed in the engineering process of a hardware or software project.  These principles include access controls, closed and open systems, techniques for ensuring the security requirements of the CIA Triad, and trust and assurance.   

5.1.1 Access Control Implementation

Access control techniques must be implemented to ensure the security of a system.  Access rules are used to limit a user's access to resources in the system; they state which resources or objects are valid for each user or application.  An object or resource may be valid for one type of access or application but not for another.  For example, a file in the system can be protected from modification by making it read-only for most users but read-write for the small set of users or group of users who have the authority to modify the file. 

Access control (AC) can be either mandatory, known as MAC, or discretionary, known as DAC.  With MAC, the static attributes of the user and the resource are considered to determine the permissibility of access: each user and application has attributes which define its clearance or authority to access resources, and each object or resource has attributes which define its classification.  Rule-based access control uses predefined rules stating which users can access which objects or resources.  DAC, in contrast, allows the user or application to define a list of resources to access as needed; the access control list serves as a dynamic access rule set which the user can modify, and the constraints imposed on users and applications relate to their identity and attributes.  Users and applications may be allowed to add or modify the rules which define access to resources and objects based on their identity and attributes.  Both MAC and DAC limit the access of users and applications to resources and objects in the system.  The primary goal of access control is to ensure the confidentiality and integrity of data by disallowing unauthorized access, whether attempted by authorized or unauthorized users and applications (Abernathy & McMillan, 2016; Stewart et al., 2015).
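
A minimal sketch of discretionary access control follows, in which the owner of an object maintains its access control list; the user names and permissions are assumptions for illustration.

    # Sketch of discretionary access control: the object's owner edits its ACL.
    # User names and permissions are illustrative assumptions.
    class ProtectedFile:
        def __init__(self, owner: str):
            self.owner = owner
            self.acl = {owner: {"read", "write"}}      # owner starts with full access

        def grant(self, requester: str, user: str, permission: str):
            if requester != self.owner:                # only the owner may change the ACL (DAC)
                raise PermissionError("only the owner may modify the ACL")
            self.acl.setdefault(user, set()).add(permission)

        def allowed(self, user: str, permission: str) -> bool:
            return permission in self.acl.get(user, set())

    record = ProtectedFile(owner="dr_smith")
    record.grant("dr_smith", "nurse_jones", "read")    # owner grants read-only access
    print(record.allowed("nurse_jones", "read"))       # True
    print(record.allowed("nurse_jones", "write"))      # False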

5.1.2 Closed and Open System Implementation

Systems are designed to be either closed or open.  A closed system is designed to work well with a narrow range of other systems from the same manufacturer, and the standards for closed systems are often proprietary and not normally disclosed.  An open system is designed using agreed-upon industry standards and is much easier to integrate with systems from different manufacturers which support the same standards. 

A closed system can be more secure, but it is harder to integrate with unlike systems.  A closed system often comprises proprietary hardware and software which does not incorporate industry standards, and this lack of easy integration also makes malicious attacks more difficult: attacking a closed system is harder than attacking an open system.  An open system is generally much easier to integrate with other open systems; for instance, it is easy to create a LAN with a Microsoft Windows Server machine, a Linux machine, and a Macintosh machine because they are all open systems designed using agreed-upon industry standards.  This easy integration comes with a price: because standard communication components are incorporated into each of these open systems, there are more predictable entry points and methods for launching attacks.  The open system is therefore more vulnerable to attack than the closed system, and its widespread availability makes it possible for attackers to find plenty of potential targets.  Thus, security measures for open systems must be implemented with care so that attackers cannot easily find their way into the system (Abernathy & McMillan, 2016; Stewart et al., 2015). 

5.1.3 Techniques for Ensuring CIA Triad

Organizations must ensure that all components that have access to data are secure and well behaved in order to meet the security requirements of the CIA Triad.   Software designers use different techniques to ensure that programs do only what is required and nothing more.  Thus, organizations must implement additional techniques such as confinement, bounds, and isolation.  These three concepts make the design of secure programs and operating systems more difficult, but they also make it possible to implement more secure systems (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.1.3.1 Process Confinement Implementation

Software developers use process confinement to restrict the actions of a program or application.  Process confinement allows a process to read from and write to only certain memory locations and resources.  This approach is also known as sandboxing: the operating system disallows illegal read/write requests, and if a process attempts an action beyond its granted authority, the action is denied and the offending process may be terminated.  Further actions, such as logging the violation attempt, may be taken as well.   Process confinement can be implemented in the operating system through process isolation and memory protection, through a confinement application or service such as Sandboxie (www.sandboxie.com), or through a virtualization or hypervisor solution such as VMware or Oracle VirtualBox (Abernathy & McMillan, 2016; Stewart et al., 2015).
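
As a simple illustration of confinement on a Unix-like host, the following sketch uses Python's standard resource and subprocess modules to cap the address space and CPU time of a child process before it runs; the limits and the command are assumptions, and this is only one narrow form of confinement compared with full sandboxes or hypervisors.

    # Sketch: confining a child process with resource limits (Unix-only; limits are assumed values).
    import resource
    import subprocess

    def apply_limits():
        # Runs in the child just before exec: cap memory at 256 MB and CPU time at 5 seconds.
        resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024, 256 * 1024 * 1024))
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))

    # The confined process is terminated by the kernel if it exceeds its granted limits.
    result = subprocess.run(["python3", "-c", "print('running confined')"],
                            preexec_fn=apply_limits, capture_output=True, text=True)
    print(result.stdout.strip())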

5.1.3.2 Physical vs. Logical Process Bounds Implementation

            Each process in the system has an authority level which defines what the process can and cannot do.  The two levels of authority are user and kernel.  The authority level tells the operating system how to set the bounds for a process.  The process bounds consist of limits set on the memory addresses and resources it can access; they state the area within which a process is confined or contained.  In most systems, these bounds segment logical areas of memory for each process to use, and the operating system is responsible for enforcing these logical bounds and disallowing access by other processes.  More secure systems may require physically bounded processes: physical bounds require each bounded process to run in an area of memory which is physically separated from other bounded processes, not just logically bounded in the same memory space.  Physically bounded memory can be costly, but it is also more secure than logical bounds (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.1.3.3 Isolation Implementation

            When a process is confined through enforced access bounds, it runs in isolation.  Process isolation ensures that any misbehavior will affect only the memory and resources associated with the isolated process.  Isolation is used to protect the operating environment, the kernel of the operating system, and other independent applications, and it is an essential component of a stable operating system.  Isolation prevents an application from accessing the memory or resources of another application; the operating system may still provide intermediary services, such as cut-and-paste and the sharing of resources such as the keyboard, network interface, and storage device access (Abernathy & McMillan, 2016; Stewart et al., 2015). 

5.1.4 Trusted System and Assurance

The organization must integrate and implement the proper security concepts, controls, and techniques before and during the design and architecture phase to produce a reliably secure product.  When security is integrated into the design, it must be engineered, implemented, tested, audited, evaluated, certified, and finally accredited.  A trusted system provides protection mechanisms and techniques which work together to process sensitive data for many types of users while maintaining a stable and secure computing environment.  Assurance is the degree of confidence that security needs are satisfied; it must be continually maintained, updated, and reverified, particularly if a trusted system experiences a known change or a significant amount of time has passed.  Change is often the antithesis of security because it can diminish the security of the system, so when changes occur, the system needs to be re-evaluated to verify that the level of security provided previously is still intact.  Assurance must be established on an individual-system basis because it varies from one system to another; however, grades or levels of assurance can be placed across various systems of the same type, such as systems which support the same services or which are deployed in the same geographic location.  Thus, trust is built into a system by implementing specific security features and measures, whereas assurance is an assessment of the reliability and usability of those security measures in a real-world situation (Abernathy & McMillan, 2016; Stewart et al., 2015). 

5.1.5 System Security Model

            Models provide a way to formalize security policies in information security.  These models may be abstract or intuitive, and they are intended to provide an explicit set of rules which a computer and system can follow to implement the fundamental security concepts, processes, and procedures which make up a security policy.  These models explain how a computer operating system should be designed and developed to support a specific security policy.  A security model gives software developers a way to map abstract policy statements into the algorithms and data structures required to build hardware and software.  Examples of these system security models include the State Machine Model, Information Flow Model, Take-Grant Model, Access Control Matrix, Bell-LaPadula Model, Biba Model, Clark-Wilson Model, and more.  The organization must evaluate each model and implement the appropriate one to ensure system protection (Abernathy & McMillan, 2016; Stewart et al., 2015).
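
As an illustration of how such a model reduces to mechanical rules, the following sketch checks the two core Bell-LaPadula confidentiality properties, no read up (simple security property) and no write down (star property), over a small assumed set of classification levels.

    # Sketch: Bell-LaPadula "no read up, no write down" checks over assumed classification levels.
    LEVELS = {"public": 0, "sensitive": 1, "secret": 2}   # illustrative label ordering

    def can_read(subject_clearance: str, object_label: str) -> bool:
        # Simple security property: a subject may not read objects above its clearance.
        return LEVELS[subject_clearance] >= LEVELS[object_label]

    def can_write(subject_clearance: str, object_label: str) -> bool:
        # Star (*) property: a subject may not write to objects below its clearance.
        return LEVELS[subject_clearance] <= LEVELS[object_label]

    print(can_read("sensitive", "secret"))    # False: read up is denied
    print(can_read("secret", "public"))       # True:  read down is allowed
    print(can_write("secret", "public"))      # False: write down is denied
    print(can_write("public", "secret"))      # True:  write up is allowed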

5.1.6 Control and Countermeasures Based on System Security Evaluation Models. 

The organization must evaluate its systems to ensure that security measures are implemented.  This evaluation involves two significant steps.  The first step is testing and technical evaluation of the system to ensure that its security capabilities meet the criteria laid out for its intended use.  The second step is a formal comparison of the design and security criteria with the system's actual capabilities and performance.   The organization must then decide whether to accept the systems, reject them, or make changes to the criteria and evaluate them again.  There are three major system evaluation models or classification criteria: TCSEC (Trusted Computer System Evaluation Criteria), ITSEC (Information Technology Security Evaluation Criteria), and the Common Criteria.

The TCSEC was created in the 1980s as the US DoD worked to develop and impose security standards for the systems it purchased and used.  The TCSEC established guidelines to be used when evaluating a stand-alone computer from a security perspective; these guidelines address basic security functionality and allow evaluators to measure and rate a system's functionality and trustworthiness.  ITSEC is a European model developed in 1990 and used through 1998.  In the TCSEC, functionality and security assurance are combined and not separated, as they are in security criteria developed later.  The TCSEC guidelines were designed to be used when evaluating vendor products, or by vendors to ensure that they build all necessary functionality and security assurance into new products.  Both TCSEC and ITSEC were replaced by the Common Criteria, adopted by the US, Canada, France, Germany, and the United Kingdom in 1998 (Abernathy & McMillan, 2016; Stewart et al., 2015). 

5.1.7 Security Capabilities of Information Systems

The security capabilities of information systems include memory protection, virtualization, the Trusted Platform Module, interfaces, and fault tolerance.  The organization must evaluate each aspect of the infrastructure to ensure that it supports security sufficiently.   Memory protection is a core security component which must be designed into and implemented by the operating system, and it must be enforced regardless of which programs are executing in the system.  Instability, violation of integrity, denial of service, and disclosure can result from memory that is not protected.

Virtualization technology is used to host one or more operating systems within the memory of a single host computer.  This technique allows virtually any operating system to operate on any hardware and allows multiple operating systems to run simultaneously on the same hardware.  Examples of virtualization technology include VMware, Microsoft Virtual PC, Microsoft Virtual Server, Hyper-V with Windows Server, Oracle VirtualBox, XenServer, and Parallels Desktop for Mac.  Virtualization offers several benefits, such as the ability to launch individual instances of servers or services as needed, real-time scalability, and the ability to run the exact operating system version needed for a specific application.  However, virtualization also brings security concerns, and the organization must implement the proper security measures when using it.

The Trusted Platform Module (TPM) is both a specification for a cryptoprocessor chip on a mainboard and the general name for implementations of that specification.  The TPM chip is used to store and process cryptographic keys for a hardware-supported and hardware-implemented hard drive encryption system; a hardware implementation is considered more secure than a software-only implementation.  When TPM-based whole-disk encryption is in use, the user must supply a password or a physical USB token to the computer to authenticate and allow the TPM chip to release the hard drive encryption key into memory, and the drive can be decrypted and accessed only with the original TPM.  With software-only hard drive encryption, by contrast, the hard drive can be moved to a different computer without any access or use limitation.  The hardware security module (HSM) is a cryptoprocessor used to manage and store digital encryption keys, accelerate cryptographic operations, support faster digital signatures, and improve authentication. 

Interfaces should be constrained to limit or restrict the actions of both authorized and unauthorized users; the use of a constrained interface is a practical implementation of the Clark-Wilson model of security.  Fault tolerance is the ability of a system to suffer a fault but continue to operate, and it is achieved by adding redundant components, such as additional disks within a redundant array of inexpensive or independent disks (RAID) array, or additional servers within a failover clustered configuration.  Fault tolerance is a critical element of security design for systems; it is also part of avoiding single points of failure and implementing redundancy, as discussed earlier in the Disaster Recovery Plan (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.1.8 Vulnerabilities Evaluation and Mitigation

            The organization must evaluate and mitigate the vulnerabilities of security architectures, designs, and solution elements.  Insecure systems are exposed to many common vulnerabilities and threats.  There are various types of systems whose vulnerabilities must be assessed, including client-based systems, server-based systems, database systems, distributed systems, and industrial control systems, as well as web-based and mobile systems (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.1.8.1 Client-Based Vulnerabilities Evaluation and Mitigation

            Client-based systems are the most widely used, as they are the systems users rely on most to access resources; they range from desktop systems to laptops to mobile devices of all types.  Traditional client-side vulnerabilities target web browsers, browser plug-ins, and email clients, and attacks can also be carried out through the applications and operating systems which are deployed.  Client systems are exposed to hostile servers, and client users are often not security savvy and inadvertently cause security issues on their systems.  Security for this type of system should include policies and controls such as deploying anti-malware and anti-virus software on every client system; deploying only licensed, supported operating systems; deploying a firewall and a host-based intrusion detection system on the client system; and using drive encryption to protect the data on the hard drives.  All updates and patches, to both operating systems and applications, should be tested before deployment at the client level.  User accounts must be issued with the minimum permissions required for the job, and applets should only be downloaded from trusted vendors, as applets and ActiveX controls can be used for malicious attacks.  Moreover, the client system contains several types of local caches.  The DNS cache holds the results of DNS queries and is the cache most often attacked: attackers may attempt to poison the DNS cache with false IP addresses for valid domains by sending a malicious DNS reply to an affected system.  The organization should ensure that the operating system and all applications are kept up to date, and client users should be trained not to click on unverified links (Abernathy & McMillan, 2016; Stewart et al., 2015). 

5.1.8.2 Server-Based Vulnerabilities Evaluation and Mitigation

Server-based systems are exposed to various attacks, and their vulnerabilities must be assessed as well.  These attacks often focus on the operation of the server operating system rather than on the web applications running on top of it.  Software attacks frequently target the intended data flow of a vulnerable program; for instance, attackers exploit buffer overflows and format string vulnerabilities to write data to unintended locations.  The goal is either to read data from prohibited locations or to write data to a memory location in order to execute commands, crash the system, or make malicious changes to it.  The proper mitigation for these types of attacks is proper input validation and data flow controls built into the system (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.1.8.3 Database Security Vulnerabilities Evaluation and Mitigation

The database is a target for most attackers because it holds and stores sensitive data.  Database vulnerabilities relate to aggregation, inference, data mining, data warehousing, data analytics, and large-scale parallel data systems. 

Aggregation refers to functions which combine records from one or more tables to produce potentially useful information, and it is a vulnerable area.  Aggregation attacks collect numerous low-security or low-value items and combine them to create something of a higher security level or value.  The organization must strictly control access to aggregate functions and adequately assess and evaluate the information they could reveal to unauthorized users.
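
A minimal sketch of one mitigation follows: refusing aggregate results over groups smaller than an assumed minimum size, so that an aggregate cannot single out one individual's value. It uses Python's built-in sqlite3 module, and the sample data is invented.

    # Sketch: refusing aggregates over very small groups (assumed minimum group size of 5),
    # so an AVG over a tiny department cannot reveal an individual's salary. Sample data is invented.
    import sqlite3

    MIN_GROUP_SIZE = 5

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE staff (name TEXT, department TEXT, salary INTEGER)")
    conn.executemany("INSERT INTO staff VALUES (?, ?, ?)", [
        ("A", "Radiology", 90000), ("B", "Radiology", 95000), ("C", "Radiology", 88000),
        ("D", "Radiology", 91000), ("E", "Radiology", 99000), ("F", "Oncology", 120000),
    ])

    def average_salary(department: str):
        count, avg = conn.execute(
            "SELECT COUNT(*), AVG(salary) FROM staff WHERE department = ?", (department,)
        ).fetchone()
        if count < MIN_GROUP_SIZE:
            return None          # deny: the aggregate would effectively expose individual records
        return avg

    print(average_salary("Radiology"))   # 92600.0 (group is large enough)
    print(average_salary("Oncology"))    # None (aggregate denied: only one record)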

The inference attack is another aspect of database vulnerability, and the security issues it poses are similar to those posed by the threat of data aggregation.  Inference attacks involve combining several pieces of non-sensitive information to gain access to information which should be classified at a higher level; inference makes use of the human capacity for deductive analysis rather than the raw mathematical capability of modern database platforms.  The best defense against inference attacks is to maintain constant vigilance over the permissions granted to individual users.  Database partitioning can also help thwart inference attacks.

Data mining and data warehousing also represent database vulnerabilities.  A data warehouse is used to store a significant amount of information from various databases for use with specialized analysis techniques.  The data warehouse often contains detailed historical information not typically stored in the production databases because of storage limitations or data security concerns.  The data dictionary is commonly used for storing critical information about data, including usage, type, sources, relationships, and format; the database management system reads the data dictionary to determine the access rights of users attempting to access data.  Data mining techniques extract data from the data warehouse and look for potentially correlated information.  Data warehousing and data mining are significant to the organization for two reasons.  First, the data warehouse contains a significant amount of potentially sensitive information vulnerable to aggregation and inference attacks, and the organization must ensure that adequate access controls and other security measures are in place to safeguard this data.  Second, data mining can also serve as a security tool when it is used to develop baselines for statistical anomaly-based intrusion detection systems.

Data analytics is the science of examining raw data with the focus on extracting useful information from the dataset.  The results of data analytics could focus on essential outliers or exceptions to normal or standard items, a summary of all data items, or some focused extraction of interesting information.  Big Data analytics is confronted with many challenges.  With regard to security, the organization must implement the required security measures to ensure the protection of the systems involved in data analytics.

Large-scale parallel data systems are vulnerable as well.  The complexity of involving thousands of processing units often results in an unexpected increase in problems and risks alongside the enormous level of computational capability.  Large-scale parallel data management involves technologies such as cloud computing, grid computing, or peer-to-peer computing solutions.  The organization must implement the appropriate security measures to ensure the protection of the data based on the technology adopted (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.1.8.4 Industrial Control Systems (ICS) Vulnerabilities Evaluation and Mitigation

            The ICS is a form of computer-management device which controls industrial processes and machines.  The types of ICS include the distributed control system (DCS), the programmable logic controller (PLC), and supervisory control and data acquisition (SCADA).  The static design of SCADA, PLC, and DCS units and their minimal human interfaces were assumed to make these systems resistant to compromise or modification.  Thus, little security was built into these industrial control devices, especially in the past.  However, there have been well-known compromises of ICS in recent years.  For instance, Stuxnet delivered the first-ever rootkit to a SCADA system located in a nuclear facility.  Many SCADA vendors have started implementing security improvements into their solutions to prevent, or at least reduce, future compromises.  Thus, organizations are encouraged to implement security measures on these ICS systems (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.1.8.5 Web-Based Systems Vulnerabilities Evaluation and Mitigation

There is a wide range of application and system vulnerabilities and threats in web-based systems, and the range is constantly expanding.  Vulnerabilities include concerns related to XML and SAML plus many other concerns raised by the Open Web Application Security Project (OWASP).  XML exploitation is a form of programming attack which is used either to falsify information being sent to a user or to cause the user’s system to give up information without authorization.  The Security Assertion Markup Language (SAML) is one area of growing concern with regard to XML attacks, and SAML abuses are often focused on web-based authentication.  SAML is an XML-based convention for the organization and exchange of authentication and authorization details between security domains, often over web protocols, and it is often used to provide a web-based single sign-on solution.  If attackers can falsify SAML communication or steal the access token, they can bypass authentication and gain unauthorized access to an internal site.  OWASP is a non-profit security project focusing on improving security for online or web-based applications (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.1.8.6 Mobile Systems Vulnerabilities Evaluation and Mitigation

Mobile systems such as smartphones present an ever-increasing security risk as they become more and more capable of interacting with the Internet and corporate networks.  Malicious attackers can bring in malicious code from outside on various storage devices, including mobile phones, audio players, digital cameras, memory cards, optical discs, and USB drives.  The same storage devices can be used to leak or steal internal confidential and private data and disclose it to the public, as was the case with WikiLeaks.  Thus, a wide range of security features is available on mobile devices, and users must ensure they are enabled and operating as expected.

Moreover, there are additional security measures which should be implemented when using mobile devices.  These security measures include full device encryption, remote wiping, lockout, screen locks, GPS, and application control.  Mobile devices should have application control to limit which applications can be installed on the device.  Storage segmentation should also be used to separate the organization’s data and apps from the user’s data and apps.  Additional device security measures include asset tracking, inventory control, mobile device management, key management, credential management, application whitelisting, and transitive trust and authentication (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.1.8.7 Embedded Devices and Cyber-Physical Systems Vulnerabilities Evaluation and Mitigation

An embedded system is a computer implemented as part of a larger system.  The embedded system is designed around a limited set of specific functions and may consist of the same components found in a typical computer system or a microcontroller.  Examples of embedded systems include network-attached printers, smart TVs, HVAC controls, smart appliances, smart thermostats, Ford SYNC, and medical devices.  The organization must implement the appropriate security measures, as these devices are exposed to vulnerabilities and risks as well.  Network segmentation is one approach to control traffic among networked devices.  Security layers and application firewalls are additional security measures which the organization must consider for embedded systems, especially for medical devices, since this plan is written for the healthcare organization.  Manual updates and firmware version control must be implemented as well as part of the security measures (Abernathy & McMillan, 2016; Stewart et al., 2015).

5.2 System and Application Security Best Practice: Six Pillars

            System security involves all aspects of access to information assets.  Security is an essential component of a device operating at its optimum, from authentication to software updates, anti-virus protection, and modifications.  The organization must implement the six pillars of best practice for system security.  The first pillar involves the use of change management procedures.  When changes are to be made to a system, it is a best practice to utilize change management methodologies to help eliminate unexpected issues.  The organization must utilize a test system with an identical setup to the production system to test changes before implementation into the production system.  The changes can involve system updates such as patches or any system code or feature change.  The organization must apply the updates to the test environment first, then test the system usage, and if all goes well, the application of the updates to the production system can take place.  This process allows the system administrator to estimate the downtime necessary for the system update, determine the steps required for the update to be applied, and identify any issues that might need to be mitigated before implementation into production.

            The second pillar is the utilization of system logs for audit trails.  The organization must implement this best practice to know what changes are made to the information assets, who made these changes, and when these changes were made, in order to maintain the CIA triad security requirements of the information assets.  An audit trail mechanism must be used for every system to monitor the activities of the system.  The log files must be reviewed on a regular basis to catch any abnormal behavior in the systems soon enough to mitigate the negative impact.
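
As a simple, non-authoritative sketch of routine log review, the following Python code counts failed-login events per account and flags accounts that exceed a review threshold; the event string, log layout, and file path are assumptions for illustration only.

from collections import Counter

def flag_repeated_failures(log_lines, threshold=5):
    # Count failed-login events per account and surface accounts that
    # exceed the threshold as candidates for further investigation.
    failures = Counter()
    for line in log_lines:
        if "LOGIN_FAILED" in line:
            account = line.rstrip().split()[-1]   # assumed layout: "... LOGIN_FAILED <account>"
            failures[account] += 1
    return {account: count for account, count in failures.items() if count >= threshold}

# Example usage against an assumed audit log file:
# suspicious = flag_repeated_failures(open("/var/log/app_audit.log"))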

            The third pillar involves data classification.  The organization must identify the types of data collected, stored, and transmitted to assist in identifying the access control that is required for each data type to secure the information assets of the organization.  Some data must be classified as confidential due to federal regulation, while other data can be classified as private or public information.  The security measures will be more robust for confidential and private data than for public data.  Thus, it is essential for the organization to classify the data.

            The fourth pillar involves the vendors, who must follow the organization’s policies and standards.  The vendor should comply with all regulatory rules and laws, such as HIPAA for the healthcare organization.  The vendor should commit to the same level of security for the information assets and protect the privacy of the patients.  The agreement with the vendor should be managed to include these standards and the compliance requirements.

            The fifth pillar involves the applications, which must operate in a secure mode using either Secure Sockets Layer (SSL) or other encryption to mitigate the harm if data is leaked.  The organization must configure every application which is accessed through the Internet with security measures, especially if the application is XML-based.  Certificates and private keys can be implemented as part of the SSL configuration.  Access control techniques must also be implemented to provide the minimum and least privileges for each user based on the role of the user.
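
As an illustrative sketch of opening an application channel in a secure mode, the following Python code uses the standard library ssl module to wrap a TCP connection in TLS with certificate and hostname verification; the host name and helper function are placeholders and not part of any specific application described above.

import socket
import ssl

context = ssl.create_default_context()   # verifies the server certificate chain and hostname

def open_secure_channel(host: str, port: int = 443):
    # Wrap an ordinary TCP socket in TLS before any application data is sent.
    sock = socket.create_connection((host, port))
    tls_sock = context.wrap_socket(sock, server_hostname=host)
    print("Negotiated protocol:", tls_sock.version())
    return tls_sock

# Example usage (hypothetical host):
# channel = open_secure_channel("app.example.org")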

            The last pillar involves the patches that are required on top of the applications that are running on the system.  Most patches fix security vulnerability issues.  Thus, the organization must apply every patch promptly in the test environment first to ensure the full functionality of the application and that the patch fix is successfully implemented.  After the patch is tested in the test environment, the patch can be deployed on the production system.  The full functionality of the application should be verified on the production system, and the patch fix should also be verified on the production system before it is rolled out for use.

References

Aagedal, J. O., Den Braber, F., Dimitrakos, T., Gran, B. A., Raptis, D., & Stolen, K. (2002). Model-based risk assessment to improve enterprise security. Paper presented at the Enterprise Distributed Object Computing Conference, 2002. EDOC’02. Proceedings. Sixth International.

Abdul, A. M., Jena, S., Prasad, S. D., & Balraju, M. (2014). Trusted Environment In Virtual Cloud. International Journal of Advanced Research in Computer Science, 5(4).

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Adebisi, A. A., Adekanmi, A. A., & Oluwatobi, A. E. (2014). A Study of Cloud Computing at the University Enterprise. International Journal of Advanced Computer Research, 4(2), 450-458.

Armstrong, S. (2016). DevOps for Networking: Packt Publishing Ltd.

Avram, M. G. (2014). Advantages and Challenges of Adopting Cloud Computing from an Enterprise Perspective. Procedia Technology, 12, 529-534. doi:10.1016/j.protcy.2013.12.525

AWS. (2018). Architecting for HIPAA in the Cloud. Retrieved from https://aws.amazon.com/health/healthcare-compliance/hipaa/.

Aziz, A. A., & Osman, S. (2016). A Review of the Types of Access Control Models for Cloud Computing Environment. Paper presented at the Proceedings of the Informatics Conference.

Botta, A., de Donato, W., Persico, V., & Pescapé, A. (2016). Integration of Cloud Computing and Internet Of Things: a Survey. Future Generation computer systems, 56, 684-700.

Cash, B., & Branzell, R. (2017). How healthcare organizations are boosting security for IT systems. Health Data Management (Online).

Chen, Y., Paxson, V., & Katz, R. H. (2010). What’s New About Cloud Computing Security?

Dillon, T., Wu, C., & Chang, E. (2010). Cloud computing: issues and challenges. Paper presented at the 2010 24th IEEE international conference on advanced information networking and applications.

HITRUST. (2018). HITrust for Providers. Retrieved from https://hitrustalliance.net/.

IBM. (2005). IBM WebSphere Business Integration for Healthcare Collaborative Network V1.0 provides a platform for a clinical information exchange network to help maximize the quality of care. Retrieved from https://www-01.ibm.com/common/ssi/rep_ca/7/897/ENUS205-007/ENUS205-007.PDF.

Kazim, M., & Zhu, S. Y. (2015). A Survey on Top Security Threats in Cloud Computing. Int J Adv Comput Sci Appl (IJACSA), 6(3), 109-113.

Kolkowska, E., Hedström, K., & Karlsson, F. (2009). Information security goals in a Swedish hospital. Paper presented at the 8th Annual Security Conference, 15-16 April 2009, Las Vegas, USA.

Nazir, M., Bhardwaj, N., Chawda, R. K., & Mishra, R. G. (2015). Cloud computing: Current research challenges. Book chapter of cloud computing: Reviews, surveys, tools, techniques and applications-an open-access eBook published by HCTL open.

Padhy, R. P., Patra, M. R., & Satapathy, S. C. (2011). Cloud Computing: Security Issues and Research Challenges. International Journal of Computer Science and Information Technology & Security (IJCSITS), 1(2), 136-146.

Ramgovind, S., Eloff, M. M., & Smith, E. (2010). The management of security in cloud computing. Paper presented at the 2010 Information Security for South Africa.

Regola, N., & Chawla, N. (2013). Storing and using health data in a virtual private cloud. Journal of medical Internet research, 15(3), e63.

Sans. (2013). Layered Security:  Why It Works. Retrieved from https://www.sans.org/reading-room/whitepapers/analyst/layered-security-works-34805.

Singh, S., Jeong, Y.-S., & Park, J. H. (2016). A Survey on Cloud Computing Security: Issues, Threats, and Solutions. Journal of Network and Computer Applications, 75, 200-222.

Snell, E. (2016). The Role of Risk Assessments in Healthcare. Retrieved from https://healthitsecurity.com/features/the-role-of-risk-assessments-in-healthcare.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.): Wiley.

Sultan, N. (2010). Cloud computing for education: A new dawn? International Journal of Information Management, 30(2), 109-116.

Venkatesan, T. (2012). A Literature Survey on Cloud Computing. i-Manager’s Journal on Information Technology, 1(1), 44-49.

Zhang, Q., Cheng, L., & Boutaba, R. (2010). Cloud Computing: State-of-the-Art and Research Challenges. Journal of internet services and applications, 1(1), 7-18.

Methods to Improve the Quality of Software Development

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to discuss and analyze methods to improve the quality of software development in order to reduce the number of errors and to include security in the development cycle.

Security Implication of Compiled Code vs. Interpreted Code

Software is a core building block of the Information Technology infrastructure, and applications are the outcome of software development.  The applications provide a way to achieve tasks which are related to the input, processing, and output of data.  Moreover, the applications are used to store, retrieve, process, transmit, or destroy data.  Thus, the applications are critical with respect to security (Srinivasan, 2016).

Security should be considered at every stage of system development, including the software development process.  Software developers should exert effort to build security into every application they produce, with higher levels of security for critical applications and those which process sensitive information.  Security should be considered at an early stage of software development because it is much easier to build security into a system than to add it to an existing system (Abernathy & McMillan, 2016; Srinivasan, 2016; Stewart, Chapple, & Gibson, 2015).

Software development languages can use either compilation or interpretation to execute programs.  Languages such as C, Java, and Fortran are compiled languages which use a compiler to convert the higher-level language into an executable file designed for use on a specific operating system.  The executable file is then distributed to end users, who cannot directly view or modify the higher-level instructions it contains.  Other languages such as JavaScript and VBScript are interpreted languages.  The developers distribute the source code when using these interpreted languages, which contains instructions in the higher-level language.  The end users use an interpreter to execute that source code on their systems and can view the original instructions written by the developers (Abernathy & McMillan, 2016; Srinivasan, 2016; Stewart et al., 2015).

Each approach has security advantages and disadvantages.  Compiled code is less prone to manipulation by a third party.  However, it is also easier for malicious or unskilled developers to embed backdoors and other security flaws in the code and escape detection, because the end users cannot view the original instructions.  Interpreted code is less prone to the insertion of malicious code by the original developer because the end users may view the code and check it for accuracy.  However, interpreted software can be modified between the original developer and the end user to embed malicious code (Abernathy & McMillan, 2016; Srinivasan, 2016; Stewart et al., 2015).
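
One common way to reduce the risk of tampering with a distributed executable is to verify a cryptographic digest of the file against a value the vendor publishes over a trusted channel.  The following Python sketch, offered only as an illustration, performs this check with SHA-256 from the standard library; the file path and published hash value are placeholders.

import hashlib

def verify_download(path: str, published_sha256: str) -> bool:
    # Recompute the digest of the distributed executable and compare it
    # with the value the vendor publishes through a trusted channel.
    with open(path, "rb") as handle:
        actual = hashlib.sha256(handle.read()).hexdigest()
    return actual == published_sha256

# Example usage (hypothetical values):
# ok = verify_download("installer.exe", "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")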

Security Controls in Software Development

The process of software development involves four significant phases: design, development, testing, and integration.  The development, test, and operational facilities must be separated.  Thus, developer and quality assurance engineer access to the operating systems should be controlled and limited to prevent inappropriate developer access to the production system.  The separation of development, test, and operational facilities should be implemented to prevent unintended operational system changes.

Changes to the software or application should be implemented formally using a change control process and procedures to ensure that changes in the development processes and implementation are done in a controlled manner.  The change control process can help prevent the corruption of data or programming.  When there is a change in the system or application, a risk assessment of the impact of the proposed change must be performed, and the appropriate security controls based on this risk assessment should be implemented.  A technical review of the security of the application should be performed whenever the operating system changes.  Integrity procedures should be implemented to ensure that any changes, whether at the application level or the operating system level, are reflected in the risk assessment and in the application of the proper security measures.  Moreover, the change control process and procedures should consider the business continuity security requirement and include the required tests for the business continuity plan (BCP) (Srinivasan, 2016).

In the case of vendor-supplied software packages, any changes to the software by internal developers should be avoided.  If changes are required, they can be implemented at the vendor side or by the internal developers after obtaining consent from the vendor.  This process can ensure the validity of the warranty.  Moreover, any patches from the vendor should be tested first in a test environment which is separated from the development and production environments.  The test environment should be capable of rolling back the patch in case of failure or any security hole issues.  Moreover, covert channels should be avoided, as developers with malicious intent can provide a path for information leakage or for circumventing security controls.  Thus, covert channel analysis is required to ensure data confidentiality (Srinivasan, 2016).

Software Development Security Best Practices

In an effort to support the goal of ensuring that software is soundly developed with regard to security and functionality, various organizations have developed sets of software development best practices.  The Web Application Security Consortium (WASC) is an organization which provides best practices for web-based applications along with a variety of resources, tools, and information for developing web applications.  The continuous monitoring of attacks is one of the functions undertaken by WASC, leading to the development of a list of the top attack methods in use.  This list can assist in ensuring that organizations are not only aware of the latest attack methods and how widespread these attacks are but also can help them in making the proper changes to their web applications to mitigate these attack types (Abernathy & McMillan, 2016).

The Open Web Application Security Project (OWASP) is another group which monitors attacks, specifically web attacks.  It maintains a list of the top 10 attacks on an ongoing basis.  This group meets regularly worldwide, providing resources and tools including test procedures, code review steps, and development guidelines.  Build Security In (BSI) is an initiative by the Department of Homeland Security (DHS) which promotes a process-agnostic approach to making security recommendations with respect to architectures, testing methods, code reviews, and management processes.  The DHS Software Assurance program addresses methods to reduce vulnerabilities, mitigate exploitation, and improve the routine development and delivery of software solutions.  Moreover, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created the 27034 standard, which is part of a larger body of standards called the ISO/IEC 27000 series.  These standards guide organizations in integrating security into the development and maintenance of software applications (Abernathy & McMillan, 2016).

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Srinivasan, M. (2016). CISSP in 21 Days: Packt Publishing Ltd.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.): Wiley.

Encryption: Public Key Encryption and Public Key Infrastructure (PKI)

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to elaborate on the previous discussion of encryption and to discuss the functionality provided by public key encryption and the public key infrastructure (PKI).

Public Key Infrastructure (PKI)

The PKI is a framework which enables the integration of various services which are related to cryptography. The purpose of the PKI is to provide confidentiality, integrity, access control, authentication and most importantly non-repudiation.  The encryption/decryption, digital signature, and key exchange are the three primary functions of the PKI (Srinivasan, 2016).

There are three major functional components of the PKI.  The first component is the Certificate Authority (CA), an entity which issues certificates.  One or more in-house servers, or a trusted third party such as VeriSign or GTE, can provide the CA function.  The second component is the repository for keys, certificates, and Certificate Revocation Lists (CRLs), which is usually based on a Lightweight Directory Access Protocol (LDAP)-enabled directory service.  The third component is the management function, which is typically implemented via a management console (RSA, 1999).  Moreover, if the PKI provides automated key recovery, there may also be a key recovery service.  Key recovery is an advanced function required to recover data or messages when a key is lost.  A PKI may also include a Registration Authority (RA), which is an entity dedicated to user registration and accepting requests for certificates.  User registration is the process of collecting information about the user and verifying the user identity, which is then used to register the user according to a policy.  This process is different from the process of creating, signing, and issuing a certificate.  For instance, the Human Resources department may manage the RA function, while the IT department manages the CA.  Moreover, a separate RA makes it harder for any single department to subvert the security system.  Organizations can choose to have registration handled by a separate RA or included as a function of the CA.  Figure 1 illustrates the main server components of a PKI: the certificate server, the certificate repository, and the key recovery server, accompanied by a management console, as well as the PKI-enabled application building blocks (RSA, 1999).

Figure 1.  The Main Server Components of PKI (RSA, 1999).
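
As a concrete, non-authoritative illustration of the CA function described above, the following Python sketch issues a self-signed CA certificate using the third-party pyca/cryptography package (assumed to be installed); the common name and validity period are arbitrary placeholder values, not prescriptions from the cited sources.

from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the CA key pair and issue a self-signed certificate.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Internal CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                       # self-signed: subject and issuer are the same
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .sign(ca_key, hashes.SHA256())
)

In a full PKI, this CA key would then sign certificates requested through the RA, and revoked certificates would be published in the CRL repository.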

PKI Standards

The PKI standards permit multiple PKIs to interoperate and multiple applications to interface with a single, consolidated PKI.  PKI standards are required for enrollment procedures, certificate formats, CRL formats, certificate enrollment message formats, digital signature formats, and challenge and response protocols.  The primary focus of interoperable PKI standards is the PKI working group of the Internet Engineering Task Force (IETF), also known as the PKIX group (PKI for X.509 certificates) (RSA, 1999).  The PKIX specification is based on two other standards: X.509 from the International Telecommunication Union (ITU) and the Public Key Cryptography Standards (PKCS) from RSA Data Security (RSA, 1999).

Standards That Rely on PKI

There are standards which rely on PKI; most major security standards are designed to work with it.  Secure Sockets Layer (SSL) and Transport Layer Security (TLS), which are used to secure access to web servers and web-based applications, rely on PKI.  Secure/Multipurpose Internet Mail Extensions (S/MIME), which is used to secure messaging, relies on PKI.  The Secure Electronic Transaction (SET) standard for securing bank card payments and IPsec for securing VPN connections also require PKI (Abernathy & McMillan, 2016; RSA, 1999; Stewart, Chapple, & Gibson, 2015).

The PKI Functions

The most common PKI functions are issuing certificates, revoking certificates, creating and publishing CRLs, storing and retrieving certificates and CRLs, and key lifecycle management.  The enhanced and emerging functions of PKI include the time-stamping and policy-based certificate validation.  The summary of the PKI functions is illustrated in Table 1, adapted from (RSA, 1999). 

Table 1.  PKI Functions (RSA, 1999).

Public Key Encryption

In 1976, the idea of public key cryptography was first presented at Stanford University by Martin Hellman, Ralph Merkle, and Whitfield Diffie (Janczewski, 2007; Maiwald, 2001; Srinivasan, 2016).  There are three requirements for a public key encryption method.  When the decryption process is applied to the encrypted message, the result must be the same as the original message before it was encrypted.  It must be exceedingly difficult to deduce the decryption (private) key from the encryption (public) key.  The encryption must not be breakable by a plaintext attack: since the encryption and decryption algorithms and the encryption key are public, people attempting to break the encryption can experiment with the algorithms to find any flaws in the system (Janczewski, 2007).

One popular method of public key encryption was discovered by a group at MIT in 1978 and was named RSA after the initials of the three members of the group: Ron Rivest, Adi Shamir, and Leonard Adleman (Janczewski, 2007).  The RSA algorithm was patented by MIT, and the patent was then handed over to a company in California called Public Key Partners (PKP), which held an exclusive commercial license to sell and sublicense the RSA public key cryptosystem.  PKP also held other patents covering public key cryptography algorithms.  RSA encryption can in principle be broken by factoring the numbers involved, but this threat can be discounted because of the massive amount of time required to factor large numbers.  However, RSA is too slow for encrypting large amounts of data.  Thus, it is often used for encrypting the key used in a private key method such as the International Data Encryption Algorithm (IDEA) (Janczewski, 2007).
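
To make the underlying arithmetic visible, the following Python sketch works through textbook RSA with deliberately tiny, insecure parameters; the primes, exponent, and message value are illustrative only and bear no relation to real key sizes.

# Small, insecure parameters chosen only to make the arithmetic visible.
p, q = 61, 53
n = p * q                          # modulus, part of both keys (3233)
phi = (p - 1) * (q - 1)            # Euler's totient of n (3120)
e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent, the modular inverse of e (Python 3.8+)

message = 65                       # a message encoded as an integer smaller than n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message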

The main difference between symmetric key encryption and public key encryption is the number of keys used in the operation.  Symmetric key encryption utilizes a single key both to encrypt and to decrypt information, while public key encryption utilizes two keys: one key is used to encrypt, and a different key is then used to decrypt the information (Maiwald, 2001).  Figure 2 illustrates the primary public key, or asymmetric, encryption operation.  Both the sender and receiver must have a key.  The keys are related to each other and called a key pair, but they are different.  The relationship between the keys is that information encrypted by one key can be decrypted by the other key.  One key is called private, while the other key is called public.  The private key is kept secret by the owner of the key pair.  The public key is published with information about who the owner is.  It can be published because there is no way to derive the private key from it.

Figure 2.  Public Key Encryption Operation (Maiwald, 2001).

If confidentiality is desired, the encryption is performed with the public key, so that only the owner of the key pair can decrypt the information, since the private key is kept secret by the owner.  If authentication is desired, the owner of the key pair encrypts the information with the private key.  The integrity of the information can be checked if the original information was encrypted with the private key of the owner (Maiwald, 2001; Stewart et al., 2015).

Asymmetric key cryptography, or public key encryption, provides an extremely flexible infrastructure, facilitating simple, secure communication between parties that do not necessarily know each other before initiating the communication.  Public key encryption also provides the framework for the digital signing of messages to ensure non-repudiation and message integrity, and it provides a scalable cryptographic architecture for use by large numbers of users.  The significant strength of public key encryption is the ability to facilitate communication between parties previously unknown to each other.  This is made possible by the PKI hierarchy of trust relationships.  These trusts permit combining asymmetric cryptography with symmetric cryptography along with hashing and digital certificates, providing hybrid cryptography (Abernathy & McMillan, 2016; Maiwald, 2001; Stewart et al., 2015).

The limitation of public key encryption is that it tends to be computationally intensive and is thus much slower than symmetric key systems.  However, if public key encryption is teamed with symmetric key encryption, the result is a much stronger system.  The public key system is used to exchange keys and authenticate both ends of the connection.  The symmetric key system is then used to encrypt the rest of the traffic, as it is faster than the public key system (Abernathy & McMillan, 2016; Maiwald, 2001; Stewart et al., 2015).
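
The following Python sketch, offered as an illustration under the assumption that the third-party pyca/cryptography package is available, mirrors this hybrid pattern: a random AES session key encrypts the bulk data, and the recipient’s RSA public key encrypts only the small session key.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Fast symmetric encryption for the bulk data.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
bulk_ciphertext = AESGCM(session_key).encrypt(nonce, b"large volume of traffic ...", None)

# Slow asymmetric encryption only for the small session key.
wrapped_key = recipient_key.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

The recipient would unwrap the session key with the private key and then decrypt the bulk ciphertext symmetrically, which is why the per-message cost of the asymmetric operation stays small.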

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Janczewski, L. (2007). Cyber warfare and cyber terrorism: IGI Global.

Maiwald, E. (2001). Network security: a beginner’s guide: McGraw-Hill Professional.

RSA. (1999). Understanding Public Key Infrastructure (PKI). Retrieved from ftp://ftp.rsa.com/pub/pdfs/understanding_pki.pdf, White Paper

Srinivasan, M. (2016). CISSP in 21 Days: Packt Publishing Ltd.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.): Wiley.

Encryption

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to discuss and analyze encryption, the encryption types, the impact of the encryption, and the use of encryption when streaming data in the network.

Cryptography and Encryption

Cryptography comprises a set of algorithms and system-design principles, some well developed and some just emerging, for protecting data in the era of Big Data and cloud computing.  Cryptography is a field of knowledge whose products are encryption technology.  Encryption technology, with well-designed protocols, is an inhibitor to compromising privacy.  However, it is not the “silver bullet” that cuts through the complexity of the existing issues of privacy and security (PCAST, May 2014).

Encryption technology involves the use of a key, and only with the key can encrypted data be used.  At every stage of its life, the key is potentially open to misuse that can ultimately compromise the data which the key was intended to protect.  If users with access to private keys can be forced into sharing them, no system based on encryption is secure.

Until the 1970s, keys were distributed physically, on paper or computer media, and protected by registered mail or armed guards.  However, the invention of public-key cryptography changed everything.  Public-key cryptography allows individuals to broadcast their personal key publicly.  However, this public key is only an encryption key, useful for turning plaintext into cryptotext which is meaningless to others.  The private key is used to transform cryptotext back into plaintext and is kept secret by the recipient (PCAST, May 2014).

The message typically is in the form of plaintext, represented by the letter P when encryption functions are described.  The sender of the message uses a cryptographic algorithm to encrypt the plaintext message and generate a ciphertext, or cryptotext, represented by the letter C.  The message is transmitted, and the recipient uses a predetermined algorithm to decrypt the ciphertext message and retrieve the plaintext version.  All cryptographic algorithms rely on keys to maintain their security.  A key is just a number, usually a large binary number.  Every algorithm has a specific “keyspace,” which is the range of values that are valid for use as a key for that algorithm and is defined by its bit size.  Thus, a 128-bit key can have a value from 0 to 2^128 − 1.  It is essential and critical to protect the secret keys because the security of the cryptography relies on the ability to keep the keys used private (Abernathy & McMillan, 2016; Maiwald, 2001; Stewart, Chapple, & Gibson, 2015).  Figure 1 illustrates the basic encryption operation (Maiwald, 2001).

Figure 1.  Basic Encryption Operation (Maiwald, 2001).

Modern cryptosystems utilize computationally sophisticated algorithms and long cryptographic keys to meet the four cryptographic goals mentioned below.  There are three types of algorithms in common use: symmetric encryption algorithms, asymmetric encryption algorithms, and hashing algorithms (Abernathy & McMillan, 2016; Connolly & Begg, 2015; Stewart et al., 2015; Woo & Lam, 1992).

Four Goals for Encryption

Cryptography provides an additional level of security to data during processing, storage, and communication.  A series of increasingly sophisticated algorithms has been designed to ensure confidentiality, integrity, authentication, and non-repudiation.  At the same time, hackers have devoted time to undermining this additional security layer of encryption.  There are four fundamental goals for organizations using cryptographic systems: confidentiality, integrity, authentication, and non-repudiation.  However, not all cryptosystems are intended to achieve all four goals (Abernathy & McMillan, 2016; Stewart et al., 2015).

Confidentiality ensures that the data remains private while at rest, such as when stored on a disk, or in transit, such as during transmission between two or more parties.  Confidentiality is the most common reason for using cryptosystems.  There are two types of cryptosystems which enforce confidentiality: symmetric key and asymmetric key algorithms.  Symmetric key cryptosystems use a shared secret key available to all users of the cryptosystem.  Asymmetric key algorithms use combinations of public and private keys for each user of the system (Abernathy & McMillan, 2016; Stewart et al., 2015).

When implementing a cryptographic system to provide confidentiality, the data types must be considered, that is, whether the data is at rest or in motion.  Data at rest is data stored in a storage area waiting to be accessed; for instance, data at rest includes data stored on hard drives, backup tapes, cloud storage services, USB devices, and other storage media.  Data in motion, or data “on the wire,” is data being transmitted across the network between systems; data in motion might be traveling on a corporate network, a wireless network, or the public Internet.  Both types of data pose different confidentiality risks against which a cryptographic system can protect.  For instance, data in motion may be susceptible to eavesdropping attacks, while data at rest is more susceptible to the theft of physical devices (Abernathy & McMillan, 2016; Stewart et al., 2015).

Integrity ensures that the data is not modified without authorization.  If integrity techniques are in place, the recipient of a message can verify that the message received is identical to the message sent.  Integrity provides protection against all forms of modification, such as intentional modification by a third party attempting to insert false information and unintentional modification by faults in the transmission process.  Message integrity is enforced through the use of encrypted message digests, known as digital signatures, created upon transmission of a message.  Integrity can be enforced by both public and secret key cryptosystems (Abernathy & McMillan, 2016; Stewart et al., 2015).
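
As one illustration of a signed message digest, the Python sketch below signs and verifies a message with an elliptic-curve key pair using the third-party pyca/cryptography package; the library choice, curve, and message are assumptions for illustration, not requirements stated by the cited sources.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

signer_key = ec.generate_private_key(ec.SECP256R1())
message = b"funds transfer instruction"

# The sender signs a digest of the message with the private key.
signature = signer_key.sign(message, ec.ECDSA(hashes.SHA256()))

# The recipient verifies with the public key; verification raises
# InvalidSignature if even a single bit of the message has changed.
signer_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))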

Authentication is a primary function of cryptosystems and verifies the claimed identity of system users; a secret code can be used for authentication.  Non-repudiation assures the recipient that the message originated from the sender and not from someone masquerading as the sender.  Moreover, non-repudiation prevents the sender from denying having sent the message or repudiating the message.  Secret key, or symmetric key, cryptosystems do not provide this guarantee of non-repudiation; non-repudiation is offered only by public key, or asymmetric, cryptosystems (Abernathy & McMillan, 2016; Stewart et al., 2015).

Symmetric Cryptosystem

Symmetric key algorithms rely on a “shared secret” encryption key which is distributed to all users who participate in the communication.  The key is used by all members both to encrypt and to decrypt messages: the sender encrypts with the shared key, and the receiver decrypts with the same shared key.  Symmetric encryption is difficult to break when a large key size is used.  Symmetric encryption is primarily used for bulk encryption and to meet the confidentiality goal.  Symmetric key cryptography is also called secret key cryptography or private key cryptography.  There are several common symmetric cryptosystems, such as the Data Encryption Standard (DES), Triple DES (3DES), the International Data Encryption Algorithm (IDEA), Blowfish, Skipjack, and the Advanced Encryption Standard (AES).  The advantage of the symmetric cryptosystem is that it operates at high speed; it is faster than the asymmetric approach (1,000 to 10,000 times faster) (Abernathy & McMillan, 2016; Connolly & Begg, 2015; Maiwald, 2001; Stewart et al., 2015).
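
A minimal Python sketch of shared-key operation, assuming the third-party pyca/cryptography package is available, is shown below; both encryption and decryption use the same secret, which is exactly why its distribution must be protected.

from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()          # the secret distributed to all participants
token = Fernet(shared_key).encrypt(b"confidential record")
plaintext = Fernet(shared_key).decrypt(token)
assert plaintext == b"confidential record"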

The symmetric cryptosystem has limitations.  The first limitation is key distribution: the members must implement a secure method of exchanging the shared secret key before establishing communication with the symmetric key protocol, and if a secure electronic channel does not exist, an offline key distribution method must be used.  The second limitation is the lack of support for non-repudiation, because sharing the same key makes it difficult to prove the source of a message.  The third limitation is that the algorithm does not scale well, since each pair of communicating users requires its own shared key.  The last limitation of the symmetric cryptosystem is the need to regenerate keys frequently (Abernathy & McMillan, 2016; Connolly & Begg, 2015; Stewart et al., 2015).

Asymmetric Cryptosystem

The asymmetric key algorithm, also known as the public key algorithm, provides a solution to the limitations of symmetric key encryption.  This system uses two keys: a public key which is shared with all users, and a private key which is secret and known only to the user.  If the public key encrypts the message, the private key can decrypt it; likewise, if the private key encrypts the message, the public key decrypts it.  The asymmetric key cryptosystem provides support for digital signature technology.  The advantages of the asymmetric key algorithm include the need to generate only one public-private key pair for each new user and the ability to remove users easily.  Moreover, key regeneration is required only when a user’s private key is compromised.  The asymmetric key algorithm provides integrity, authentication, and non-repudiation.  Key distribution is a simple process in the asymmetric key algorithm, and it does not require a pre-existing relationship to provide a secure mechanism for data exchange (Abernathy & McMillan, 2016; Connolly & Begg, 2015; Maiwald, 2001; Stewart et al., 2015).

The limitation of the asymmetric key cryptosystem is its slow speed of operation.  Thus, many applications which require the secure transmission of large volumes of data employ the public key cryptographic system to establish a connection and then exchange a symmetric secret key.  The remainder of the session then utilizes the symmetric cryptographic approach.  Figure 2 illustrates a comparison of symmetric and asymmetric cryptographic systems (Abernathy & McMillan, 2016; Connolly & Begg, 2015; Stewart et al., 2015).

Figure 2.  Comparison of Symmetric and Asymmetric Cryptographic Systems. Adapted from (Abernathy & McMillan, 2016; Stewart et al., 2015).

The Hashing Algorithm

The hashing algorithm produces message digests, which are summaries of the content of a message.  With an ideal hash function, it is infeasible to derive the original message from its digest, and two different messages are unlikely to produce the same hash value.  Some of the more common hashing algorithms in use today include Message Digest 2 (MD2), Message Digest 5 (MD5), the Secure Hash Algorithms (SHA-0, SHA-1, and SHA-2), and the Hashed Message Authentication Code (HMAC).  Unlike symmetric and asymmetric algorithms, the hashing algorithm is publicly known.  Hash functions operate in one direction only; they are not meant to be reversed.  The hashing algorithm ensures the integrity of the data, as it creates a number which is sent along with the data.  When the data reaches its destination, this number can be used to determine whether even a single bit has changed in the data by recalculating the hash value from the data which was received.  The hashing algorithm also helps in protecting against undetected corruption (Abernathy & McMillan, 2016; Connolly & Begg, 2015; Stewart et al., 2015).
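
The following short Python sketch illustrates this integrity check using SHA-256 from the standard library; the payload is an arbitrary placeholder.

import hashlib

payload = b"patient record 4711"
digest = hashlib.sha256(payload).hexdigest()   # the number sent along with the data

# On receipt, recomputing the digest detects even a single-bit change.
assert hashlib.sha256(payload).hexdigest() == digest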

Attacks Against Encryption

Encryption systems can be attacked in three ways: through weaknesses in the algorithm, through brute force against the key, or through a weakness in the surrounding system.  An attack on the algorithm exploits a weakness in the way the algorithm changes plaintext into ciphertext, so that the plaintext may be recovered without knowing the key; algorithms with weaknesses of this type are rarely considered reliable enough for use.  Brute-force attacks are attempts to use every possible key on the ciphertext to find the plaintext.  On average, 50% of the keys must be tried before finding the correct key.  The strength of the algorithm is then defined only by the number of keys that must be attempted; the longer the key, the larger the total number of keys and the larger the number of keys which must be tried until the correct key is found.  Brute-force attacks will always succeed eventually if enough time and resources are used.  Thus, algorithms should be measured by the length of time the information is expected to be protected even in the face of a brute-force attack.  An algorithm is considered computationally secure if the cost of acquiring the key through brute force is more than the value of the information being protected.  The last type of encryption attack, through weaknesses in the surrounding system, can involve, for example, keeping the key in a file that is protected by a password which is weak and can be guessed easily (Maiwald, 2001).
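
The following back-of-the-envelope Python sketch illustrates why key length dominates brute-force cost; the assumed rate of one trillion guesses per second is an arbitrary figure chosen only for illustration, not a claim about any particular attacker.

GUESSES_PER_SECOND = 1e12          # assumed attacker capability (illustrative only)
SECONDS_PER_YEAR = 3.156e7

for bits in (40, 56, 128, 256):
    average_tries = 2 ** bits / 2  # on average, half the keyspace must be searched
    years = average_tries / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:>3}-bit key: about {years:.3e} years on average")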

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Connolly, T., & Begg, C. (2015). Database Systems: A Practical Approach to Design, Implementation, and Management (6th Edition ed.): Pearson.

Maiwald, E. (2001). Network security: a beginner’s guide: McGraw-Hill Professional.

PCAST. (May 2014). Big Data and Privacy: A Technological Perspective.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.): Wiley.

Woo, T. Y., & Lam, S. S. (1992). Authentication for distributed systems. Computer, 25(1), 39-52.

Business Impact Analysis (BIA)

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to discuss and analyze the Business Impact Analysis (BIA) in providing useful information for the Business Continuity Plan (BCP) and a Disaster Recovery Plan (DRP).  The discussion begins with a brief overview of BCP and DRP, followed by the discussion and analysis of the BIA.

Business Continuity Plan (BCP)

Business Continuity Planning (BCP) involves the assessment of the risks to organizational processes and the development of policies, plans, and processes to minimize the impact of those risks if they occur.  Organizations must implement BCP to maintain the continuous operation of the business if any disaster occurs.  The BCP emphasizes keeping and maintaining business operations even with reduced or restricted infrastructure capabilities or resources.  The BCP can be used to manage and restore the environment.  If the continuity of the business is broken, then the business processes have ceased, and the organization is in disaster mode, which calls for Disaster Recovery Planning (DRP).  The top priority of BCP and DRP is always people: the main concern is to get people out of harm’s way, and only then can the organization address IT recovery and restoration issues (Abernathy & McMillan, 2016; Stewart, Chapple, & Gibson, 2015).  The BCP process involves four main steps to provide a quick, calm, and efficient response in the event of an emergency and to enhance the ability of the organization to recover from a disruptive event in a timely fashion.  These four steps include (1) Project Scope and Planning, (2) Business Impact Assessment, (3) Continuity Planning, and (4) Documentation and Approval (Stewart et al., 2015).

However, as indicated in (Abernathy & McMillan, 2016), NIST Special Publication (SP) 800-34 Revision 1 (R1) describes seven steps.  The first step involves the development of the contingency planning policy.  The second step involves the implementation of the Business Impact Analysis.  The third step is the identification of preventive controls.  The development of recovery strategies is the fourth step.  The fifth step involves the development of the BCP.  The sixth step involves testing, training, and exercises.  The last step is to maintain the plan.  Figure 1 summarizes these seven steps identified by NIST.

Figure 1.  A Summary of the Business Continuity Steps (Abernathy & McMillan, 2016).

Disaster Recovery Plan (DRP)

In case a disaster event occurs, the organization must have in place a strategy and plan to recover from such a disaster.  Organizations and businesses are exposed to various types of disasters; however, these disasters are categorized as either caused by nature or caused by humans.  Nature-related disasters include earthquakes, floods, storms, hurricanes, volcanoes, and fires.  Human-made disasters include fires caused intentionally, acts of terrorism, explosions, and power outages.  Other disasters can be caused by hardware and software failures, strikes and picketing, and theft and vandalism.  Thus, the organization must be prepared and ready to recover from any disaster.  Moreover, the organization must document the Disaster Recovery Plan and provide training to the personnel (Stewart et al., 2015).

Business Impact Analysis (BIA)

As defined in (Abernathy & McMillan, 2016), the BIA is a functional analysis which occurs as an element and component of Business Continuity and Disaster Recovery.  In (Srinivasan, 2016), the BIA is described as a type of risk assessment exercise which attempts to assess and evaluate the qualitative and quantitative impacts on the business due to a disruptive event.  The qualitative impacts are operational impacts, such as the ability to deliver, while the quantitative impacts are related to financial loss and are described in numeric monetary value (Srinivasan, 2016; Stewart et al., 2015).

Organizations should perform a detailed and thorough BIA to assist business units and operations in understanding the impact of a disaster.  The BIA should list the critical and required business functions, their resource dependencies, and their level of criticality to the overall organization.  The development of the BCP is based on the BIA, which assists the organization in understanding the impact of a disruptive event.  The BIA is a management-level analysis which identifies the impact of losing the resources of the organization.  The BIA involves four main steps.  The first step involves the identification of the critical processes and resources, followed by the identification of the outage impacts and the estimation of downtime.  The third step involves the identification of the resource requirements, followed by the last step, the identification of the recovery priorities.  The BIA relies on any vulnerability analysis and risk management which are completed and performed by the BCP committee or a separate task force team (Abernathy & McMillan, 2016).

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Srinivasan, M. (2016). CISSP in 21 Days: Packt Publishing Ltd.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.): Wiley.

The Ethics of Leaking Sensitive Information and How to Prevent it.

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to discuss and analyze the ethics of leaking sensitive information and methods to prevent such activities.  The discussion addresses the methods to prosecute people who leak sensitive information.  Moreover, the discussion addresses methods to detect these crimes and to collect evidence to assist in identifying who leaked the information and in the prosecution of those suspected of committing the cybercrime.

Sensitive Data and Data Classification

Sensitive data includes any information which is not supposed to be revealed to the public.  It can include confidential, proprietary, protected, or any other types of data which organizations need to protect due to their value to the organization or to comply with existing laws and regulations.  Data is classified from Class 0 to Class 3.  Class 0 represents unclassified, public information.  Class 1 represents sensitive and confidential information whose disclosure can cause damage.  Class 2 represents private and secret information whose disclosure can cause serious damage.  Class 3 represents top secret information whose disclosure can cause exceptionally grave damage.  Figure 1 illustrates this data classification from government and non-government perspectives, adapted from (Stewart, Chapple, & Gibson, 2015).

Figure 1.  Data Classification (Stewart et al., 2015).

An example of an attack on sensitive information is the Sony attack which took place in 2014.  As cited in (Stewart et al., 2015), the founder of Mandiant stated that “the scope of this attack differ from any we have responded to in the past, as its purpose was to both destroy property and release confidential information to the public.  The bottom line is that this was an unparalleled and well-planned crime, carried out by an organized group.”  The attackers obtained over 100 TB of data, including full-length versions of unreleased movies, salary information, and internal emails.  Some of this data was more valuable to the organization than other data.  Thus, security measures must be implemented to mitigate such attacks and protect any data in Class 1 through Class 3.

The organization must implement various security measures to protect sensitive and confidential data.  For instance, emails must be encrypted.  Encryption converts cleartext data into scrambled ciphertext and makes it more difficult to read.  Sensitive and confidential data must be managed to prevent data breaches.  A data breach is an event in which an unauthorized user can view or access sensitive or confidential data.  Sensitive and confidential data must also be marked so that it can be distinguished from other data, such as public data (Abernathy & McMillan, 2016; CSA, 2011; Stewart et al., 2015).

Organizations must handle sensitive and confidential data with care, and secure transportation of media through the lifetime of the sensitive data must be implemented.  An example of mishandling sensitive information is the United Kingdom Ministry of Defence, which in 2011 mistakenly released classified information on nuclear submarines along with other sensitive information in response to Freedom of Information requests, and then redacted the classified data by using image-editing software to black it out.  However, the damage was done, and the sensitive data was not handled properly.  Another example of mishandling sensitive data is the 2011 incident in which Science Applications International Corporation (SAIC), a government contractor, lost control of backup tapes which included personally identifiable information (PII) and protected health information (PHI) for 4.9 million patients.  SAIC personnel failed to implement the required HIPAA safeguards even though this information falls under HIPAA (CSA, 2011; Stewart et al., 2015).

Ethics, Data Leaks, and Criminal Act Investigation

Data leakage is a criminal activity which requires investigation.  In a criminal investigation, law enforcement personnel investigate an alleged violation of criminal law.  Criminal investigations may result in charging suspects with a crime and the prosecution of those charges in criminal court.  Most criminal cases must meet the “beyond a reasonable doubt” standard of evidence: the prosecution must demonstrate that the defendant committed the crime by presenting facts from which there are no other logical conclusions.  Thus, criminal investigations must follow very strict evidence collection and preservation processes.  Moreover, with respect to healthcare and the application of HIPAA, a regulatory investigation can be conducted by government agencies to investigate the violation of regulations such as HIPAA (CSA, 2011; Stewart et al., 2015).

The prosecuting attorney must provide sufficient evidence to prove the guilt of the person who committed the act.  There are three basic requirements for evidence to be admissible in court: the evidence must be relevant to determining a fact, the evidence must be material to the case, and the evidence must be competent, meaning it was obtained legally.  Evidence can take the form of real evidence, documentary evidence, or testimonial evidence (Stewart et al., 2015).

Forensic Procedures and Evidence Collection

The International Organization on Computer Evidence (IOCE) outlines six principles to guide digital evidence technicians as they perform media analysis, network analysis, and software analysis in the pursuit of forensically recovered evidence.  The first principle indicates that all of the general forensic and procedural principles must be applied when dealing with digital evidence.  The second principle indicates that actions taken when seizing digital evidence should not change that evidence.  The third principle indicates that any person who needs to access original digital evidence should be trained for that purpose.  The fourth principle indicates that all activities relating to the seizure, access, storage, or transfer of digital evidence must be fully documented, preserved, and available for review.  The fifth principle indicates that an individual is responsible for all actions taken concerning digital evidence while the digital evidence is in their possession.  The last principle indicates that any agency that is responsible for seizing, accessing, storing, or transferring digital evidence is responsible for compliance with these principles (Stewart et al., 2015).

Various forms of forensic analysis are conducted when sensitive data is leaked.  Media analysis involves the identification and extraction of information from storage media, including magnetic media, optical media, and memory such as RAM and solid-state storage.  Network analysis involves examining the activities which took place over the network during a security incident.  Network forensic analysis often depends on either prior knowledge that an incident is underway or the use of pre-existing security controls which log network activity, including intrusion detection and prevention system logs, network flow data captured by a flow monitoring system, and logs from firewalls.   Software forensic analysis includes forensic reviews of applications or the activity which takes place within a running application.  In some cases, when malicious insiders are suspected, the forensic analysis can include a review of software code, looking for back doors, logic bombs, or other security vulnerabilities.   Hardware and embedded device analysis includes the review of the contents of hardware and embedded devices such as personal computers, smartphones, tablets, embedded computers in cars, and other devices (Stewart et al., 2015).
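As a simple illustration of network forensic analysis based on pre-existing logs, the sketch below (an illustrative example only; the log format, field names, addresses, and time window are assumptions) scans a firewall connection log for traffic to a known-suspicious address during an incident window:

import csv
from datetime import datetime

# Hypothetical firewall log with columns: timestamp, src_ip, dst_ip, dst_port, action
SUSPICIOUS_HOSTS = {"203.0.113.45"}              # example indicator of compromise
WINDOW_START = datetime(2018, 5, 1, 0, 0)
WINDOW_END = datetime(2018, 5, 2, 0, 0)

def find_suspicious(log_path):
    """Return log rows inside the incident window that contact a suspicious host."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
            if WINDOW_START <= ts <= WINDOW_END and row["dst_ip"] in SUSPICIOUS_HOSTS:
                hits.append(row)
    return hits

# for hit in find_suspicious("firewall.csv"):
#     print(hit["src_ip"], "->", hit["dst_ip"], hit["dst_port"])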

In summary, data can be leaked by insiders as well as by outsiders who gain illegal access to sensitive and confidential information.  These acts are criminal acts, and they require evidence before they are allowed in court.  Various types of evidence are required, and the appropriate forensic analyses must be conducted to review and analyze the cause of such a leak.  Organizations must pay attention not only to outsiders but also to insiders.

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

CSA. (2011). Security guidance for critical areas of focus in cloud computing v2.1. Cloud Security Alliance, v3.0, 1-76.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)2 Certified Information Systems Security Professional Official Study Guide (7th ed.): Wiley.

Performance and Security Relationship

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze the relationship between performance and security and the impact of security implementation on performance. The discussion also analyzes the balance between security and performance needed to provide good operational results in both categories.  The discussion begins with the characteristics of the distributed environment, including databases, to provide a good understanding of the complexity of the distributed environment and the influential factors acting on the distributed system.  It then analyzes the security challenges in the distributed system and the negative correlation between security and performance in the distributed system.

Distributed Environment Challenges

A distributed system involves components located at networked computers that communicate and coordinate their actions only by passing messages.  The distributed system is characterized by concurrency of components, the lack of a global clock, and independent failures of components.   The challenges of the distributed system arise from the heterogeneity of the system components, openness to allow components to be added or replaced, security, scalability, failure handling, concurrency of components, transparency, and providing quality of service (Coulouris, Dollimore, & Kindberg, 2005).

Examples of distributed systems include web search, whose task is to index the entire content of the World Wide Web, covering a wide range of information types and styles including web pages, multimedia sources, and scanned books.  Massively multiplayer online games (MMOGs) are another example of a distributed system; users interact through the Internet with a persistent virtual world.  The financial trading market is another example of a distributed system, using real-time access to a wide range of information sources such as current share prices and trends and economic and political developments (Coulouris et al., 2005).

Influential Factors in Distributed Systems

The distributed system is going through significant changes due to several trends.  The first influential trend in the distributed system is the emergence of pervasive networking technology.  The emergence of ubiquitous computing, coupled with the desire to support user mobility in distributed systems, is another factor impacting the distributed system.  The increasing demand for multimedia services is another influential trend in the distributed system.  The last influential trend is the view of distributed systems as a utility.  All these trends have a significant impact on the distributed system.

Security Challenges in Distributed Systems

Security is among the chief challenges in the distributed system.  Many of the information resources stored in a distributed system have a high value to their users, so the security of such information is critically important.  Information security involves confidentiality to protect against disclosure to unauthorized users, integrity to protect against alteration or corruption, and availability to protect against interference with the means of accessing the resources.  Security must therefore comply with the CIA triad of Confidentiality, Integrity, and Availability (Abernathy & McMillan, 2016; Coulouris et al., 2005; Stewart, Chapple, & Gibson, 2015).  Security risks are associated with allowing access to resources in an intranet within the organization.  Although firewalls can be used to form barriers between departments within the intranet, restricting access to authorized users only, the proper use of the resources by users within the intranet and on the Internet cannot be ensured or guaranteed.

In the distributed system, users send requests to access data managed by the server, which involves sending information in messages over a network.  For example, a user may send credit card information in electronic commerce or banking, or a doctor may request access to a patient’s information.  The challenge is to send sensitive information in a message over a network in a secure manner and, moreover, to ensure that the recipient is the right user.  Such challenges can be met by using security techniques such as encryption.  However, there are two security challenges which have not been fully resolved yet: Denial of Service (DoS) and the security of mobile code.  A DoS attack occurs when the service is disrupted and users cannot access their data.  Currently, DoS attacks are countered by attempting to catch and punish the perpetrators after the event, which is a reactive rather than a proactive solution.  The security of mobile code is another open challenge; for example, executable content received over the network might be a source of DoS or of unauthorized access to local resources (Coulouris et al., 2005).
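As a minimal sketch of one such encryption technique (an assumption-laden example using the third-party Python cryptography package; real deployments would also need secure key exchange and recipient authentication, for instance via TLS and certificates, which are not shown), a sensitive message can be encrypted before it is sent over the network:

# Minimal sketch: symmetric encryption of a sensitive message before transmission.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, exchanged and stored securely
cipher = Fernet(key)

plaintext = b"credit card: 4111 1111 1111 1111"
token = cipher.encrypt(plaintext)    # ciphertext safe to send over the network

# Receiver side (holding the same key):
recovered = Fernet(key).decrypt(token)
assert recovered == plaintext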

Negative Correlation between Security and Performance

The performance challenges of the distributed system emerge from the more complex algorithms required for the distributed environment than for the centralized system.  The complexity of the algorithms emerges from the requirements of replicated database systems, a fully interconnected network, network delays represented by simplistic queuing models, and so forth.   Security is one of the most important issues in the distributed system, and it requires layers of security measures to protect the system from intruders.  These layers of protection have a negative impact on the performance of the distributed environment. Moreover, data and information in transit or storage become vulnerable to attacks.  There are four types of storage systems: server-attached Redundant Array of Independent Disks (RAID), centralized RAID, Network Attached Storage (NAS), and Storage Area Network (SAN).  NAS and SAN have different performance characteristics because they use different techniques for transferring the data.  NAS uses the TCP/IP protocol to transfer data across multiple devices, while SAN uses SCSI over Fibre Channel.  Thus, NAS can be implemented on any physical network supporting TCP/IP, such as Ethernet, FDDI, or ATM, whereas SAN can be implemented only on Fibre Channel.  SAN has better performance than NAS because TCP has higher overhead and SCSI is faster than the TCP/IP network (Firdhous, 2012).
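The performance cost of a security layer can also be measured directly.  The sketch below (illustrative only, using Python's standard hashlib and timeit modules; the buffer size and repetition count are arbitrary assumptions) compares copying a buffer with copying plus hashing it, as a rough proxy for the overhead that an integrity-protection layer adds to a storage or transfer path:

import hashlib
import timeit

data = b"x" * (10 * 1024 * 1024)     # 10 MB test buffer

def plain_copy():
    return bytes(data)                # baseline: no security processing

def copy_with_hash():
    digest = hashlib.sha256(data).hexdigest()   # integrity layer added
    return bytes(data), digest

baseline = timeit.timeit(plain_copy, number=20)
secured = timeit.timeit(copy_with_hash, number=20)
print(f"baseline: {baseline:.3f}s, with hashing: {secured:.3f}s, "
      f"overhead: {100 * (secured - baseline) / baseline:.1f}%")

The same pattern can be applied to encryption, authentication, or logging layers to quantify the negative correlation between security and performance in a given environment.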

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Coulouris, G. F., Dollimore, J., & Kindberg, T. (2005). Distributed systems: concepts and design: Pearson education.

Firdhous, M. (2012). Implementation of security in distributed systems-a comparative study. arXiv preprint arXiv:1211.2032.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)2 Certified Information Systems Security Professional Official Study Guide (7th ed.): Wiley.

Intrusion Detection and Prevention Systems

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze the types of devices and methods which should be implemented and employed in an enterprise, and the reasons for such devices and methods.  The discussion also addresses the location of these devices within the network to provide intrusion detection.

Intrusion Detection System (IDS)

The IDS is a system which is responsible for detecting unauthorized access or attacks against systems and networks.  An IDS can verify, itemize, and characterize threats from outside and inside the network.  Most IDSs are programmed to react in certain ways in specific situations.  Event notification and alerts are critical to the IDS; they inform administrators and security professionals when and where attacks are detected (Abernathy & McMillan, 2016).

The most common method of classifying an IDS is based on its information source: network-based (NIDS) or host-based (HIDS).  The NIDS is the most common type of IDS; it monitors the network traffic on a local network segment.  The network interface card must be operating in promiscuous mode to monitor the traffic on the network segment.  The NIDS can only monitor the network traffic; it cannot monitor internal activity which occurs within a system, such as an attack against a system carried out by logging on to its local terminal.  The NIDS is affected by a switched network because the NIDS monitors only a single network segment (Abernathy & McMillan, 2016).
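As a rough illustration of how an NIDS passively observes a segment, the sketch below uses the third-party scapy library (an assumption; production deployments would use tools such as Snort or Suricata) to capture packets on one interface and count traffic per source address.  Capturing typically requires administrative privileges, and the interface name is an example:

# Illustrative NIDS-style packet counting with scapy (pip install scapy).
# Requires root/administrator privileges; the interface name is an example.
from collections import Counter
from scapy.all import sniff, IP

per_source = Counter()

def observe(packet):
    if IP in packet:
        per_source[packet[IP].src] += 1

sniff(iface="eth0", prn=observe, store=False, count=1000)  # capture 1000 packets
print(per_source.most_common(5))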

The HIDS monitors traffic on a single system.  The primary role of the HIDS is to protect the system on which it is installed.  The HIDS uses information from the operating system audit trails and system logs.  The detection capabilities of the HIDS are limited by how complete the audit logs and system logs are (Abernathy & McMillan, 2016).
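A minimal host-based example is to scan the system's authentication log for repeated failed logins, as sketched below (the log path, line format, and threshold are assumptions based on a typical Linux auth log; a real HIDS would be far more complete):

import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(log_path="/var/log/auth.log", threshold=5):
    """Flag source addresses with at least `threshold` failed logins."""
    counts = Counter()
    with open(log_path, errors="ignore") as f:
        for line in f:
            match = FAILED.search(line)
            if match:
                counts[match.group(2)] += 1
    return {src: n for src, n in counts.items() if n >= threshold}

# print(failed_logins())   # e.g. {'198.51.100.7': 42}

The completeness of such detection is limited by the completeness of the underlying audit and system logs, as noted above.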

The implementation of IDS is divided into four categories.  The first category is the signature-based IDS, which analyzes traffic and compares it to attack or state patterns that reside within the IDS database.  The signature-based IDS is also referred to as a misuse-detection system.  This type of IDS is popular despite the fact that it can only recognize attacks that match its database and is only as effective as the signatures provided, so it requires frequent updates.  The signature-based IDS has two types: pattern matching and stateful matching.  The pattern-matching signature-based IDS compares traffic to a database of attack patterns and carries out specific steps when it detects traffic which matches an attack pattern.  The stateful-matching signature-based IDS records the initial operating system state; any changes to the system state which violate the defined rules result in an alert or notification being sent (Abernathy & McMillan, 2016).
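The pattern-matching approach can be sketched as a lookup of observed payloads against a signature database, as in the hypothetical example below (the signature names and patterns are illustrative only, not drawn from any real product):

import re

# Hypothetical signature database: name -> pattern of known-bad content.
SIGNATURES = {
    "directory-traversal": re.compile(rb"\.\./\.\./"),
    "sql-injection":       re.compile(rb"(?i)union\s+select"),
    "suspicious-shell":    re.compile(rb"/bin/sh\b"),
}

def match_signatures(payload: bytes):
    """Return the names of all signatures that the payload matches."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

alerts = match_signatures(b"GET /page?id=1 UNION SELECT password FROM users")
print(alerts)   # ['sql-injection']

The sketch also makes the stated limitation visible: an attack with no matching entry in SIGNATURES produces no alert, which is why signature databases require frequent updates.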

The anomaly-based IDS is another type of IDS, which analyzes traffic and compares it to normal traffic to determine whether said traffic is a threat.  This type of IDS is also referred to as a behavior-based or profile-based system.  The limitation of this type of IDS is that any traffic outside of expected norms is reported, resulting in more false positives than a signature-based IDS.  There are three types of anomaly-based IDS: statistical anomaly-based, protocol anomaly-based, and traffic anomaly-based.  The statistical anomaly-based IDS samples the live environment to record activities; the longer the IDS is in operation, the more accurate the profile that will be built.  However, developing a profile which will not produce a large number of false positives can be difficult and time-consuming.  The threshold for activity deviation is important in this IDS: when the threshold is too low, the result is false positives, and when the threshold is too high, the result is false negatives.  The protocol anomaly-based IDS knows the protocols which it will monitor; a profile of normal usage is built and compared to activity.  The last type of anomaly-based IDS is the traffic anomaly-based IDS, which tracks traffic pattern changes; using this type allows all future traffic patterns to be compared to the sample (Abernathy & McMillan, 2016).
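The statistical anomaly-based idea can be sketched as learning a baseline of normal activity and alerting when current activity deviates from it by more than a configurable threshold, as below (the metric, requests per minute, the baseline values, and the threshold are assumptions for illustration):

import statistics

# Baseline learned during normal operation (requests per minute, illustrative values).
baseline = [120, 132, 118, 125, 130, 122, 127, 119, 124, 129]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed, threshold=3.0):
    """Alert when the observation deviates more than `threshold` standard deviations."""
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(126))   # False: within normal variation
print(is_anomalous(480))   # True: likely anomaly (possible flood or scan)

Lowering the threshold value makes the detector alert on smaller deviations and produces more false positives, while raising it risks false negatives, mirroring the trade-off described above.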

The rule-based or heuristic-based IDS is another type of IDS, described as an expert system using a knowledge base, inference engine, and rule-based programming.  The knowledge is configured as rules.  The traffic and the data are analyzed, and the rules are applied to the analyzed traffic.  The inference engine uses its intelligent software to learn, and if the characteristics of an attack are discovered and met, alerts or notifications are triggered.  This IDS type is also referred to as an IF/THEN or expert system. The last type of IDS is the application-based IDS, which analyzes transaction log files for a single application.  This type of IDS is provided as part of the application or can be purchased as an add-on.

Additional tools can be employed to complement an IDS, such as vulnerability analysis systems, honeypots, and padded cells. Honeypots are systems which are configured with reduced security to entice attackers so that administrators can learn about attack techniques.  Padded cells are special hosts to which an attacker is transferred during an attack.

An IDS monitors system behavior and alerts on potentially malicious network traffic.  It can be set inline, attached to a spanning port of a switch, or make use of a hub in place of a switch.  The underlying concept is to give the IDS access to all packets that are required to be monitored.  Tuning an IDS is important because it is a balancing act among four event categories: true positive, false positive, true negative, and false negative. Table 1 shows the relationship between these categories, adapted from (Robel, 2015).

Table 1.  Relationship of Event Categories (Robel, 2015).

The ideal IDS tuning maximizes instances of events categorized in the cells with a shaded background. True positives occur when the system alerts on intrusion attempts or other malicious activity, while true negatives represent a null situation (benign traffic that correctly produces no alert) but are important nonetheless.  False negatives occur when the system fails to alert on malicious traffic, while false positives are alerts on benign activity.  There are a few methods to connect an IDS to capture and monitor traffic, since the IDS needs to collect network traffic for analysis. Three main methods can be applied: an IDS using a hub or switch spanning port, an IDS using a network tap, and an IDS connected inline.  Figure 1 illustrates the IDS on the edge of a network or zone (Robel, 2015).

Figure 1.  IDS on the Edge of a Network or Zone. Adapted from (Robel, 2015)
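The trade-off among the four event categories discussed above can also be quantified; the short sketch below (with illustrative counts only, not measured data) computes the detection rate and false-positive rate that tuning tries to balance:

# Illustrative counts from a hypothetical tuning exercise (not real data).
true_positive = 90    # alerts on malicious traffic
false_negative = 10   # malicious traffic that produced no alert
false_positive = 40   # alerts on benign traffic
true_negative = 9860  # benign traffic correctly ignored

detection_rate = true_positive / (true_positive + false_negative)
false_positive_rate = false_positive / (false_positive + true_negative)
precision = true_positive / (true_positive + false_positive)

print(f"detection rate:      {detection_rate:.1%}")
print(f"false positive rate: {false_positive_rate:.2%}")
print(f"precision:           {precision:.1%}")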

Intrusion Prevention System (IPS)

The IPS is responsible for preventing attacks. When an attack begins, the IPS takes action to prevent and contain it.  The IPS can be either network-based or host-based.  An IPS can also be signature-based, anomaly-based, or rate-based, the last of which analyzes the volume and type of traffic.  An IPS is more costly than an IDS because of the added security of preventing attacks versus merely detecting them.  Moreover, running an IPS imposes more of an overall performance load than running an IDS (Abernathy & McMillan, 2016).

A firewall is commonly used to provide a layer of security. However, the firewall has a limitation: most firewalls can only block based on IP addresses or ports.  In contrast, a Network Intrusion Prevention System (NIPS) can use signatures designed to detect and defend against specific attacks such as DoS.  This feature is advantageous for sites hosting web servers.  IPSs have also been known to block buffer-overflow-type attacks and can be configured to report on network scans, which typically signal a potential attack.  An advanced usage of an IPS may not drop malicious packets but rather redirect specific attacks to a honeypot (Robel, 2015).

The IPS is connected inline.  This inline placement enables the IPS to drop selected packets and defend against an attack before it takes hold of the internal network.  An IPS connected inline to capture traffic is illustrated in Figure 2, adapted from (Robel, 2015).

Figure 2. IPS on the border of a network or zone (Robel, 2015).
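Because the IPS sits inline, it can make a forward-or-drop decision per packet rather than only raising an alert.  The pure-Python sketch below illustrates that decision logic in isolation (the block patterns are illustrative assumptions, and actual packet delivery would be handled by the network stack or a framework such as netfilter, which is not shown):

import re

BLOCK_PATTERNS = [
    re.compile(rb"(?i)union\s+select"),   # illustrative SQL-injection signature
    re.compile(rb"\x90{20,}"),            # long NOP sled, a crude buffer-overflow hint
]

def ips_decision(payload: bytes) -> str:
    """Return 'DROP' for payloads matching a block pattern, otherwise 'FORWARD'."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(payload):
            return "DROP"
    return "FORWARD"

print(ips_decision(b"GET /index.html HTTP/1.1"))                  # FORWARD
print(ips_decision(b"GET /item?id=1 UNION SELECT * FROM users"))  # DROP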

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Robel, D. (2015). SANS Institute InfoSec Reading Room.

Cyber Warfare and Cyber Terrorism

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze cyber warfare and cyber terrorism.  The discussion addresses the damage that could be done to the government, companies, and individuals in the United States if we were attacked by a foreign government using cyber warfare or cyber terrorism.  The discussion also considers whether the United States is prepared for such a scenario.

Cyber Warfare and Cyber Terrorism

The term cyberterrorism was coined in 1996 by combining the terms cyberspace and terrorism.  The term has since become widely accepted after being embraced by the United States Armed Forces.  In 1998, the Center for Strategic and International Studies produced a report entitled Cybercrime, Cyberterrorism, Cyberwarfare: Averting an Electronic Waterloo.  In this report, the probabilities of these activities affecting a nation were discussed, followed by a discussion of the potential outcomes of such attacks and methods to limit the likelihood of such events (Janczewski, 2007).

As defined in (Janczewski, 2007), the term cyberterrorism “means premeditated, politically motivated attacks by subnational groups or clandestine agents, or individuals against information and computer systems, computer programs, and data that result in violence against non-combatant targets.”

Cyber attacks are usually observed after physical attacks.  A good example is the increased wave of cyber attacks observed after the downing of an American plane near the coast of China, when cyber attacks from both countries began against facilities of the other side.  Other examples include the cyber attacks throughout the Israeli/Palestinian conflict, and during the Balkans War and the collapse of Yugoslavia.  Moreover, cyber attacks are aimed at targets of high publicity value; favorite targets are top IT and transportation companies such as Microsoft, Boeing, and Ford. These increases in cyber attacks have clear political/terrorist foundations, and the available statistics indicate that each of the previously mentioned conflicts resulted in a steady increase in cyber attacks.  For instance, attacks by Chinese hackers and attacks during the Israeli/Palestinian conflict show a pattern of phased escalation (Janczewski, 2007).

Building protection against cyber attacks requires understanding the reasons for such attacks in order to reduce and eliminate them.  The most probable reasons for cyber attacks include a fear factor, a spectacular factor, and a vulnerability factor.  The fear factor is the common denominator of the majority of terrorist attacks, because the attacker desires to create fear in individuals, groups, or societies.  The spectacular factor reflects attacks that aim either at creating huge direct losses and/or at generating a lot of negative publicity.  An example is the Amazon.com site, which was closed for some time due to a Denial of Service (DoS) attack in 1999; as a result, Amazon incurred losses due to suspended trading, and the publicity the attack created was widespread.  The vulnerability factor covers cyber activities which do not always end with huge financial losses.  Some of the most effective ways to demonstrate the vulnerability of an organization are to cause a denial of service to a commercial server or something as simple as the defacement of the organization's web pages, very often referred to as computer graffiti (Janczewski, 2007).

Cyber attacks include virus and worm attacks, which can be delivered through email attachments, web browser scripts, and vulnerability exploit engines.  They can also include Denial of Service (DoS) attacks designed to prevent the use of public systems by legitimate users by overloading the normal mechanisms inherent in establishing and maintaining computer-to-computer connections.  Cyber attacks can also include web defacements of informational sites which serve governmental and commercial interests in order to spread disinformation and propaganda and/or disrupt information flows.  Unauthorized intrusion into systems is another form of cyber attack, which leads to the theft of confidential and/or proprietary information, the modification and/or corruption of data, and the inappropriate use of a system for launching attacks on other systems (Janczewski, 2007).

Cyber terrorist attacks are used to cause disruptions.  They come in two forms: attacks against data and attacks against control systems.  Theft and corruption of data lead to services being sabotaged, and this is the most common form of Internet and computer attack.  Control system attacks are used to disable or manipulate physical infrastructure such as railroads, electrical networks, water supplies, and so forth. An example is the incident in Australia in March 2000, in which an employee who could not secure full-time employment used the Internet to release one million liters of raw sewage into the river and coastal waters in Queensland.

Potential Impact and Defenses and Fortifications

Cyber attacks and cyber terrorism have negative impacts and consequences for the nation.  These consequences may include loss of life, significant damage to property, serious adverse U.S. foreign policy consequences, or serious economic impact on the United States (DoD, 2015). The preparation of a program of activities aimed at setting up effective defenses against potential threats plays a key role in mitigating the impact of such attacks.  These fortifications include physical defenses, system defenses, personnel defenses, and organizational defenses.   The physical defenses are required to control physical access to facilities. The system defenses are required to limit the capability to make unauthorized changes to data in storage or transit.  The personnel defenses are required to limit the chances of inappropriate staff behavior.  The organizational defenses are required to create and implement an information security plan.  Table 1 summarizes these defenses (Janczewski, 2007).

Table 1.  Summary of Required Defenses.

In summary, cyber attacks and cyber terrorism have a negative impact on the nation.  The government and organizations must prepare the appropriate defenses to mitigate and alleviate such negative impact.  These defenses include physical, system, personnel, and organizational defenses.

References

DoD. (2015). The DOD Cyber Strategy. Retrieved from https://www.defense.gov/Portals/1/features/2015/0415_cyber-strategy/Final_2015_DoD_CYBER_STRATEGY_for_web.pdf.

Janczewski, L. (2007). Cyber warfare and cyber terrorism: IGI Global.