The Ethics of Leaking Sensitive Information and How to Prevent It

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze the ethics of leaking sensitive information and methods to prevent such activities.  The discussion addresses methods to prosecute people who leak sensitive information.  Moreover, it addresses methods to detect these crimes and to collect evidence that assists in identifying who leaked the information and in prosecuting those suspected of committing cybercrime.

Sensitive Data and Data Classification

Sensitive data includes any information that is not supposed to be revealed to the public.  It can include confidential, proprietary, protected, or any other type of data that an organization needs to protect, either because of its value to the organization or to comply with existing laws and regulations.  Data is classified from Class 0 to Class 3.  Class 0 represents unclassified public information.  Class 1 represents sensitive and confidential information whose disclosure can cause damage.  Class 2 represents private and secret information whose disclosure can cause serious damage.  Class 3 represents top secret information whose disclosure can cause exceptionally grave damage.  Figure 1 illustrates this data classification from government and non-government perspectives, adapted from Stewart, Chapple, and Gibson (2015).

Figure 1.  Data Classification (Stewart et al., 2015).

An example of an attack on sensitive information is the Sony attack, which took place in 2014.  As cited in Stewart et al. (2015), the founder of Mandiant stated that “the scope of this attack differs from any we have responded to in the past, as its purpose was to both destroy property and release confidential information to the public.  The bottom line is that this was an unparalleled and well-planned crime, carried out by an organized group.”  The attackers obtained over 100 TB of data, including full-length versions of unreleased movies, salary information, and internal emails.  Some of this data was more valuable to the organization than other data.  Thus, security measures must be implemented to mitigate such attacks against any data in Class 1 through Class 3.

Organizations must implement various security measures to protect sensitive and confidential data.  For instance, emails must be encrypted.  Encryption converts cleartext data into scrambled ciphertext, making it unreadable to anyone without the decryption key.  Sensitive and confidential data must be managed to prevent data breaches.  A data breach is an event in which an unauthorized user views or accesses sensitive or confidential data.  Sensitive and confidential data must also be marked so that it can be distinguished from other data, such as public data (Abernathy & McMillan, 2016; CSA, 2011; Stewart et al., 2015).
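The conversion of cleartext into ciphertext described above can be sketched with a toy example.  The XOR stream below is illustrative only (real deployments would use a vetted cipher such as AES); the message and key are hypothetical.

```python
# Toy illustration of encryption: cleartext XORed with a repeating key
# becomes scrambled ciphertext; applying the same operation reverses it.
# (Illustrative only -- real systems use vetted ciphers such as AES.)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

cleartext = b"Salary figures attached."   # hypothetical email body
key = b"secret-key"                       # hypothetical shared key

ciphertext = xor_cipher(cleartext, key)
assert ciphertext != cleartext                      # scrambled, unreadable
assert xor_cipher(ciphertext, key) == cleartext     # decryption recovers it
```

The round trip shows the essential property the paragraph relies on: without the key, the ciphertext reveals nothing useful; with it, the original data is recovered exactly.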

Organizations must handle sensitive and confidential data with care, and secure transportation of media must be maintained throughout the lifetime of the sensitive data.  An example of mishandling sensitive information occurred in 2011, when the Ministry of Defence in the United Kingdom mistakenly released classified information on nuclear submarines in response to Freedom of Information requests.  It had attempted to redact the classified data by using image-editing software to black it out, but the underlying text remained recoverable; the damage was done, and the sensitive data was not handled properly.  Another example of mishandling sensitive data is the 2011 incident involving Science Applications International Corporation (SAIC), a government contractor that lost control of backup tapes containing personally identifiable information (PII) and protected health information (PHI) for 4.9 million patients.  This information falls under HIPAA, but SAIC personnel did not implement the required HIPAA safeguards (CSA, 2011; Stewart et al., 2015).

Ethics, Data Leaks, and Criminal Act Investigation

A data leak is a criminal activity that requires investigation.  In a criminal investigation, law enforcement personnel investigate an alleged violation of criminal law.  Criminal investigations may result in charging suspects with a crime and prosecuting those charges in criminal court.  Most criminal cases must meet the “beyond a reasonable doubt” standard of evidence: the prosecution must demonstrate that the defendant committed the crime by presenting facts from which there is no other logical conclusion.  Thus, criminal investigations must follow very strict evidence collection and preservation processes.  Moreover, with respect to healthcare and the application of HIPAA, a regulatory investigation can be conducted by government agencies to investigate violations of regulations such as HIPAA (CSA, 2011; Stewart et al., 2015).

The prosecuting attorney must provide sufficient evidence to prove the guilt of the person who committed the act, and the evidence itself must meet certain standards before it is allowed in court.  Evidence that meets these standards is called “admissible evidence,” and there are three basic requirements.  The evidence must be relevant to determining a fact.  The evidence must be material to the case.  The evidence must be competent, meaning it must have been obtained legally.  Evidence can take the form of real evidence, documentary evidence, or testimonial evidence (Stewart et al., 2015).

Forensic Procedures and Evidence Collection

The International Organization on Computer Evidence (IOCE) outlines six principles to guide digital evidence technicians as they perform media analysis, network analysis, and software analysis in pursuit of forensically recovered evidence.  The first principle indicates that all general forensic and procedural principles must be applied when dealing with digital evidence.  The second principle indicates that, upon seizing digital evidence, actions taken should not change that evidence.  The third principle indicates that when it is necessary for a person to access original digital evidence, that person should be trained for the purpose.  The fourth principle indicates that all activities relating to the seizure, access, storage, or transfer of digital evidence must be fully documented, preserved, and available for review.  The fifth principle indicates that an individual is responsible for all actions taken concerning digital evidence while it is in their possession.  The last principle indicates that any agency responsible for seizing, accessing, storing, or transferring digital evidence is responsible for compliance with these principles (Stewart et al., 2015).

Various types of forensic analysis are conducted when sensitive data is leaked.  Media analysis involves the identification and extraction of information from storage media, including magnetic media, optical media, and memory such as RAM and solid-state storage.  Network analysis examines the activities that took place over the network during a security incident.  Network forensic analysis often depends on either prior knowledge that an incident is underway or the use of pre-existing security controls that log network activity, including intrusion detection and prevention system logs, network flow data captured by a flow monitoring system, and firewall logs.  Software forensic analysis includes forensic reviews of applications or of the activity that takes place within a running application.  In some cases, when malicious insiders are suspected, the forensic analysis can include a review of software code, looking for back doors, logic bombs, or other security vulnerabilities.  Hardware and embedded device analysis includes reviewing the contents of hardware and embedded devices such as personal computers, smartphones, tablets, embedded computers in cars, and other devices (Stewart et al., 2015).
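A core step underlying all of these analyses is preserving evidence integrity, per the IOCE principles above: a cryptographic hash of the seized media is recorded before examination, and re-hashing later proves the evidence was not altered.  A minimal sketch (the stand-in file and its contents are hypothetical):

```python
import hashlib
import os
import tempfile

def evidence_hash(path: str) -> str:
    """Return the SHA-256 digest of a seized media image, read in chunks
    so that even very large images can be hashed without loading them
    entirely into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstrate on a stand-in file (a real case would hash the seized image).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"raw disk image contents")
    path = f.name

digest_at_seizure = evidence_hash(path)   # recorded in the case documentation
assert evidence_hash(path) == digest_at_seizure  # later re-check: unchanged
os.unlink(path)
```

Any byte-level change to the image would produce a different digest, flagging that the evidence was modified while in custody.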

In summary, data can be leaked by insiders as well as by outsiders who gain illegal access to sensitive and confidential information.  These acts are criminal acts, and they require evidence that is admissible in court.  Various types of evidence are required, and various types of forensic analysis must be conducted to review and analyze the cause of such a leak.  Organizations must pay attention not only to outsiders but also to insiders.

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

CSA. (2011). Security guidance for critical areas of focus in cloud computing, v3.0. Cloud Security Alliance, 1-76.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.). Wiley.

Performance and Security Relationship

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze the relationship between performance and security and the impact of security implementation on performance.  The discussion also analyzes the balance between security and performance needed to provide good operational results in both categories.  It begins with the characteristics of a distributed environment, including a database, to give a good understanding of the complexity of the distributed environment and the factors that influence distributed systems.  The discussion then analyzes the security challenges in distributed systems and the negative correlation between security and performance in such systems.

Distributed Environment Challenges

A distributed system involves components located at networked computers that communicate and coordinate their actions only by passing messages.  A distributed system exhibits concurrency of components, lack of a global clock, and independent failures of components.  The challenges of distributed systems arise from the heterogeneity of the system components; openness, to allow components to be added or replaced; security; scalability; failure handling; concurrency of components; transparency; and providing quality of service (Coulouris, Dollimore, & Kindberg, 2005).

Examples of distributed systems include Web search, whose task is to index the entire content of the World Wide Web, covering a wide range of information types and styles including web pages, multimedia sources, and scanned books.  Massively multiplayer online games (MMOGs) are another example of a distributed system; through MMOGs, users interact over the Internet with a persistent virtual world.  The financial trading market is a further example, using real-time access to a wide range of information sources such as current share prices and trends and economic and political developments (Coulouris et al., 2005).

Influential Factors in Distributed Systems

Distributed systems are going through significant changes due to several trends.  The first influential trend is the emergence of pervasive networking technology.  The emergence of ubiquitous computing, coupled with the desire to support user mobility in distributed systems, is another factor.  The increasing demand for multimedia services is a third influential trend.  The last influential trend is the view of distributed systems as a utility.  All of these trends have a significant impact on distributed systems.

Security Challenges in Distributed Systems

Security is among the main challenges in distributed systems.  Many of the information resources stored in a distributed system have a high value to their users, so the security of that information is critically important.  Information security involves confidentiality, to protect against disclosure to unauthorized users; integrity, to protect against alteration or corruption; and availability, to protect against interference with the means of accessing the resources.  Security must address the CIA triad of confidentiality, integrity, and availability (Abernathy & McMillan, 2016; Coulouris et al., 2005; Stewart, Chapple, & Gibson, 2015).  Security risks are associated with allowing access to resources in an intranet within the organization.  Although firewalls can be used to form barriers between departments around the intranet, restricting access to authorized users only, the proper use of resources by users within the intranet and on the Internet cannot be ensured or guaranteed.

In a distributed system, users send requests to access data managed by a server, which involves sending information in messages over a network.  For example, a user may send credit card information in electronic commerce or banking, or a doctor may request access to a patient’s information.  The challenge is to send sensitive information in a message over a network in a secure manner, and to ensure that the recipient is the right user.  Such challenges can be met by using security techniques such as encryption.  However, there are two security challenges that have not yet been fully resolved: denial of service (DoS) and the security of mobile code.  A DoS attack occurs when a service is disrupted and users cannot access their data.  Currently, DoS attacks are countered by attempting to catch and punish the perpetrators after the event, which is a reactive rather than a proactive solution.  The security of mobile code is another open challenge: for example, code received from elsewhere, even something as apparently harmless as an attached image, might when processed become a source of a DoS attack or gain access to a local resource (Coulouris et al., 2005).

Negative Correlation between Security and Performance

The performance challenges of distributed systems emerge from the more complex algorithms required for a distributed environment than for a centralized system.  This complexity stems from the requirements of replicated database systems, fully interconnected networks, network delays represented by simplistic queuing models, and so forth.  Security is one of the most important issues in distributed systems, and it requires layers of security measures to protect the system from intruders.  These layers of protection have a negative impact on the performance of the distributed environment.  Moreover, data and information in transit or in storage become vulnerable to attack.  There are four types of storage systems: server-attached Redundant Array of Independent Disks (RAID), centralized RAID, Network Attached Storage (NAS), and Storage Area Network (SAN).  NAS and SAN have different performance characteristics because they use different techniques for transferring data.  NAS uses the TCP/IP protocol to transfer data across multiple devices, while SAN uses SCSI over Fibre Channel.  Thus, NAS can be implemented on any physical network supporting TCP/IP, such as Ethernet, FDDI, or ATM, whereas SAN can be implemented only on Fibre Channel.  SAN has better performance than NAS because TCP has higher overhead and SCSI is faster than transfers over a TCP/IP network (Firdhous, 2012).

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Coulouris, G. F., Dollimore, J., & Kindberg, T. (2005). Distributed systems: concepts and design: Pearson education.

Firdhous, M. (2012). Implementation of security in distributed systems-a comparative study. arXiv preprint arXiv:1211.2032.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.). Wiley.

Intrusion Detection and Prevention Systems

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze the types of devices and methods that should be implemented and employed in an enterprise, and the reasons for such devices and methods.  The discussion also addresses where these devices should be placed within the network to provide intrusion detection.

Intrusion Detection System (IDS)

An IDS is a system responsible for detecting unauthorized access or attacks against systems and networks.  An IDS can verify, itemize, and characterize threats from outside and inside the network.  Most IDSs are programmed to react in certain ways in specific situations.  Event notification and alerts are critical to an IDS; they inform administrators and security professionals when and where attacks are detected (Abernathy & McMillan, 2016).

The most common way to classify an IDS is by its information source: network-based (NIDS) or host-based (HIDS).  The NIDS is the most common type; it monitors network traffic on a local network segment.  To monitor that traffic, its network interface card must operate in promiscuous mode.  A NIDS can only monitor network traffic; it cannot monitor internal activity occurring within a system, such as an attack carried out by logging on at the system’s local terminal.  A NIDS is also affected by a switched network, because it monitors only a single network segment (Abernathy & McMillan, 2016).

The HIDS monitors traffic on a single system, and its primary role is to protect the system on which it is installed.  The HIDS uses information from operating system audit trails and system logs, so its detection capabilities are limited by how complete those logs are (Abernathy & McMillan, 2016).

IDS implementations are divided into four categories.  The first is the signature-based IDS, which analyzes traffic and compares it to attack or state patterns that reside within the IDS database.  The signature-based IDS is also referred to as a misuse-detection system.  This type of IDS is popular despite the fact that it can only recognize attacks that match its database and is only as effective as the signatures provided, so it requires frequent updates.  Signature-based IDSs come in two types: pattern matching and stateful matching.  A pattern-matching signature-based IDS compares traffic to a database of attack patterns and carries out specific steps when it detects traffic that matches an attack pattern.  A stateful-matching signature-based IDS records the initial operating system state; any changes to the system state that violate the defined rules result in an alert or notification being sent (Abernathy & McMillan, 2016).
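The pattern-matching approach can be sketched in a few lines: traffic is scanned against a database of known attack signatures, and any match triggers an alert.  The signatures below are simplified stand-ins for illustration, not real detection rules.

```python
# Minimal sketch of a pattern-matching, signature-based IDS: each packet
# payload is compared against a database of known attack signatures.
SIGNATURES = {
    "sql-injection": b"' OR '1'='1",        # simplified stand-in signature
    "path-traversal": b"../../etc/passwd",  # simplified stand-in signature
}

def inspect(packet: bytes) -> list:
    """Return the names of all signatures found in the packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in packet]

alerts = inspect(b"GET /index.php?id=' OR '1'='1 HTTP/1.1")
assert alerts == ["sql-injection"]       # known pattern: alert fires
assert inspect(b"GET /index.html HTTP/1.1") == []  # unknown traffic: silent
```

The second assertion illustrates the stated limitation: traffic that matches no stored signature, including a brand-new attack, passes without any alert, which is why the signature database needs frequent updates.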

The anomaly-based IDS is another type of IDS; it analyzes traffic and compares it to normal traffic to determine whether that traffic is a threat.  This type of IDS is also referred to as a behavior-based or profile-based system.  Its limitation is that any traffic outside expected norms is reported, resulting in more false positives than a signature-based IDS.  There are three types of anomaly-based IDS: statistical anomaly-based, protocol anomaly-based, and traffic anomaly-based.  The statistical anomaly-based IDS samples the live environment to record activities; the longer the IDS is in operation, the more accurate a profile it builds.  However, developing a profile that will not produce a large number of false positives can be difficult and time-consuming.  The threshold for activity deviation is important in this type of IDS: when the threshold is too low, the result is false positives, and when it is too high, the result is false negatives.  The protocol anomaly-based IDS knows the protocols it will monitor; a profile of normal usage is built and compared to observed activity.  The last type, the traffic anomaly-based IDS, tracks changes in traffic patterns, and all future traffic patterns are compared to the recorded sample (Abernathy & McMillan, 2016).
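The statistical variant and its threshold trade-off can be sketched as follows.  A profile (mean and standard deviation) is learned from sampled normal activity, and observations deviating beyond a threshold are flagged; the traffic figures are hypothetical.

```python
# Sketch of a statistical anomaly-based IDS: learn a profile of normal
# traffic volume, then flag observations that deviate beyond a threshold.
import statistics

normal_traffic = [100, 102, 98, 101, 99, 103, 97, 100]  # requests/minute (sampled)
mean = statistics.mean(normal_traffic)
stdev = statistics.stdev(normal_traffic)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag traffic further than `threshold` standard deviations from the mean.
    A lower threshold yields more false positives; a higher one risks
    false negatives, exactly the balancing act described above."""
    return abs(observed - mean) > threshold * stdev

assert not is_anomalous(101)   # within the learned profile: no alert
assert is_anomalous(500)       # far outside the profile: likely an attack
```

Tuning comes down to the single `threshold` parameter: the same observation can be an alert or silence depending on where the deviation cutoff is set.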

The rule-based or heuristic-based IDS is another type of IDS, best described as an expert system using a knowledge base, an inference engine, and rule-based programming.  The knowledge is configured as rules.  The traffic and data are analyzed, and the rules are applied to the analyzed traffic.  The inference engine uses its intelligent software to learn, and if the characteristics of an attack are discovered and met, alerts or notifications are triggered.  This IDS type is also referred to as an IF/THEN or expert system.  The last type of IDS is the application-based IDS, which analyzes transaction log files for a single application.  This type of IDS may be provided as part of the application or purchased as an add-on.

Additional tools can be employed to complement an IDS, such as vulnerability analysis systems, honeypots, and padded cells.  Honeypots are systems configured with reduced security to entice attackers so that administrators can learn about attack techniques.  Padded cells are special hosts to which an attacker is transferred during an attack.

An IDS monitors system behavior and alerts on potentially malicious network traffic.  It can be set inline, attached to a spanning port of a switch, or connected via a hub in place of a switch.  The underlying concept is to give the IDS access to all packets that need to be monitored.  Tuning an IDS is important because it is a balancing act between four event categories: true positive, false positive, true negative, and false negative.  Table 1 shows the relationship between these categories, adapted from Robel (2015).

Table 1.  Relationship of Event Categories (Robel, 2015).

The ideal IDS tuning maximizes instances of events categorized in the cells with a shaded background.  True positives occur when the system alerts on intrusion attempts or other malicious activity, while true negatives describe a null situation, benign traffic correctly producing no alert, but are important nonetheless.  False negatives occur when the system fails to alert on malicious traffic, while false positives are alerts on benign activity.  There are a few methods of connecting an IDS to capture and monitor traffic, since an IDS needs to collect network traffic for analysis.  Three main methods can be applied: an IDS using a hub or switch spanning port, an IDS using a network tap, and an IDS connected inline.  Figure 1 illustrates an IDS on the edge of a network or zone (Robel, 2015).

Figure 1.  IDS on the Edge of a Network or Zone. Adapted from (Robel, 2015)
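The four event categories above can be computed directly by comparing IDS alerts against ground truth.  The labeled traffic below is hypothetical, chosen so each category appears at least once.

```python
# Computing the four IDS tuning categories from alerts vs. ground truth
# (1 = malicious, 0 = benign). The traffic labels here are hypothetical.
truth  = [1, 1, 0, 0, 1, 0, 0, 0]   # what the traffic actually was
alerts = [1, 0, 0, 1, 1, 0, 0, 0]   # what the IDS fired on

tp = sum(t and a for t, a in zip(truth, alerts))            # alerted on attack
fp = sum((not t) and a for t, a in zip(truth, alerts))      # alerted on benign
fn = sum(t and (not a) for t, a in zip(truth, alerts))      # missed an attack
tn = sum((not t) and (not a) for t, a in zip(truth, alerts))  # correctly silent

assert (tp, fp, fn, tn) == (2, 1, 1, 4)
assert tp + fp + fn + tn == len(truth)  # every event falls in one cell
```

Tuning the IDS shifts events between these cells: tightening detection trades false negatives for false positives, and vice versa.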

Intrusion Prevention System (IPS)

An IPS is responsible for preventing attacks.  When an attack begins, the IPS takes action to prevent and contain it.  An IPS can be either network-based or host-based.  It can also be signature-based, anomaly-based, or rate-based, the last using metrics that analyze the volume and type of traffic.  An IPS is more costly than an IDS because of the added security of preventing attacks versus merely detecting them.  Moreover, running an IPS imposes more of an overall performance load than running an IDS (Abernathy & McMillan, 2016).

A firewall is commonly used to provide a layer of security.  However, firewalls have limitations, as most can block only on the basis of IP addresses or ports.  In contrast, a network intrusion prevention system (NIPS) can use signatures designed to detect and defend against specific attacks such as DoS.  This feature is advantageous for sites hosting web servers.  IPSs have also been known to block buffer-overflow attacks and can be configured to report on network scans, which typically signal a potential attack.  An advanced use of an IPS may not drop malicious packets at all but rather redirect specific attacks to a honeypot (Robel, 2015).

An IPS is connected inline.  This inline placement enables the IPS to drop selected packets and defend against an attack before it takes hold of the internal network.  An IPS connected inline to capture traffic is illustrated in Figure 2, adapted from Robel (2015).

Figure 2. IPS on the border of a network or zone (Robel, 2015).

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Robel, D. (2015). SANS Institute InfoSec Reading Room.

Cyber Warfare and Cyber Terrorism

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze cyber warfare and cyber terrorism.  The discussion addresses the damage that could be done to the government, companies, and ourselves in the United States if we were attacked by a foreign government using cyber warfare or cyber terrorism.  The discussion also considers whether the United States is prepared for such a scenario.

Cyber Warfare and Cyber Terrorism

The term cyberterrorism was coined in 1996 by combining the terms cyberspace and terrorism.  The term has since become widely accepted after being embraced by the United States Armed Forces.  In 1998, the Center for Strategic and International Studies produced a report entitled Cybercrime, Cyberterrorism, Cyberwarfare: Averting an Electronic Waterloo.  The report discussed the probabilities of these activities affecting a nation, followed by the potential outcomes of such attacks and methods to limit the likelihood of such events (Janczewski, 2007).

Janczewski (2007) defines cyberterrorism as “premeditated, politically motivated attacks by subnational groups or clandestine agents, or individuals against information and computer systems, computer programs, and data that result in violence against non-combatant targets.”

Cyber attacks are usually observed after physical attacks.  A good example is the increased wave of cyber attacks observed after the downing of an American plane near the coast of China, when cyber attacks from both countries began against facilities of the other side.  Other examples include the cyber attacks throughout the Israeli/Palestinian conflict, and during the Balkans War and the collapse of Yugoslavia.  Moreover, cyber attacks are aimed at targets with high publicity value; favorite targets are top IT and transportation companies such as Microsoft, Boeing, and Ford.  These increases in cyber attacks have clear political/terrorist foundations, and the available statistics indicate that each of the previously mentioned conflicts resulted in a steady increase in cyber attacks.  For instance, attacks by Chinese hackers and the Israeli/Palestinian conflict show a pattern of phased escalation (Janczewski, 2007).

Building protection against cyber attacks requires understanding the reasons for such attacks in order to reduce and eliminate them.  The most probable reasons include a fear factor, a spectacular factor, and a vulnerability factor.  The fear factor is the most common denominator of terrorist attacks: the attacker desires to create fear in individuals, groups, or societies.  The spectacular factor reflects attacks that aim at creating huge direct losses and/or a great deal of negative publicity.  An example is the Amazon.com site, which was closed for some time by a denial of service (DoS) attack in 1999; Amazon incurred losses due to suspended trading, but the publicity the attack created was widespread.  The vulnerability factor covers cyber activities that do not always end in huge financial losses.  Some of the most effective ways to demonstrate an organization’s vulnerability are to cause a denial of service on a commercial server, or something as simple as defacing the organization’s web pages, often referred to as computer graffiti (Janczewski, 2007).

Cyber attacks include virus and worm attacks, which can be delivered through email attachments, web browser scripts, and vulnerability exploit engines.  They can also include denial of service (DoS) attacks designed to prevent legitimate users from using public systems by overloading the normal mechanisms inherent in establishing and maintaining computer-to-computer connections.  Cyber attacks can also include web defacements of informational sites serving governmental and commercial interests, in order to spread disinformation and propaganda and/or disrupt information flows.  Unauthorized intrusions into systems are another form of cyber attack, leading to the theft of confidential and/or proprietary information, modification and/or corruption of data, and the inappropriate use of a system for launching attacks on other systems (Janczewski, 2007).

Cyber terrorist attacks are used to cause disruption.  They come in two forms: attacks against data and attacks against control systems.  Theft and corruption of data lead to services being sabotaged, and this is the most common form of Internet and computer attack.  Control system attacks are used to disable or manipulate physical infrastructure such as railroads, electrical networks, and water supplies.  An example is the incident in Australia in March 2000, in which an employee who could not secure full-time employment used the Internet to release one million liters of raw sewage into river and coastal waters in Queensland.

Potential Impact and Defenses and Fortifications

Cyber attacks and cyber terrorism have negative impacts and consequences for the nation.  These consequences may include loss of life, significant damage to property, serious adverse U.S. foreign policy consequences, or serious economic impact on the United States (DoD, 2015).  Preparing a program of activities aimed at setting up effective defenses against potential threats plays a key role in mitigating the impact of such attacks.  These fortifications include physical defenses, system defenses, personnel defenses, and organizational defenses.  Physical defenses are required to control physical access to facilities.  System defenses are required to limit the possibility of unauthorized changes to data in storage or transit.  Personnel defenses are required to limit the chances of inappropriate staff behavior.  Organizational defenses are required to create and implement an information security plan.  Table 1 summarizes these defenses (Janczewski, 2007).

Table 1.  Summary of Required Defenses.

In summary, cyber attacks and cyber terrorism have a negative impact on the nation.  The government and organizations must prepare appropriate defenses to mitigate and alleviate this negative impact.  These defenses include physical, system, personnel, and organizational defenses.

References

DoD. (2015). The DOD Cyber Strategy. Retrieved from https://www.defense.gov/Portals/1/features/2015/0415_cyber-strategy/Final_2015_DoD_CYBER_STRATEGY_for_web.pdf.

Janczewski, L. (2007). Cyber warfare and cyber terrorism: IGI Global.

Steganography

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze steganography.  The discussion also addresses methods to detect hidden information and possible threats that utilize steganography.

Steganography

Steganography is a method that uses cryptographic techniques to embed secret messages within another message.  Steganographic algorithms work by making alterations to the least significant bits of the many bits that make up image files.  The changes are so minor that they do not affect the viewed image.  This method allows communicating parties to hide messages in plain sight; for instance, they might embed a secret message within an illustration on an innocent web page (Abernathy & McMillan, 2016; Stewart, Chapple, & Gibson, 2015).
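The least-significant-bit (LSB) technique described above can be sketched directly on raw bytes.  This is a simplified illustration: a real tool would operate on the pixel data inside an image file format, whereas here the carrier is a plain byte sequence standing in for pixel data.

```python
# Minimal sketch of LSB steganography: each bit of the secret message
# replaces the low bit of one carrier byte, leaving the carrier
# visually/semantically almost unchanged.

def embed(carrier: bytes, message: bytes) -> bytes:
    """Hide `message` in the least significant bits of `carrier`."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small for message"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return bytes(out)

def extract(stego: bytes, length: int) -> bytes:
    """Recover a `length`-byte hidden message from the low bits."""
    bits = [b & 1 for b in stego[:length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

carrier = bytes(range(256))          # stand-in for image pixel data
stego = embed(carrier, b"hi")
assert extract(stego, 2) == b"hi"    # the hidden message round-trips
assert sum(a != b for a, b in zip(carrier, stego)) <= 16  # tiny change
```

The final assertion captures why the technique works: hiding a two-byte message perturbs at most sixteen carrier bytes, each by a single bit, so the altered file is indistinguishable from the original to a casual viewer.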

Steganography is often used to embed secret messages within image or WAV files because these files are often so large that the secret message would be easily missed by even the most observant inspector.  The method is used for illegal or questionable activities such as espionage and child pornography, but it can also be used for legitimate purposes such as adding watermarks to documents to protect intellectual property.  The hidden information is known only to the creator of the file; if another user later creates an unauthorized copy of the content, the watermark can be used to detect the copy and trace it back to the source.  Steganography is a simple technology to use, with free tools openly available on the Internet, such as the iSteg tool, which requires you to specify a text file containing your secret message and an image file in which to hide it (Stewart et al., 2015).

Methods for Steganography Detection

Although the message is hidden within an image or WAV file, it can be detected by comparing the original file with the file suspected of containing the hidden message.  Using a hashing algorithm such as MD5, a hash can be created for both files.  If the hashes are the same, the file does not contain a hidden message.  However, if the hashes are different, the second file has been modified, and forensic analysis techniques can retrieve the message.  With respect to egress monitoring, the organization can periodically capture hashes of internal files which rarely change.  For instance, graphics files such as JPEG and GIF files typically stay the same.  If security experts suspect a malicious insider is embedding additional data within these files and emailing them outside the organization, they can compare the original hashes with the hashes of the files the insider sent out.  If the hashes are different, the files are different and may contain hidden messages (Stewart et al., 2015).
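
The hash-comparison check described above can be sketched as follows.  The file name and baseline contents are hypothetical; the source names MD5, although in practice a stronger algorithm such as SHA-256 would normally be preferred.

```python
# Sketch of egress monitoring by hash comparison: baseline hashes of
# rarely-changing files are recorded, then compared with hashes of
# outbound copies. The file name and bytes are hypothetical.

import hashlib

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

# Baseline captured while the file is known to be clean.
baseline = {"logo.gif": md5_hex(b"original image bytes")}

def may_contain_hidden_message(name: str, outbound: bytes) -> bool:
    """True if the outbound copy differs from the recorded baseline."""
    return md5_hex(outbound) != baseline.get(name)

print(may_contain_hidden_message("logo.gif", b"original image bytes"))   # False
print(may_contain_hidden_message("logo.gif", b"original image bytes!"))  # True
```

A differing hash only proves the file changed; forensic analysis is still needed to recover any embedded message, as the text notes.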

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.): Wiley.

Physical Security Consideration

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze the physical security considerations in developing an environmental design for a data center, and the reasons for those considerations.  The discussion also analyzes the various types of access control for the data center.  It begins with a brief overview of Physical Threats and Physical Security, followed by the Seven Safeguards for Sensitive Computers and Equipment, Internal Security, and Environmental Physical Security measures.

Physical Threats and Physical Security

The purpose of physical security is to protect against physical threats (Stewart, Chapple, & Gibson, 2015).  Physical threats can be either natural or human-based, and both must be considered during the design of the data center.  Natural threats include flooding, earthquakes, landslides, and volcanoes.  Human-based threats include theft, vandalism, and intentional fire.  Table 1 summarizes a brief list of the physical threats which should be considered during the design of a data center.  Physical and environmental security spans two security domains: the first covers the engineering and management aspects of security, while the second covers foundational concepts, investigation and incident management, and disaster recovery (Abernathy & McMillan, 2016; Stewart et al., 2015).

Table 1.  Physical Threats to Data Center Design Consideration.

Thus, physical security should be the first line of defense, considered from site selection through design (Abernathy & McMillan, 2016).  A realistic assessment of the historical natural disaster events of an area should be performed, and a cost/benefit analysis must be conducted to determine the most likely threats, which threats can be addressed, and which should be accepted (Abernathy & McMillan, 2016).  Some of these threats are human-based, such as explosion and fire, whether intentional or accidental, vandalism, and theft.

All physical security should be based on the “Layered Defense Model” (Abernathy & McMillan, 2016).  The underlying concept of this model is the use of multiple approaches which support each other, so there is no single point of failure or total dependency on a single physical security concept.  If one tier of defense, such as perimeter security, fails, another layer serves as the backup.

Physical security can be enhanced by applying the following concepts.  The first is Crime Prevention Through Environmental Design (CPTED), which can be applied to any building.  This concept addresses the main design of the data center, starting from the entrance, landscaping, and interior design.  Its purpose is to influence behavior and minimize crime.  There are three main strategies for applying CPTED during the design of a data center.  The first strategy is “Natural Access Control,” which applies to the entrances of the building, such as doors, lighting, fences, and landscaping.  The underlying idea is to minimize the entry points and tighten control over them to develop a “Security Zone” in the building.  The second strategy is “Natural Surveillance,” which maximizes the visibility of the data center and thereby decreases crime.  The third strategy, “Natural Territorial Reinforcement,” extends a sense of ownership to the employees by creating a feeling of community in the area; it is implemented using walls, fences, landscaping, and lighting design.

Implementing the CPTED strategies and achieving their goals is not always possible, so a security plan must address these strategies and close any gaps.  Thus, the Physical Security Plan is the second concept in this layered defense model.  The plan should address techniques for criminal activity deterrence, intruder delay, intruder detection, situation assessment, and intrusion response and disruption.  Additional physical security issues include visibility, the surrounding area and external entities, accessibility, and construction elements such as walls and doors.  The data center should not have internal compartments such as drop ceilings or partitions, as they can be used to gain access and therefore increase risk.  Separate heating, ventilation, and air conditioning (HVAC) for these rooms is highly recommended (Abernathy & McMillan, 2016).

Seven Safeguards for Sensitive Computers and Equipment

With respect to computer and equipment rooms, physical access to rooms that contain sensitive servers and critical network gear should be controlled by keeping them locked and secured at all times.  The design of these rooms should consider the following seven safeguards (Abernathy & McMillan, 2016):

  • Locate computer and equipment rooms in the center of the building,
  • Provide a single access door or point of entry to these rooms,
  • Avoid the top floor and the basement of the building,
  • Install and frequently test fire detection and suppression systems,
  • Install raised flooring,
  • Install separate power supplies for these rooms, and
  • Use only solid doors.

Internal Security

While perimeter security is important, security within the building is equally important, as prescribed in the “Concentric Circle” model.  These measures affect the interior of the data center, such as doors and door lock types.  Door types include vault doors and bullet-resistant doors.  Lock types include electric locks, cipher locks, and proximity authentication devices that contain Electronic Access Control (EAC).  Various lock types can also be used for protecting cabinets and securing devices, such as warded locks, tumbler locks, and combination locks.  Moreover, biometrics can provide the highest level of physical access control, though it is regarded as the most expensive to deploy in a data center.

Glass entries are also common in many facilities and data centers, in windows, glass doors, and glass walls.  Various types of glass should be considered, such as standard glass for residential areas, tempered glass for extra strength, acrylic glass, and laminated glass.  With respect to visitors, there must be a control technique for protection.  Additional physical security measures should cover equipment rooms and work areas, including restricted work areas, media storage facilities, and evidence storage (Abernathy & McMillan, 2016).

Environmental Physical Security

Physical security measures should include environmental security measures to address the availability principle of the CIA triad.  These measures include fire protection, fire detection, and fire suppression.  The power supply should also be considered, including types of outages such as surges, brownouts, faults, blackouts, and sags.  Environmental physical security measures should also include preventive measures such as the prevention of static electricity.  HVAC should be considered as well, as excessive heat can cause equipment problems and humidity can cause corrosion of connections.  Water leakage and flooding should also be considered (Abernathy & McMillan, 2016).

In summary, security professionals must consider various techniques for protecting the data center, from selecting the building to interior security to environmental security.  They should consider the CPTED strategies, including natural access control, as well as the seven safeguards discussed above.

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.): Wiley.

Biometric Access Control

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze biometric access control for securing a highly sensitive area of the organization's operating environment.  The discussion begins with a brief overview of Access Control, followed by Biometric Technology and the Implementation of a Biometric System.

Access Control

Access Control, whether for physical assets or logical assets such as sensitive data, limits access to a network, system, or device to authorized users only.  Access is granted to authorized users through physical and logical controls.  Physical access control limits access to physical components such as networks, systems, or devices.  Locks are the most popular physical access control technique to prevent access to data centers, including network devices such as routers, switches, and wiring.  Other physical access control techniques include guards and biometrics, which should be considered as part of the security measures based on the value of the assets and the need to protect them.  Logical access control, on the other hand, limits and controls the access of authorized users using software or hardware components; examples include authentication and encryption.  Implementing physical and logical access control requires a good comprehension of the requirements, the administration methods of the access control, and the assets to be protected.  Protecting a physical data center is different from protecting the data stored in that data center (Abernathy & McMillan, 2016).

Biometric Technology

Biometric technology relies on physiological or behavioral characteristics.  Physiological characteristics include any unique physical attribute of the user, such as the iris, retina, and fingerprints.  Behavioral characteristics measure the actions of the user in a situation, including voice patterns and data entry characteristics.  Biometric technologies have begun to be embedded into operating systems as security measures, such as Apple's Touch ID technology.  Understanding both physiological and behavioral characteristics must be a priority to ensure the adoption of these technologies for more secure access control.

The physiological characteristics of the Biometric technology employ a biometric scanning device to measure certain information about a physiological characteristic.  The physiological biometric systems include fingerprint, finger scan, hand geometry, hand topography, palm or hand scans, facial scans, retina scans, iris scans, and vascular scans.

The behavioral characteristics of the Biometric technology employ a biometric scanning device to measure the action of the person.  The biometric behavior system includes signature dynamics, keystroke dynamics, and voice pattern or print. 

Security professionals must have a good understanding of the following biometric-related terms so that they do not struggle during the implementation of such a technology: enrollment time, feature extraction, accuracy, throughput rate, acceptability, false rejection rate (FRR), false acceptance rate (FAR), and crossover error rate (CER).  Table 1 summarizes each of these terms with a brief description.

Table 1.  Biometric Technology Related Terms.
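
The trade-off among FAR, FRR, and CER can be illustrated numerically.  The match scores below are hypothetical; the point is that raising the acceptance threshold lowers the FAR while raising the FRR, and the CER is the rate at which the two curves cross.

```python
# Hypothetical biometric match scores (higher = better match).
genuine  = [0.90, 0.80, 0.70, 0.60, 0.50]   # scores of legitimate users
impostor = [0.65, 0.55, 0.45, 0.35, 0.25]   # scores of impostors

def far(threshold):
    """False acceptance rate: fraction of impostors accepted."""
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):
    """False rejection rate: fraction of genuine users rejected."""
    return sum(s < threshold for s in genuine) / len(genuine)

for t in (0.30, 0.60, 0.90):
    print(f"threshold={t:.2f}  FAR={far(t):.2f}  FRR={frr(t):.2f}")
# At threshold 0.60 both rates equal 0.20 -- the crossover error rate
# for this toy data; a lower CER indicates a more accurate system.
```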

When using biometric technology, security professionals often refer to a Zephyr Chart, which illustrates the comparative strengths and weaknesses of biometric systems.  However, other methods should also be considered to measure the effectiveness of each biometric system and its level of user acceptance.  Table 2 summarizes popular biometric methods in two rankings: the first ranks methods by effectiveness, most effective first, while the second ranks them by user acceptance.  As shown in the table, the iris scan tops the effectiveness ranking, while the voice pattern tops the user acceptance ranking.

Table 2.  Summary of the Popular Biometric Methods.

Implementation of Biometric System

According to CSA (2011), security controls must be strategically positioned and conform to acceptable quality standards consistent with prevalent norms and best practices.  Thus, entry points must be secured using access control systems such as proximity cards or biometric access.  When dealing with a Cloud environment, the traditional authentication method of username and password is not sufficient.  Organizations and Cloud users must employ strong authentication techniques such as smartcard/PKI, biometrics, RSA tokens, and so forth (Sukhai, 2004).  The implementation of biometric technology provides a more secure layer for access either to the physical location where systems, networks, and devices are located or to the data stored in those data centers.  Users may view it as a convenient method, as biometric characteristics are part of their bodies and last as long as the user is authorized to access these facilities and data.  Since the iris scan appears to be the most effective biometric method, the researcher will employ it during the implementation of the biometric system.  The iris scan examines the colored portion of the eye, including all rifts, coronas, and furrows, and has a higher accuracy than any other biometric scan.

In summary, this discussion analyzed biometric access control, which can be implemented to secure a highly sensitive area of the organization.  The discussion analyzed access control techniques, biometric methods, and the implementation of a biometric system.  The analysis indicates that the iris scan is the most effective method, while the voice pattern is ranked highest in user acceptance.

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

CSA. (2011). Security guidance for critical areas of focus in cloud computing v2. 1. Cloud Security Alliance, v3.0, 1-76.

Sukhai, N. B. (2004). Access control & biometrics. Paper presented at the Proceedings of the 1st annual conference on Information security curriculum development.

Security Measures for Virtual and Cloud Environment

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze security measures for virtual and cloud environments.  It also analyzes current security models and possible enhancements to increase the protection of these environments.

Virtualization

Virtualization is a core technology in Cloud Computing.  Its purpose in Cloud Computing is to virtualize resources for the Cloud service models: Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS) (Gupta, Srivastava, & Chauhan, 2016).  Virtualization allows creating many instances of Virtual Machines (VMs) on a single physical system.  The utilization of these VMs provides flexibility, agility, and scalability to Cloud Computing resources.  A VM is provided to the client to access resources at a remote location using the virtualization technique.  Key features of Virtualization include resource utilization with isolation among hardware, operating systems, and software, and multi-tenancy for simultaneous access to the VMs residing on a single physical machine.  After a VM is created, it can be copied and migrated.  These features are double-edged: they provide flexibility, scalability, and agility, while also causing security challenges and concerns.  Security concerns are one of the biggest obstacles to the widespread adoption of Cloud Computing (Ali, Khan, & Vasilakos, 2015).

Hardware virtualization on a physical machine is implemented using a hypervisor.  There are two types of hypervisor: Type 1 and Type 2.  A Type 1 hypervisor is called a “Bare Metal Hypervisor,” as illustrated in Figure 1.  A Type 2 hypervisor is called a “Hosted Hypervisor,” as illustrated in Figure 2.  The Bare Metal Hypervisor provides a layer between the physical system and the VMs, while the Hosted Hypervisor is deployed on top of an operating system.

Figure 1.  Hypervisor Type 1: Bare Metal Hypervisor. Adapted from (Gupta et al., 2016).

Figure 2: Hypervisor Type 2: Hosted Hypervisor. Adapted from (Gupta et al., 2016).

Virtualization exposes many security flaws to intruders.  The traditional security measures that control physical systems are inadequate or ineffective when dealing with virtualized data centers and hybrid and private Cloud environments (Gupta et al., 2016).  Moreover, the default configuration of the hypervisor does not always include security measures that can protect the virtual and cloud environment.

One of the roles of the hypervisor is to manage the interaction between the VMs and the physical resources.  With a Type 1 “Bare Metal Hypervisor,” the hypervisor is a single point of failure, increasing the impact of a security breach on the whole virtualized environment on that physical system.  A Type 2 “Hosted Hypervisor” configuration exposes even more threats than the Bare Metal Hypervisor.  The VMs hosted on the physical system communicate with each other, which can open loopholes for intruders.

Virtualization is exposed to various types of threats and vulnerabilities.  These vulnerabilities in virtualization security include VM Escape, VM Hopping, VM Theft, VM Sprawl, Insecure VM Migration, Sniffing, and Spoofing.  Figure 3 illustrates the vulnerabilities of Virtualization.

Figure 3.  Vulnerabilities of Virtualization. Adapted from (Gupta et al., 2016).

As indicated in (Gupta et al., 2016), the hypervisor should have firewall security built in, and console access (USB, NIC) should be disabled to prevent unauthorized access.  Role-Based Access Control (RBAC) is effective in controlling hyperjacking of VMs.  Roles and responsibilities should be defined for the users of the VMs to check access authorization.
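
The RBAC recommendation above can be sketched minimally as follows; the roles, permissions, and user names are hypothetical and serve only to show how access authorization is checked against defined roles.

```python
# Minimal sketch of role-based access control (RBAC) for VM
# management operations. All role and permission names are invented.

ROLE_PERMISSIONS = {
    "vm_admin":    {"create_vm", "delete_vm", "migrate_vm", "console_access"},
    "vm_operator": {"start_vm", "stop_vm"},
    "auditor":     {"view_logs"},
}

USER_ROLES = {
    "alice": {"vm_admin"},
    "bob":   {"vm_operator", "auditor"},
}

def is_authorized(user, permission):
    """Grant access only if one of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("bob", "stop_vm"))         # True
print(is_authorized("bob", "console_access"))  # False -- not a vm_admin
```

Because permissions attach to roles rather than to individual users, revoking a responsibility means editing one role definition instead of auditing every user account.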

Security Principles, Security Modes, Security Models, and Security Implementation

As indicated in (Abernathy & McMillan, 2016), the primary goal of all security measures is to provide protection and to ensure that each measure is successful.  The three major principles of security are confidentiality, integrity, and availability, known as the CIA triad.  Confidentiality is provided if the data cannot be read, either through access control and encryption for data at rest on the hard drive or through encryption for data in transit; confidentiality is the opposite of “disclosure” (Abernathy & McMillan, 2016).  Integrity is provided if the data is not changed in any way by unauthorized users; it is typically enforced through a hashing algorithm or a checksum.  Availability describes the proportion of time a resource or data set is available, measured as a percentage of “up” time, with 99.9% uptime representing more availability than 99% uptime.  The availability principle ensures that data is accessible whenever it is needed and is often described as a prime goal of security.  Most attacks result in a violation of one of these principles.  Thus, the defense-in-depth technique is highly recommended to provide additional layers of security.  For instance, even if a firewall is configured for protection, access control lists should still be applied to resources to help prevent access to sensitive data in case the firewall is breached.
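
The uptime percentages quoted above translate directly into allowed downtime per year, as this small calculation shows.

```python
# Convert an uptime percentage into hours of downtime per year,
# illustrating why 99.9% is meaningfully better than 99%.

HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(uptime_percent):
    """Hours of downtime allowed per year at a given uptime percentage."""
    return HOURS_PER_YEAR * (100 - uptime_percent) / 100

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows about {downtime_hours(pct):.2f} hours "
          f"of downtime per year")
```

At 99% uptime a system may be down roughly 87.6 hours a year, while 99.9% allows only about 8.8 hours, an order-of-magnitude difference hidden in a single decimal place.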

Security has four major security modes, which are typically used with Mandatory Access Control (MAC): Dedicated Security Mode, System High-Security Mode, Compartmented Security Mode, and Multilevel Security Mode.  A MAC system operates in different security modes at different times, based on variables such as the sensitivity of the data, the clearance level of the user, and the actions users are authorized to take.  In all four modes, a non-disclosure agreement (NDA) must be signed, and access to certain information depends on the mode.

Security Models provide a technique for mapping security policy to the rules which a computer system must follow.  Different Security Models offer different approaches to implement such a mapping (Abernathy & McMillan, 2016).  These include the following models:

  • State Machine Model,
  • Multi-Level Lattice Models, 
  • Matrix-Based Models,
  • Non-Interference Models, and
  • Information Flow Models.

Moreover, there are formal Security Models which incorporate security concepts and principles to guide the security design of systems.  These formal Security Models include the following seven models (Abernathy & McMillan, 2016); the details of each model are beyond the scope of this discussion.

  • Bell-LaPadula Model.
  • Biba Model.
  • Clark-Wilson Integrity Model.
  • Lipner Model.
  • Brewer-Nash Model.
  • Graham-Denning Model.
  • Harrison-Ruzzo-Ullman Model.

With respect to security implementation, there are standards which should be followed when implementing security measures.  These standards include ISO/IEC 27001, ISO/IEC 27002, and PCI-DSS.  ISO/IEC 27001 is the most popular standard and is used by organizations to obtain certification for information security.  It ensures that the information security management system (ISMS) of the organization is properly built, administered, maintained, and improved.  The ISO/IEC 27002 standard provides a code of practice for information security management, covering security measures such as access control, cryptography, and compliance.  PCI-DSS v3.1 is specific to the payment card industry.

Security Models in Cloud Computing

The service model is one of the main models in Cloud Computing.  Services are offered to cloud users through a service provider known as a Cloud Service Provider.  Security and privacy are the main challenges and concerns when using a Cloud Computing environment.  Although there is demand to leverage the resources of Cloud Computing to provide services to clients, there is also a requirement that the Cloud servers and resources not learn any sensitive information about the data being managed, stored, or queried (Chaturvedi & Zarger, 2015).  Effort should be exerted to improve users' control over their data in the public environment.  Cloud Computing security models include the Multi-Tenancy Model, the Cloud Cube Security Model, the Mapping Model of Cloud, Security, and Compliance, and the Cloud Risk Accumulation Model of the CSA (Chaturvedi & Zarger, 2015).

The Multi-Tenancy Model is described as the major functional characteristic of Cloud Computing, allowing multiple applications to provide cloud services to clients.  Tenants are separated by virtual partitions, and each partition holds a tenant's data, customized settings, and configuration settings.  Virtualization on a physical machine allows users to share computing resources such as memory, processor, I/O, and storage among different users' applications and improves the utilization of Cloud resources.  SaaS is a good example of the Multi-Tenancy Model, providing the scalability to serve a large number of clients based on Web services.  Security experts describe this model as vulnerable, exposing confidentiality, one of the security principles, to risk between tenants.  A side-channel attack is a significant risk in the Multi-Tenancy Model; this kind of attack is based on information obtained from bandwidth monitoring.  Another risk of the Multi-Tenancy Model is the assignment of resources to clients with unknown identities and intentions.  A further security risk involves storing data of multiple tenants in the same database tablespaces or on the same backup tapes.

The Cloud Cube Security Model is characterized by four main dimensions: Internal/External, Proprietary/Open, Perimeterized/De-perimeterized, and Insourced/Outsourced.  The Mapping Model of Cloud, Security, and Compliance provides a method to analyze the gaps between the cloud architecture and the compliance framework and the corresponding security control strategies provided by the Cloud Service Provider or third parties.  The Cloud Risk Accumulation Model of the CSA is the last of these Cloud Computing security models.  The three Cloud models of IaaS, PaaS, and SaaS have different security requirements due to their layer dependencies.

Security Implementation: Virtual Private Cloud (VPC)

The Virtual Private Cloud (VPC) deployment model provides more security than the public deployment model.  In this model, the user can apply access control at the instance level as well as at the network level.  Policies are configured and assigned to groups based on the access role.  The VPC deployment model solves problems such as loss of authentication, loss of confidentiality, loss of availability, and loss or corruption of data (Abdul, Jena, Prasad, & Balraju, 2014).  The VPC is logically isolated from other virtual networks in the cloud.  As indicated in (Abdul et al., 2014), the VPC is regarded as the most prominent approach to Trusted Computing technology.  However, organizations must implement security measures based on the requirements of the business.  For instance, organizations and users have control to select the IP address range and to create subnets, route tables, network gateways, and security settings, as illustrated in Figure 4.

Figure 4.  Virtual Private Cloud Security Implementation.
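
The network-level choices mentioned above, selecting an IP address range for the VPC and carving it into subnets, can be sketched with Python's standard ipaddress module.  The CIDR blocks and subnet names are examples only, not a prescription for any particular cloud provider.

```python
# Sketch of VPC address planning: pick a private address range and
# split it into smaller subnets. CIDR values are illustrative.

import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")       # address range for the VPC
subnets = list(vpc.subnets(new_prefix=24))[:3]  # first three /24 subnets

for name, net in zip(("public", "private", "mgmt"), subnets):
    print(f"{name:8} {net}  ({net.num_addresses} addresses)")
```

Keeping subnets non-overlapping by construction, as `subnets()` guarantees, is what makes per-subnet route tables and access control lists enforceable at the network level.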

In summary, security measures must be implemented to protect the cloud environment.  Virtualization imposes threats on the Cloud environment, and the hypervisor is a major component of Virtualization.  It is recommended that the hypervisor have firewall security built in and that console access (USB, NIC) be disabled to prevent unauthorized access.  Role-Based Access Control (RBAC) should be used to control hyperjacking of VMs, and roles and responsibilities should be defined for the users of the VMs to check access authorization.  The Virtual Private Cloud, as a trusted deployment model of Cloud Computing, provides a more secure environment than the public Cloud.  Security implementation must follow the relevant standards, and the organization must comply with them to protect itself and its users.

References

Abdul, A. M., Jena, S., Prasad, S. D., & Balraju, M. (2014). Trusted Environment In Virtual Cloud. International Journal of Advanced Research in Computer Science, 5(4).

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Ali, M., Khan, S. U., & Vasilakos, A. V. (2015). Security in cloud computing: Opportunities and challenges. Information Sciences, 305, 357-383. doi:10.1016/j.ins.2015.01.025

Chaturvedi, D. A., & Zarger, S. A. (2015). A review of security models in cloud computing and an Innovative approach. International Journal of Computer Trends and Technology (IJCTT), 30(2), 87-92.

Gupta, M., Srivastava, D. K., & Chauhan, D. S. (2016). Security Challenges of Virtualization in Cloud Computing. Paper presented at the Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies, Udaipur, India.

Risk Management and Risk Assessment

Dr. Aly, O.
Computer Science

Introduction

The purpose of this discussion is to analyze risk management and risk assessment and the techniques for incorporating them into information security plans and programs.  The discussion also addresses the purpose and benefits of risk analysis and the techniques for addressing emerging threats.  Vulnerability analysis and techniques for mitigating risks are discussed as well.

Risk Management and Risk Assessment

Risk management must be implemented to identify, analyze, evaluate, and monitor risk.  Management should evaluate and assess the risks associated with the business.  Incorporating risk management and risk assessment into the information security program is critical for the business.  For instance, the organization should have a business continuity plan, which is part of risk management (Abernathy & McMillan, 2016).

Incorporating risk management and risk assessment into the organization's information security program requires understanding security concepts such as vulnerability, threat, threat agent, risk, exposure, and countermeasure.  For instance, the absence of access control is a vulnerability which an attacker can take advantage of; the attacker, in this case, is called the threat agent, and the implementation of appropriate access control is a countermeasure.  Risk is the probability or likelihood that the threat agent can exploit a vulnerability, combined with the impact if the threat is carried out.  Exposure, on the other hand, occurs when assets such as private data are lost or manipulated; it can be caused by the lack of appropriate security measures such as access control.  In summary, the security concept cycle involves a threat which can exploit a vulnerability, which creates a risk.  The risk can damage assets, which causes exposure.  The exposure requires safeguards, which deter the threat agent and help uncover further threats (Abernathy & McMillan, 2016).

The risk management process must be supported and committed to by top management, such as the CEO (Chief Executive Officer), CFO (Chief Financial Officer), CIO (Chief Information Officer), CPO (Chief Privacy Officer), and CSO (Chief Security Officer), as they have the ultimate responsibility for protecting the organization's data.  The risk management policy provides the guidelines and direction for implementing and incorporating risk management into the business security plan.  The policy must include all players, such as the risk management team and the risk analysis team (Abernathy & McMillan, 2016).

Risk assessment is a critical tool in risk management: it determines the vulnerabilities and threats, evaluates their impact, and identifies the security measures to control them.  The risk management team should consider four main steps.  In the first step, the assets and their values are identified.  The vulnerabilities and threats are determined in the second step.  The threat probability and the impact on the business are calculated in the third step.  In the last step, the threat impact is balanced against the countermeasure cost (Abernathy & McMillan, 2016).

Risk Analysis

Risk analysis helps organizations identify the most important security resources.  Because the security function of an organization is dynamic, the risk analysis process should be revisited regularly to identify areas for improvement.  The role of the CSO is to implement and manage all security aspects of the business, including risk analysis, security policies and procedures, incident handling, security awareness training, and emerging technologies (Abernathy & McMillan, 2016).

The risk analysis team plays a significant role in risk management and assessment.  The team must determine and analyze the threat events that can occur, their potential impact, their frequency, and the level of confidence in the information gathered.  During the risk analysis process, the team collects information in accordance with NIST SP 800-30 using automated risk assessment tools, questionnaires, interviews, and policy document reviews.  During this analysis, the assets and their values are identified, the threats and vulnerabilities are identified, and the probability or likelihood of threat events is determined.  The team must then determine the impact of each threat.  The last step in this process determines the risk as a combination of likelihood and impact: when both the likelihood and the impact are high, the priority is raised (Abernathy & McMillan, 2016).
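The final step above, combining likelihood and impact into a priority, can be sketched as a small function.  The three-level scale and the sample threats below are illustrative, not taken from the cited text:

```python
# Minimal sketch: risk as a combination of likelihood and impact.
# The ordinal scale (1-3) and the example threats are invented for illustration.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Higher score means higher priority for the risk analysis team."""
    return LEVELS[likelihood] * LEVELS[impact]

def prioritize(threats: list) -> list:
    """Order threat events so the highest-risk items are addressed first."""
    return sorted(
        threats,
        key=lambda t: risk_score(t["likelihood"], t["impact"]),
        reverse=True,
    )

threats = [
    {"name": "phishing",     "likelihood": "high",   "impact": "medium"},
    {"name": "insider leak", "likelihood": "low",    "impact": "high"},
    {"name": "ransomware",   "likelihood": "medium", "impact": "high"},
]
ranked = prioritize(threats)
# phishing (6) and ransomware (6) rank ahead of insider leak (3)
```

A real program would replace the ordinal scale with the organization's own rating scheme, but the shape of the computation is the same.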

There are two types of risk analysis: quantitative and qualitative.  Quantitative risk analysis assigns monetary and numeric values to every aspect of the risk analysis process, including asset value, threat frequency, vulnerability severity, impact, safeguard costs, and so forth.  Its advantage over qualitative risk analysis is that it relies on far less guesswork.  Its disadvantages are the time and effort required to collect the data and the mathematical equations, which can be difficult (Abernathy & McMillan, 2016).
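The monetary calculations in quantitative risk analysis are commonly expressed through single loss expectancy (SLE = asset value × exposure factor) and annualized loss expectancy (ALE = SLE × annualized rate of occurrence).  The following sketch uses purely illustrative figures:

```python
# Standard quantitative risk analysis formulas; all dollar amounts below
# are invented for illustration.

def sle(asset_value: float, exposure_factor: float) -> float:
    """Single Loss Expectancy: the cost of one occurrence of the threat."""
    return asset_value * exposure_factor

def ale(asset_value: float, exposure_factor: float, aro: float) -> float:
    """Annualized Loss Expectancy: SLE x Annualized Rate of Occurrence."""
    return sle(asset_value, exposure_factor) * aro

def safeguard_value(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """A safeguard is cost-justified when this value is positive."""
    return ale_before - ale_after - annual_cost

# Illustrative: a $200,000 database, 25% loss per breach, one breach every two years.
loss = ale(200_000, 0.25, 0.5)                 # 25,000 per year
print(safeguard_value(loss, 5_000, 8_000))     # 12000.0: the safeguard pays off
```

This is exactly the kind of arithmetic that makes quantitative analysis time-consuming to feed with accurate inputs, as noted above.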

Qualitative risk analysis, unlike quantitative risk analysis, does not assign monetary and numeric values to every aspect of the process.  Instead, it relies on intuition, experience, and best-practice techniques such as brainstorming, focus groups, surveys, questionnaires, interviews, the Delphi method, and meetings.  Its advantages include the prioritization of risks and the identification of areas needing immediate improvement in addressing threats.  Its disadvantages are that all results are subjective and that no dollar values are produced for cost-benefit analysis or budgeting (Abernathy & McMillan, 2016).
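As a rough sketch of how subjective ratings gathered via questionnaires or a Delphi round can still yield a usable relative ranking, consider the following; the risks and scores are invented:

```python
from statistics import median

# Hypothetical expert ratings from questionnaires (1 = low concern, 5 = severe).
ratings = {
    "data breach":    [5, 4, 5, 4],
    "power outage":   [2, 3, 2, 2],
    "vendor failure": [3, 4, 3, 3],
}

# The median dampens outlier opinions; the result is a relative ranking
# with no dollar values, matching the stated limitation of the method.
ranking = sorted(ratings, key=lambda risk: median(ratings[risk]), reverse=True)
print(ranking)  # ['data breach', 'vendor failure', 'power outage']
```

The output illustrates the qualitative method's strength (prioritization) and its weakness (subjectivity, no figures for a budget).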

Given the advantages and disadvantages of both approaches, most organizations implement quantitative risk analysis for tangible assets in conjunction with qualitative risk analysis for intangible assets (Abernathy & McMillan, 2016).

Threat Modeling

Organizations must implement a threat model to identify threats and potential attacks and to implement the appropriate mitigations against them.  Threat modeling is used not only to identify threats but also to rate them and their impact on the organization.  Threat modeling involves six major steps: identifying the assets, identifying the threat agents, researching the countermeasures already in use, identifying any vulnerabilities, prioritizing the identified risks, and identifying the countermeasures to reduce the risk (Abernathy & McMillan, 2016).

Threat modeling has three approaches: application-centric, asset-centric, and attacker-centric.  Application-centric threat modeling uses the architecture diagram of the application to analyze threats.  Asset-centric threat modeling focuses on the assets of the organization, classifying them according to the sensitivity of the data and their value to an attacker; it uses attack trees, attack graphs, or pattern analysis to determine the methods that can be used to attack an asset.  Attacker-centric threat modeling profiles the characteristics, skills, and motivation of the attacker to exploit vulnerabilities; a mitigation strategy is implemented based on the attacker's profile, and tree diagrams are used in this approach (Abernathy & McMillan, 2016).
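An attack tree of the kind used in asset-centric modeling can be represented with AND/OR nodes, where a leaf records whether a given attacker profile can perform that step.  The following minimal sketch, with an invented scenario, evaluates whether any path to the root goal is feasible:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in an attack tree: OR = any child path suffices, AND = all are needed."""
    name: str
    gate: str = "OR"                # "OR" or "AND"; ignored for leaves
    children: list = field(default_factory=list)
    feasible: bool = False          # leaf judgment for a given attacker profile

def attack_possible(node: Node) -> bool:
    """Walk the tree bottom-up to decide if the root goal is reachable."""
    if not node.children:
        return node.feasible
    results = [attack_possible(child) for child in node.children]
    return all(results) if node.gate == "AND" else any(results)

# Invented example: two alternative paths to the same asset.
steal_data = Node("steal customer data", "OR", [
    Node("exploit web app", "AND", [
        Node("find SQL injection", feasible=True),
        Node("bypass WAF", feasible=False),
    ]),
    Node("phish an admin", feasible=True),
])
print(attack_possible(steal_data))  # True: the phishing branch succeeds
```

Re-running the evaluation with a different attacker profile (different leaf judgments) is how the attacker-centric variant reuses the same tree.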

Vulnerability Analysis

The development of the business impact analysis (BIA) relies heavily on vulnerability analysis and risk assessment.  Both can be performed by the Business Continuity Plan (BCP) committee or a separately appointed risk assessment team.  It is critical for organizations to perform vulnerability analysis to identify vulnerabilities; vulnerability assessments and penetration tests are part of the security control assessment process used to mitigate risk.  Vulnerability analysis examines hardware, software, and personnel to identify any weakness or absent countermeasure.  Analyzing such weaknesses helps reduce the risk and the probability that an attacker or threat agent will exploit the vulnerability.  Compensative access controls must be implemented as countermeasures for identified vulnerabilities, since they mitigate risk and can bring it down to a more manageable level.  An example of a compensative control is the requirement for two authorized signatures to release sensitive data (Abernathy & McMillan, 2016).
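The two-signature compensative control mentioned above can be sketched as a simple check; the function name and the user names are hypothetical:

```python
# Hypothetical sketch of a two-person-rule compensative control:
# sensitive data is released only with two distinct authorized sign-offs.

def release_sensitive_data(approvals: set, authorized: set) -> bool:
    """Return True only when at least two distinct authorized users approved."""
    valid = approvals & authorized   # ignore approvals from unauthorized users
    return len(valid) >= 2

authorized = {"alice", "bob", "carol"}
print(release_sensitive_data({"alice"}, authorized))             # False: one signature
print(release_sensitive_data({"alice", "bob"}, authorized))      # True: two authorized
print(release_sensitive_data({"alice", "mallory"}, authorized))  # False: one is unauthorized
```

Using sets makes duplicate approvals from the same person count only once, which is the point of the control.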

References

Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide: Pearson IT Certification.

Scalable and Intelligent Security Analytics Tools

Dr. Aly, O.
Computer Science

Abstract

The purpose of this project is to discuss and analyze scalable and intelligent security analytics tools from different vendors.  The discussion begins with an overview of SIEM and the essential capabilities of an analytics-driven SIEM.  The first generation of SIEM was overwhelmed by the volume of data and the rise of advanced threats.  The second generation of SIEM was introduced because Big Data tools have the potential to significantly enhance security intelligence by minimizing the time needed to correlate, consolidate, and contextualize diverse security event information, and to correlate long-term historical data for forensic purposes.  The discussion and analysis are limited to four scalable and intelligent security analytics tools: AlienVault, QRadar, Splunk, and LogRhythm.  AlienVault's Unified Security Management (USM) includes SIEM, vulnerability assessment, asset discovery, flow and packet capture, network and host detection, and file integrity monitoring; it improves security visibility throughout the organization and detects security incidents in real time.  IBM Security QRadar provides log and event management and behavioral and reporting analysis for networks and applications.  Splunk Enterprise Security enables searching the data and applying visual correlation to identify malicious events and collect data about the context of those events.  LogRhythm supports log management, has network forensics capabilities, and can be deployed in smaller environments.  The discussion and analysis include the key design considerations for the scalability of each tool, along with the advantages and disadvantages of these four intelligent security analytics tools.

Keywords: SIEM, Security Analytics Tools, QRadar, AlienVault, LogRhythm, Splunk.

Introduction

Advanced threats, often called advanced persistent threats or APTs, are the primary reason driving organizations to collect and analyze information.  APTs are sophisticated attacks (Oprea, Li, Yen, Chin, & Alrwais, 2015) and can involve multiple events occurring across the organization which would otherwise not be connected without the use of advanced intelligence (SANS, 2013).  APTs cause severe risks and damage to organizations and governments because they target confidential proprietary information (SANS, 2013).

Advanced threats require advanced intelligence and analytics tools.  Recent years have witnessed the rise of more sophisticated attacks, including APTs (IBM, 2013; Oprea et al., 2015; SANS, 2013).  In a survey conducted by (SANS, 2013), APTs affected two-thirds of the respondents in the preceding two years.  One respondent reported seeing almost 500 attacks in a nine-month period, and many reported between one and twenty; thus, the number of APTs is increasing.  As indicated in (Radford, 2014), “Frequently, a victim of these attacks [APTs] do not even know that their perimeter security has been penetrated for an average of 243 days. They all have up-to-date anti-virus software, and 100% of breaches involve stolen credentials.”  APTs are on every organization’s mind (Radford, 2014).  Cloud Service Providers (CSPs) are prime targets for cyber-attacks.  Organizations that utilize Cloud technology must consider the myriad access points to their data when it is hosted in a Cloud environment, should consider at length the solutions available to them, and should ensure that all data access points are covered (Radford, 2014).  The APT is one of various security threats against which organizations must protect their information.

Various security techniques are offered to deal with security threats, including APTs.  This project discusses and analyzes four scalable and intelligent security tools from different vendors.  The analysis covers the main functions of each tool, such as anomaly detection, event correlation, and real-time analytics.  It also assesses whether each tool is suited to or used in Cloud Computing and identifies the targeted applications, along with the pros and cons of each tool.  The project begins with an overview of Security Information and Event Management, followed by the scalable and intelligent security tools from different vendors.

Security Information and Event Management (SIEM)

The traditional and conventional model for protecting data often focused on network-centric and perimeter security, using devices such as firewalls and intrusion detection systems (Oprea et al., 2015; Shamsolmoali & Zareapoor, 2016).  However, this conventional approach is not adequate when dealing with Big Data and Cloud Computing technologies, and it does not provide enough protection against advanced persistent threats (APTs), privileged users, or other malicious security attacks (Oltsik, 2013; Oprea et al., 2015; Shamsolmoali & Zareapoor, 2016).

Thus, many organizations deploy other techniques, such as database audit and protection (DAP) and security information and event management (SIEM), to collect information about network activities (Shamsolmoali & Zareapoor, 2016).  Examples of SIEM products include RSA enVision and HP ArcSight, which use a standardized approach to collecting information and events, storing and querying them, and providing degrees of correlation driven by rules (Pearson, 2013).  However, SIEM provides inputs which need to be properly analyzed and translated into a certain format to be utilized by senior risk evaluators and strategic policymakers; this manual process is not adequate when dealing with security issues.  Moreover, risk assessment standards such as ISO 2700x and NIST operate at a macro level and usually do not fully leverage information coming from the logging and auditing activities carried out by IT operations (Pearson, 2013).  Thus, SIEM lacks a solution for business audit and strategic risk assessment (Pearson, 2013).

According to (Splunk, 2017), an analytics-driven SIEM has six essential capabilities: (1) real-time monitoring, (2) incident response, (3) user monitoring, (4) threat intelligence, (5) advanced analytics, and (6) advanced threat detection.  Table 1 summarizes these six essential capabilities (Splunk, 2017).

Table 1.  Six Essential Capabilities of an Analytics-Driven SIEM. Adapted from (Splunk, 2017).

In a survey conducted by (SANS, 2013), 58% of the respondents were using dedicated log management and 37% were using a SIEM system.  Nearly half of the respondents had dedicated log management platforms, SIEM, and scripted searches as part of their data collection and analysis processes, while less than 10% utilized unstructured data repositories and specific Big Data frameworks for analysis and search.  Figure 1 illustrates the types of security data analysis tools in use.

Figure 1. Types of Security Data Analysis Tool in Use. Adapted from (SANS, 2013).

As indicated in (SANS, 2013), the dedicated log management platform does not meet the needs created by Big Data collection; SIEM is anticipated to step up and meet those needs (SANS, 2013).  The survey results also showed that 26% of the respondents rely on analytics tools to do the heavy lifting of threat intelligence, while others leverage SIEM platforms and manual techniques, as illustrated in Figure 2.

Figure 2.  Threat Intelligence Capabilities. Adapted from (SANS, 2013).

In the same survey, 51% of the respondents indicated that they were currently using third-party intelligence services, as illustrated in Figure 3.

Figure 3.  The Use of Third Party Intelligence Tools.  Adapted from (SANS, 2013).

The survey results also showed that, for future investments in security analytics and Big Data platforms, most organizations are still focused on the fundamentals: better SIEM, more training to detect patterns of malicious activity, vulnerability management and network protection tools, and endpoint visibility, as illustrated in Figure 4.

Figure 4.  Future Investments in Analytics/Intelligence.  Adapted from (SANS, 2013).

Scalable and Intelligent Security Analytic Tools

The management of alerts from different intrusion detection sensors and rules was a big challenge in organizational settings.  The first generation of SIEM was able to aggregate and filter alarms from many sources and present actionable information to security analysts (CSA, 2013).  However, the first generation of SIEM has been overwhelmed by the volume of data (Glick, 2014) and cannot keep up with the rate and complexity of the current wave of cyber-attacks and advanced threats (Splunk, 2017).  The second generation of SIEM was introduced because Big Data tools have the potential to significantly enhance security intelligence by minimizing the time needed to correlate, consolidate, and contextualize diverse security event information, and to correlate long-term historical data for forensic purposes (CSA, 2013).

Most current SIEM systems provide the same basic features, apart from vendor-specific additions.  The basic architecture of a SIEM system contains a Server, a Database, a FrontEnd, Probes, and Agents (Di Mauro & Di Sarno, 2018).  The Server is the core component of the whole deployment, collecting and processing the logs coming from external sources on behalf of the correlation engine.  The Database stores the data for analysis and the runtime configuration of the SIEM.  The FrontEnd is the user interface to the Server, while the Probes are the sensors deployed within the monitored infrastructure.  Typical examples of Probes include perimeter defense systems such as firewalls and intrusion prevention systems, host sensors such as host IDSs, and security applications such as web firewalls and authentication systems (Di Mauro & Di Sarno, 2018).  The Agents are the counterparts of the probes embedded in the server; they convert the heterogeneous logs generated by different probes into logs with the same syntax and a specific semantic (Di Mauro & Di Sarno, 2018).  Figure 5 illustrates the classical framework of SIEM systems.

Figure 5.  A Classical Framework of a SIEM System.  Adapted from (Di Mauro & Di Sarno, 2018).
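The Agents' job of converting heterogeneous probe logs into a common syntax can be sketched as follows; the log formats, parser names, and normalized field names are hypothetical:

```python
import re

# Hypothetical normalizers: one per probe type, all emitting the same schema
# so the correlation engine sees uniform events.

def parse_firewall(line: str) -> dict:
    m = re.match(r"DENY src=(\S+) dst=(\S+)", line)
    return {"source": "firewall", "action": "deny",
            "src_ip": m.group(1), "dst_ip": m.group(2)}

def parse_hids(line: str) -> dict:
    host, _, msg = line.partition(": ")
    return {"source": "hids", "action": "alert", "src_ip": host, "detail": msg}

PARSERS = {"fw": parse_firewall, "hids": parse_hids}

def normalize(probe: str, line: str) -> dict:
    """Agent step: heterogeneous input, uniform output for the correlation engine."""
    return PARSERS[probe](line)

events = [normalize("fw", "DENY src=10.0.0.5 dst=192.168.1.2"),
          normalize("hids", "10.0.0.5: file integrity violation")]
```

Once both probe types share the `src_ip` field, correlating a firewall deny with a host alert from the same machine becomes a simple join.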

The security analytics market is rapidly evolving: vendors are merging, developers are adding new features, and tools once deployed exclusively on-premises are now also offered as Cloud services (Dan Sullivan, 2015).  As indicated in (Dan Sullivan, 2015), there are three reasons for organizations to deploy security analytics software: compliance, security event detection and remediation, and forensics.  Compliance is regarded as the key driver of security requirements for most organizations; to comply with government and industry regulations, organizations implement security policies and procedures, and it is imperative to verify that compliance (Dan Sullivan, 2015).  Security analytics tools should alert organizations to significant events, which are defined by rules that trigger alerts (Dan Sullivan, 2015).  The tools can help minimize the time and effort required to collect, filter, and analyze event data (Dan Sullivan, 2015).  Because attacks can occur at high speed, these tools must also operate at high speed to detect malicious attacks.  If an attack does take place, the organization should be able to block future attacks through forensics, as forensic analysis can reveal vulnerabilities in the organization’s network or desktop security controls which were not known prior to the attack (Dan Sullivan, 2015).
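Rule-driven alerting of this kind, where significant events are defined by rules that trigger alerts, can be illustrated with a minimal sketch; the rules and event fields below are invented examples, not any vendor's API:

```python
# Hypothetical rule engine: each rule is a predicate over a normalized event.
RULES = {
    "failed-login-burst": lambda e: e["type"] == "auth_fail" and e["count"] >= 5,
    "off-hours-admin":    lambda e: e["type"] == "admin_login" and not 8 <= e["hour"] < 18,
}

def evaluate(event: dict) -> list:
    """Return the names of every rule the event trips."""
    return [name for name, predicate in RULES.items() if predicate(event)]

print(evaluate({"type": "auth_fail", "count": 7}))   # ['failed-login-burst']
print(evaluate({"type": "admin_login", "hour": 3}))  # ['off-hours-admin']
```

Real products compile hundreds of such rules and run them against the event stream continuously, which is why the text stresses that the tools must operate at attack speed.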

Thus, organizations must consider deploying security analytics software.  Various security analytics tools have been introduced with the aim of detecting and blocking malicious attacks; examples include AlienVault, QRadar, Splunk, LogRhythm, FireEye, and McAfee Enterprise Security Manager.  This project discusses and analyzes four of these security analytics tools, their features, and their pros and cons.

  1. AlienVault

AlienVault started in 2007.  In 2010 the company received an initial round of venture capital funding and relocated its headquarters from Spain to the United States (Nicolett & Kavanagh, 2011).  AlienVault’s Unified Security Management (USM) provides SIEM, vulnerability assessment, network and host intrusion detection, and file integrity monitoring functions via software or appliance options (Mello, 2016; Nicolett & Kavanagh, 2011), along with asset discovery and flow and packet capture (Mello, 2016).  The AlienVault Unified SIEM combines proprietary components with the Open Source Security Information Management (OSSIM) platform.  OSSIM, an open source security management platform available since 2003, is integrated into the SIEM solution to offer enhanced performance, consolidated reporting and administration, and multi-tenancy for managed security service providers (Nicolett & Kavanagh, 2011).  AlienVault added real-time features in 2010.  AlienVault’s long-term plan features solving existing competitive gaps in areas such as application monitoring and monitoring of data and users, while the short-term plan includes dynamic monitoring tied to rule-based correlation (Nicolett & Kavanagh, 2011).

AlienVault’s USM offers several major advantages.  AlienVault offers a SIEM solution, a file integrity monitoring system, a vulnerability assessment system, endpoint control, and an intrusion detection system.  AlienVault is based on open source and is regarded as less expensive than the corresponding product sets from most competitors in the SIEM domain (Nicolett & Kavanagh, 2011).  A more recent review of AlienVault Unified SIEM by (Vanhees, 2018) indicates that the AlienVault Unified Security System incorporates various technologies such as vulnerability scanning, NetFlow, a host intrusion detection system, and a network intrusion detection system; moreover, it is easy to scale up and down and scores very high in that aspect (Vanhees, 2018).  In another review by (Morrissey, 2015), AlienVault was found able to identify risks and vulnerabilities on systems, provide log consolidation and analysis, and correlate threats between different systems.  As cited in (Mello, 2016), Gartner recommends AlienVault’s USM for organizations which require a broad set of integrated security capabilities, either on-premises or in an AWS environment.

AlienVault also has disadvantages and limitations.  It lacks support for Database Activity Monitoring (DAM), and there is no feature to integrate third-party DAM technologies (Mello, 2016; Nicolett & Kavanagh, 2011).  AlienVault also lacks support for integrating Identity and Access Management (IAM) beyond Active Directory monitoring (Mello, 2016; Nicolett & Kavanagh, 2011).  Reviews of AlienVault Unified Security Management by (Morrissey, 2015; Vanhees, 2018) indicate that it is not easy to develop custom plugins or to set up correlation rules compared with other products.  Moreover, it is difficult to deal with static data, as the tool handles only dynamic data such as syslogs, NetFlow, and packet captures.  Custom reporting is very limited; even creating a bar chart to visualize the most commonly attacked ports is not possible.  Organizations which require high-end reporting, advanced correlation rules, or complex use case scenarios should not consider AlienVault (Vanhees, 2018).

Despite these limitations, a review by (Vanhees, 2018) describes AlienVault’s USM as a “huge value”: it does not require any additional setup and is baked into the tool nicely and smoothly compared with other vendors.  It helps detect suspicious traffic (Morrissey, 2015; Vanhees, 2018) and automatically syncs with other intelligence feeds, which is regarded as handy.  The correlation rules are used to spot unwanted behavior proactively (Vanhees, 2018).

  2. QRadar

QRadar is IBM’s SIEM platform (Mello, 2016).  It is composed of QRadar Log Manager, Data Node, SIEM, Risk Manager, Vulnerability Manager, QFlow and VFlow Collectors, and Incident Forensics (Mello, 2016).  It uses Big Data capabilities to keep up with advanced threats and prevent attacks before they occur (IBM, 2013).  QRadar can reveal hidden relationships within massive amounts of security data, using proven analytics to reduce billions of security events to a manageable set of prioritized incidents (IBM, 2013).
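As a generic illustration of how correlation collapses many raw events into a few prioritized incidents (this is a conceptual sketch, not IBM's actual algorithm), consider grouping events by source and promoting only noisy sources:

```python
from collections import defaultdict

def correlate(events: list, threshold: int = 3) -> list:
    """Collapse raw events into incidents: group by source IP, keep only
    sources that cross a noise threshold, and rank by severity then volume."""
    by_src = defaultdict(list)
    for e in events:
        by_src[e["src_ip"]].append(e)
    incidents = [
        {"src_ip": ip, "event_count": len(group),
         "severity": max(e["severity"] for e in group)}
        for ip, group in by_src.items() if len(group) >= threshold
    ]
    return sorted(incidents, key=lambda i: (i["severity"], i["event_count"]),
                  reverse=True)

# Invented sample: four events from one source, one stray event from another.
sample = ([{"src_ip": "10.0.0.9", "severity": 7}] * 4 +
          [{"src_ip": "10.0.0.1", "severity": 2}])
print(correlate(sample))  # one prioritized incident; the stray event is filtered out
```

Production correlation engines group on many more dimensions (user, asset, rule, time window), but the reduction from events to prioritized incidents follows this shape.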

The IBM Security QRadar platform can be deployed as a physical or virtual appliance or as a cloud service (Mello, 2016).  It can be installed using various options, such as an “all-in-one” implementation, or scaled out.  QRadar provides capabilities such as collection and processing of event and log data, NetFlow, deep-packet inspection of network traffic, and full-packet capture and behavior analysis (McKelvey, Curran, Gordon, Devlin, & Johnston, 2015; Mello, 2016).  IBM added more enhanced features to QRadar to support IBM X-Force Exchange for sharing threat intelligence and IBM Security App Exchange for sharing applications and security app extensions (Mello, 2016).  After the acquisition of Resilient Systems in 2016 (Rowinski, 2016), IBM developed an integrated end-to-end security operations and response platform offering quick response to cyber incidents.  It also enhanced the multi-tenant feature and the search and system administration capabilities (Mello, 2016).

The IBM’s QRadar utilizes a distributed data management system providing horizontal scaling of data storage (D. Sullivan, 2016).  While organizations can utilize distributed USIM to access local data in some cases, they may also require searching across the distributed platform in some other scenarios.  QRadar incorporates a search engine which enables searching locally as well as across platforms (D. Sullivan, 2016).  QRadar is a big data SIEM utilizing data nodes instead of storage area network (SAN), which help in reducing the associated cost and the complexity of the management (D. Sullivan, 2016).  QRadar is a distributed storage model based on data nodes and can scale to petabytes of storage and can meet the requirement of organizations for a large volume of long-term storage (D. Sullivan, 2016).  QRadar has a vulnerability management component which is designed to integrate data from various vulnerability scanners and enhance that data with context-relevant information about network usage (D. Sullivan, 2016).   It has been used to process a large volume of events per second in real-world applications (D. Sullivan, 2016).   QRadar can be deployed in the Cloud to reduce the infrastructure management (D. Sullivan, 2016)).   The Security QRadar Risk Manager add-on offers the capability of the automated monitoring,  provides support for multiple vendor product audits, and assessment of compliance policy, and threat modeling (D. Sullivan, 2016).  QRadar platform can meet the requirement of mid-size and large organization with general SIEM needs.  It is also a good fit for mid-size organizations which require a solution with flexible implementation, hosting, and monitoring options.  QRadar is also good for organizations which look for a single security event and response platform for their security operation centers (Mello, 2016).  
In recent reviews by (Verified-User2, 2017), IBM QRadar was the preferred option for the clients of the organizations across all departments for fast deployment and instant log visibility to meet security and compliance requirements. 

QRadar has various areas of strength.  It provides an integrated view of log and event data and correlates network traffic behavior across NetFlow and event logs.  It also supports security event and log monitoring in the IaaS Cloud Service Model, including monitoring for AWS CloudTrail and SoftLayer (Mello, 2016).  The QRadar security platform is straightforward to implement and maintain (Mello, 2016).  In more recent reviews (Verified-User2, 2017), IBM QRadar was described as a simple, flexible framework with easy deployment and out-of-the-box content good enough to achieve quick wins.  In another review (Verified-User1, 2017), the creation of rules was found intuitive and fast, helping in emergency scenarios.  The maintenance of IBM QRadar is light, and the appliance has nearly flawless uptime (Verified-User1, 2017).  Report generation is very functional and efficient (Verified-User1, 2017), and the product was described as a positive return on investment (Verified-User2, 2017).  Moreover, third-party capabilities can be plugged into the framework through the Security App Exchange (Mello, 2016).  This third-party support is useful because QRadar has limitations in endpoint monitoring for threat detection and response and in basic file integrity monitoring (Mello, 2016).

QRadar has additional limitations besides endpoint monitoring.  In recent reviews (Verified-User1, 2017), the limitations of IBM QRadar include a steep learning curve compared to other platforms; QRadar does require training and homework.  Moreover, its threat feed utilization of STIX (Structured Threat Information Expression)/TAXII (Trusted Automated Exchange of Indicator Information) remains very limited (Verified-User1, 2017).  It may require a considerable amount of tuning during deployment, with very little “out of the box” offense information (Verified-User1, 2017).  In another recent review (Verified-User2, 2017), IBM QRadar was found limited in event log parsing, and its correlation engine needs to be more flexible and dynamic (Verified-User2, 2017).

  3. Splunk

Splunk Enterprise (SE) is the company’s core product (Mello, 2016).  It provides log and event collection, search, and visualization using Splunk’s query language (Mello, 2016).  Splunk Enterprise Security (SES) provides security features including correlation rules, reports, and pre-defined dashboards (Mello, 2016).  SES supports real-time monitoring and alerts, incident response, and compliance reporting (Mello, 2016).  SE and SES can be deployed locally on-premises; in a public, private, or hybrid Cloud; or as a service using the Cloud Service Models (Mello, 2016).  Splunk acquired Caspida in 2015 (Mello, 2016; Tinsley, 2015) and subsequently added native behavioral analytics to its repertoire and provided support for third-party UEBA products (Mello, 2016).  Additional features have been added to SES to integrate with other behavioral products, along with improved incident management and workflow capabilities, lower data storage requirements, better visualizations, and the expansion of monitoring to additional infrastructure and software-as-a-service providers (Mello, 2016).

Splunk has broad data ingestion capabilities, offering connectors to data sources and allowing custom connectors (D. Sullivan, 2016).  Splunk stores data in a schema-less database and builds indexes at ingestion, enabling various data types with rapid query response (D. Sullivan, 2016).  Splunk provides a flexible SIEM platform which can handle various data sources and has the analytic capabilities of a single data analysis platform (Mello, 2016).  Splunk was found to be gaining “significant” visibility across Gartner’s client base (Mello, 2016) and has strong advanced security analytics for combating advanced threat detection and insider threats (Mello, 2016).
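Index-at-ingestion over schema-less data can be illustrated with a toy inverted index; this is a conceptual sketch, not Splunk's implementation:

```python
from collections import defaultdict

class MiniIndexer:
    """Toy sketch of index-at-ingestion: events are stored as-is (schema-less),
    and every token is indexed when the event arrives, so searches are lookups
    rather than scans."""

    def __init__(self):
        self.events = []                    # raw events, any shape
        self.index = defaultdict(set)       # token -> set of event ids

    def ingest(self, raw: str) -> None:
        doc_id = len(self.events)
        self.events.append(raw)
        for token in raw.lower().split():   # tokenize at ingestion time
            self.index[token].add(doc_id)

    def search(self, term: str) -> list:
        return [self.events[i] for i in sorted(self.index.get(term.lower(), set()))]

idx = MiniIndexer()
idx.ingest("ERROR login failed for admin")
idx.ingest("INFO login ok for alice")
print(idx.search("login"))   # both events
print(idx.search("error"))   # only the first event
```

Because nothing about the event's structure is fixed up front, new log formats can be ingested without schema changes, which is the property the paragraph above attributes to the platform.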

Splunk has various advantages.  In a review by (Taitingfong, 2015), Splunk was found to be flexible and extensible.  It can ingest logs from disparate systems using disparate formats and file types (Murke, 2015; Taitingfong, 2015), and it is flexible in parsing, formatting, and enriching the data (Taitingfong, 2015).  Splunk scales very well in large environments, adding more indexers as needed as the environment expands (Kasliwal, 2015; Murke, 2015; Taitingfong, 2015).  Splunk can do multi-site clustering and search head clustering, providing load balancing and redundancy (Taitingfong, 2015).  In another review (Murke, 2015), Splunk was credited with real-time analysis capability, was found to provide the best results amongst its competitors, returned fast results on large datasets, and was easy to manage.

            The Splunk system has limitations.  The SES product provides only basic pre-defined correlations for user monitoring and reporting (Mello, 2016).  Its licensing model for high-volume data costs more than other SIEM products, although Splunk offers a new licensing scheme for high-volume data users (Mello, 2016).  In recent reviews (Kasliwal, 2015; Taitingfong, 2015), Splunk's search language and its more advanced formatting and statistical analysis were found to be very deep and to require a learning curve.  Splunk's dashboards may require additional visualizations, which must be developed using Simple XML, JavaScript, and CSS (Taitingfong, 2015).  Moreover, Splunk releases minor revisions very quickly because of the increased number of bugs (Taitingfong, 2015).  In another review, Murke (2015) found Splunk limited in providing optimized results on smaller datasets.  Splunk was also found to be costly (Kasliwal, 2015; Murke, 2015).

  • LogRhythm

LogRhythm's SIEM supports an n-tier, scalable, decentralized architecture (Mello, 2016).  It is composed of various components such as the Platform Manager, AI Engine, Data Processors, Data Indexers, and Data Collectors (Mello, 2016).   A LogRhythm deployment can be consolidated as an all-in-one system, with implementation options of appliance, software, or virtual-instance formats (Mello, 2016).  User and entity behavioral analytics, an integrated incident-response workflow, and automated response capabilities can be combined with other capabilities such as event, endpoint, and network monitoring (Mello, 2016).   The log-processing and indexing capabilities of LogRhythm's SIEM are divided into two components of the system.  More features have been added, such as unstructured search through a new storage backend based on Elasticsearch, clustered full data replication, more parsers for applications and protocols, improved risk-based prioritization (RBP), and support for Cloud services such as AWS, Box, and Okta (Mello, 2016).  Integration with Cloud Access Security Broker solutions such as Microsoft's Cloud App Security and Zscaler has also been added to enhance LogRhythm's SIEM system (Mello, 2016).
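The general idea behind risk-based prioritization (RBP) can be sketched as combining a threat score with asset criticality to rank alerts. The 60/40 weighting, the asset-criticality table, and all names below are assumptions made for illustration, not LogRhythm's actual algorithm:

```python
# Illustrative risk-based prioritization: rank alerts by a combined score.
# The weighting formula and criticality values are assumed, not vendor logic.
ASSET_CRITICALITY = {"db01": 9, "web01": 6, "kiosk01": 2}  # 0-10 scale

def risk_score(alert):
    threat = alert["threat"]                       # 0-10 rule/analyst score
    asset = ASSET_CRITICALITY.get(alert["host"], 5)
    return round(threat * 0.6 + asset * 0.4, 1)    # assumed 60/40 weighting

alerts = [
    {"host": "kiosk01", "threat": 8},  # high threat, low-value asset
    {"host": "db01", "threat": 5},     # moderate threat, critical asset
]
ranked = sorted(alerts, key=risk_score, reverse=True)
for a in ranked:
    print(a["host"], risk_score(a))  # db01 6.6, then kiosk01 5.6
```

The point of RBP is visible in the output: the moderate-threat alert on the critical database outranks the higher-threat alert on a low-value kiosk.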

LogRhythm has various advantages.  LogRhythm can integrate advanced threat-monitoring capabilities with SIEM (Mello, 2016).   It offers effective out-of-the-box content and workflows that help with automation (Mello, 2016).  It provides highly interactive, customizable, and automated response capabilities for performing actions on remote devices (Mello, 2016). It is praised for offering straightforward SIEM implementation and maintenance (Mello, 2016).  LogRhythm is also described as very visible in the SIEM evaluations of Gartner's clients (Mello, 2016).  In a review, Eng (2016) described LogRhythm as a great SIEM that is easy to implement.  Creating building blocks in LogRhythm is intuitive and easy, using a drag-and-drop technique that can be easily manipulated (Eng, 2016).  It offers statistical building blocks with powerful anomaly-detection capabilities that were found to be more difficult or not possible in other SIEM products (Eng, 2016).   LogRhythm provides better event classification than any other SIEM product (Eng, 2016).   In another review, Ilbery (2016) noted that LogRhythm can import log files from hundreds of devices into one place and that the resulting database is easy to search.  It can also send email alerts about network activities, and it provides a good view of the network equipment, traffic, and servers (Ilbery, 2016).
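The statistical building blocks with anomaly-detection capabilities that the review praises can be illustrated with a minimal z-score check in Python; the threshold, data, and function names are assumed for illustration:

```python
import statistics

# Sketch of a statistical anomaly-detection building block: flag a host
# whose hourly event count deviates from its own baseline by more than
# Z_LIMIT standard deviations. Threshold and data are illustrative.
Z_LIMIT = 3.0

def is_anomalous(baseline_counts, new_count):
    mean = statistics.mean(baseline_counts)
    stdev = statistics.pstdev(baseline_counts)
    if stdev == 0:
        return new_count != mean
    return abs(new_count - mean) / stdev > Z_LIMIT

baseline = [100, 110, 95, 105, 90]   # normal hourly login counts
print(is_anomalous(baseline, 102))   # typical hour -> False
print(is_anomalous(baseline, 400))   # sudden spike -> True
```

Baselining each entity against its own history, rather than a fixed global threshold, is what distinguishes behavioral anomaly detection from simple static rules.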

            LogRhythm has some limitations.  The included custom report engine requires more enhancement and improvement (Mello, 2016).  In a review, Eng (2016) identified a need for LogRhythm to provide back-end support for threat-intelligence lists, and proposed replacing the code with hash tables to avoid the excessive cost associated with referencing lists in rules.   The reporting of LogRhythm was described by Eng (2016) as the worst of all SIEM systems because it is not intuitive and needs improvement.  In another review, Ilbery (2016) described the upgrade process as not easy.

Conclusion

This project discussed and analyzed scalable and intelligent security analytics tools from different vendors.  The discussion began with an overview of SIEM and the essential capabilities of an analytics-driven SIEM.  The first generation of SIEM was overwhelmed by the volume of data and the rise of advanced threats.  The second generation of SIEM was introduced because Big Data tools have the potential to significantly enhance security intelligence by minimizing the time needed to correlate, consolidate, and contextualize diverse security event information, and to correlate long-term historical data for forensic purposes.

Examples of these security analytics tools include AlienVault, QRadar, Splunk, LogRhythm, FireEye, McAfee Enterprise Security Manager, and so forth.  The discussion and the analysis were limited to four scalable and intelligent security analytics tools: AlienVault, QRadar, Splunk, and LogRhythm.  AlienVault's Unified Security Management (USM) includes SIEM, vulnerability assessment, asset discovery, flow and packet capture, network and host detection, and file integrity monitoring.  USM is used to improve security visibility throughout the organization and to detect security incidents in real time.  IBM Security QRadar provides log and event management and behavioral and reporting analysis for networks and applications.  Splunk Enterprise Security enables searching the data and applying visual correlation to identify malicious events and collect data about the context of those events.  LogRhythm supports log management, has network forensics capabilities, and can be deployed in smaller environments.  The discussion and the analysis included the key design considerations for the scalability of each tool.  The project also discussed the advantages and the disadvantages of these four intelligent security analytics tools.

References

CSA. (2013). Expanded Top Ten Big Data Security and Privacy Challenges. Cloud Security Alliance, Big Data Working Group.

Di Mauro, M., & Di Sarno, C. (2018). Improving SIEM capabilities through an enhanced probe for encrypted Skype traffic detection. Journal of Information Security and Applications, 38, 85-95.

Eng, J. (2016). LogRhythm Review: “So you want to know which SIEM to buy”. Retrieved from https://www.trustradius.com/reviews/logrhythm-2016-06-08-06-30-33.

Glick, B. (2014). Information Security is a Big Data Issue. Retrieved from http://www.computerweekly.com/feature/Information-security-is-a-bigdata(05.05).

IBM. (2013). Extending Security Intelligence with Big Data Solutions. Retrieved from http://www.ndm.net/siem/pdf/Extending%20security%20intelligence%20with%20big%20data%20solutions.PDF.

Ilbery, S. (2016). User Review: “LogRhythm does what it promises”. Retrieved from https://www.trustradius.com/reviews/logrhythm-2016-06-07-13-22-49.

Kasliwal, G. (2015). Splunk Enterprise Review: “Splunk: Dynamic and Fast compliance tool”. Retrieved from https://www.trustradius.com/reviews/splunk-2015-12-07-15-43-44.

McKelvey, N., Curran, K., Gordon, B., Devlin, E., & Johnston, K. (2015). Cloud Computing and Security in the Future Guide to Security Assurance for Cloud Computing (pp. 95-108): Springer.

Mello, J. P. J. (2016). Gartner Magic Quadrant for SIEM 2016: Not just for compliance anymore. Retrieved from https://techbeacon.com/highlights-gartner-magic-quadrant-siem-2016.

Morrissey, M. (2015). AlienVault USM Review: “Making sense of the logging overload”. Retrieved from https://www.trustradius.com/reviews/alienvault-unified-security-management-2015-09-29-23-21-40.

Murke, S. (2015). Splunk Enterprise Review: “For Real-Time Data Analyzing Get Splunk”. Retrieved from https://www.trustradius.com/reviews/splunk-2015-12-07-23-21-33.

Nicolett, M., & Kavanagh, K. M. (2011). Magic Quadrant for Security Information and Event Management.

Oltsik, J. (2013). The Big Data Security Analytics Era Is Here.

Oprea, A., Li, Z., Yen, T.-F., Chin, S. H., & Alrwais, S. (2015). Detection of early-stage enterprise infection by mining large-scale log data. Paper presented at the Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on.

Pearson, S. (2013). Privacy, security and trust in cloud computing Privacy and Security for Cloud Computing (pp. 3-42): Springer.

Radford, C. J. (2014). Challenges and Solutions Protecting Data within Amazon Web Services. Network Security, 2014(6), 5-8. doi:10.1016/S1353-4858(14)70058-3

Rowinski, M. (2016). IBM Security Closes Acquisition of Resilient Systems. Retrieved from http://www-03.ibm.com/press/us/en/pressrelease/49472.wss, IBM News Release.

SANS. (2013). SANS Security Analytic Survey Retrieved from https://www.sans.org/reading-room/whitepapers/analyst/security-analytics-survey-34980, White Paper.

Shamsolmoali, P., & Zareapoor, M. (2016). Data Security Model In Cloud Computing. Proceedings of 2nd International Conference on Computer Science Networks and Information Technology, Held on 27th – 28th Aug 2016, in Montreal, Canada.

Splunk. (2017). The Six Essential Capabilities of an Analytics-Driven SIEM. Retrieved from https://www.splunk.com/en_us/form/the-six-essential-capabilities-of-analytics-driven-siem.html, White Paper.

Sullivan, D. (2015). Three reasons to deploy security analytics software in the enterprise. Retrieved from http://searchsecurity.techtarget.com/feature/Three-reasons-to-deploy-security-analytics-software-in-the-enterprise.

Sullivan, D. (2016). Comparing the top big data security analytics tools. Retrieved from http://searchsecurity.techtarget.com/feature/Comparing-the-top-big-data-security-analytics-tools.

Taitingfong, K. (2015). Splunk Enterprise Review: “Splunk – the most flexible SIEM tool on the market”. Retrieved from https://www.trustradius.com/reviews/splunk-2015-12-01-16-51-52.

Tinsley, K. (2015). Splunk Acquires Caspida. Retrieved from https://www.splunk.com/en_us/newsroom/press-releases/2015/splunk-acquires-caspida.html, Splunk Press Release.

Vanhees, K. (2018). AlienVault USM Review: “Nothing is what it SIEMs”. Retrieved from https://www.trustradius.com/reviews/alienvault-unified-security-management-2015-09-29-01-55-26.

Verified-User1. (2017). IBM Security QRadar Review: “Qradar – Big League SIEM Solution”. Retrieved from https://www.trustradius.com/products/ibm-security-qradar/reviews.

Verified-User2. (2017). IBM Security QRadar Review: “IBM QRadar – A go-to SIEM product”. Retrieved from https://www.trustradius.com/reviews/ibm-security-qradar-2017-06-21-04-06-13, Trust Radius Reviews.