"Artificial Intelligence without Big Data Analytics is lame, and Big Data Analytics without Artificial Intelligence is blind." Dr. O. Aly, Computer Science.
The purpose of this discussion is to analyze steganography. The discussion also addresses methods for detecting hidden information and the possible threats posed by those who use steganography.
Steganography
Steganography is a technique for embedding secret messages within another, innocuous-looking message or file. Steganographic algorithms typically work by altering the least significant bits of the many bytes that make up an image file. The changes are so minor that they do not affect the viewed image, which allows communicating parties to hide messages in plain sight. For instance, they might embed a secret message within an illustration on an innocent web page (Abernathy & McMillan, 2016; Stewart, Chapple, & Gibson, 2015).
Steganography is often used to embed secret messages within image or WAV files because these files are so large that the secret message can be easily missed by even the most observant inspector. The method is used for illegal or questionable activities such as espionage and child pornography, but it can also serve legitimate purposes, such as adding watermarks to documents to protect intellectual property. The hidden information is known only to the creator of the file. If another user later creates an unauthorized copy of the content, the watermark can be used to detect the copy and trace it back to the source. Steganography is a simple technology to use, with free tools openly available on the Internet, such as the iSteg tool, which requires only a text file containing the secret message and an image file in which to hide it (Stewart et al., 2015).
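The least-significant-bit embedding described above can be sketched in a few lines of Python. This is an illustrative sketch over a raw byte buffer rather than a real image format; the function names and the one-byte length header are assumptions of this example, not part of any cited tool.

```python
def embed_lsb(cover: bytes, message: bytes) -> bytearray:
    """Hide `message` in the least significant bits of `cover`.

    A one-byte length header (max 255 bytes, an assumption of this
    sketch) is prepended so the extractor knows where to stop.
    """
    if len(message) > 255:
        raise ValueError("message too long for one-byte length header")
    payload = bytes([len(message)]) + message
    # Expand the payload into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return stego


def extract_lsb(stego: bytes) -> bytes:
    """Rebuild the hidden message from the cover's least significant bits."""
    def read_byte(offset: int) -> int:
        value = 0
        for i in range(8):
            value = (value << 1) | (stego[offset + i] & 1)
        return value

    length = read_byte(0)
    return bytes(read_byte(8 + 8 * k) for k in range(length))
```

Because each cover byte changes by at most one, embedding into the pixel bytes of an 8-bit image leaves it visually indistinguishable from the original.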
Methods for Steganography Detection
Although the message is hidden within an image or WAV file, it can be detected by comparing the original file with the file suspected of carrying the hidden message. Using a hashing algorithm such as MD5, a hash can be created for both files. If the hashes match, the file does not contain a hidden message; if they differ, the second file has been modified, and forensic analysis techniques may be able to retrieve the message. With respect to egress monitoring, an organization can periodically capture hashes of internal files that rarely change. For instance, graphics files such as JPEG and GIF files typically stay the same. If security experts suspect that a malicious insider is embedding additional data within these files and emailing them outside the organization, they can compare the original hashes with the hashes of the files the insider sent out. If the hashes differ, the files are different and may contain hidden messages (Stewart et al., 2015).
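The hash-comparison check described above can be sketched with Python's standard `hashlib` module. The helper names here are this example's own; in practice a stronger algorithm such as SHA-256 could be substituted for MD5.

```python
import hashlib


def file_md5(path: str) -> str:
    """Return the MD5 digest of a file, read in chunks to bound memory use."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def possibly_modified(original_path: str, suspect_path: str) -> bool:
    """True if the suspect file's hash differs from the original's.

    A differing hash only proves the file changed; forensic analysis
    would still be needed to recover any embedded message.
    """
    return file_md5(original_path) != file_md5(suspect_path)
```

For egress monitoring, the baseline hashes of rarely-changing files would be captured once and compared against outbound copies on demand.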
References
Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide. Pearson IT Certification.
Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.). Wiley.
The purpose of this discussion is to analyze the physical security considerations involved in the environmental design of a data center and the reasons for those considerations. It also analyzes the various ways to control access to the data center and the types of access. The discussion begins with a brief overview of physical threats and physical security, followed by seven safeguards for sensitive computers and equipment, internal security, and environmental physical security measures.
Physical Threats and Physical Security
The purpose of physical security is to protect against physical threats (Stewart, Chapple, & Gibson, 2015). Physical threats can be either natural or human-based, and both must be considered during the design of a data center. Natural threats include flooding, earthquakes, landslides, and volcanoes; human-based threats include theft, vandalism, and intentional fire. Table 1 summarizes a brief list of physical threats that should be considered during data center design. Physical and environmental security spans two security domains: the first covers the engineering and management aspects of security, and the second covers foundational concepts, investigations and incident management, and disaster recovery (Abernathy & McMillan, 2016; Stewart et al., 2015).
Table 1. Physical Threats to Data Center Design Consideration.
Thus, physical security should be the first line of defense and should be considered from the selection of the site through its design (Abernathy & McMillan, 2016). A realistic assessment of the area's historical natural disasters should be performed, and a cost/benefit analysis should determine which threats occur most often, which can be addressed, and which must be accepted (Abernathy & McMillan, 2016). Some of these threats are human-based, such as explosion and fire (whether intentional or accidental), vandalism, and theft.
All physical security should be based on the layered defense model (Abernathy & McMillan, 2016). The underlying concept of this model is the use of multiple approaches that support each other, so there is no single point of failure or total dependency on a single physical security control. If one tier of defense, such as perimeter security, fails, another layer serves as the backup.
Physical security can be enhanced by applying the following concepts. The first is Crime Prevention Through Environmental Design (CPTED), which can be applied to any building. CPTED addresses the overall design of the data center, from the entrance to the landscaping and interior design, with the goal of shaping behavior and minimizing crime. Three main strategies apply CPTED to data center design. The first strategy, natural access control, applies to the entrances of the building, such as doors, lighting, fences, and landscaping; its underlying idea is to minimize entry points and tighten control over them to create a security zone in the building. The second strategy, natural surveillance, maximizes the visibility of the data center to deter crime. The third strategy, natural territorial reinforcement, extends a sense of ownership to employees by creating a feeling of community in the area; it is implemented through walls, fences, landscaping, and lighting design.
Implementing the CPTED strategies and achieving their goals is not always possible, so a security plan must address these strategies and close any gaps. The physical security plan is therefore the second concept in the layered defense model. It should address techniques for deterring criminal activity, delaying intruders, detecting intrusions, assessing the situation, and responding to and disrupting intrusions. Additional physical security issues include visibility, the surrounding area and external entities, accessibility, and construction elements such as walls and doors. The data center should not have internal compartments such as drop ceilings or partitions, as they can be used to gain access and therefore increase risk. Separate heating, ventilation, and air conditioning (HVAC) for these rooms is highly recommended (Abernathy & McMillan, 2016).
Seven Safeguards for Sensitive Computers and Equipment
With respect to computer and equipment rooms, physical access to rooms that contain sensitive servers and critical network gear should be controlled by keeping them locked and secured at all times. The design of these rooms should incorporate the following seven safeguards. First, locate computer and equipment rooms in the center of the building. Second, provide a single access door or point of entry. Third, avoid the top floor and the basement of the building. Fourth, install and frequently test fire detection and suppression systems. Fifth, install raised flooring. Sixth, install separate power supplies for these rooms. Finally, use only solid doors (Abernathy & McMillan, 2016).
Internal Security
While perimeter security is important, security within the building is just as important, as prescribed by the concentric circle model. These measures affect the interior of the data center, such as doors and door lock types. Door types include vault doors and bullet-resistant doors. Lock types include electric locks, cipher locks, and proximity authentication devices that contain electronic access control (EAC) circuitry. Various lock types can also protect cabinets and secure devices, such as warded locks, tumbler locks, and combination locks. Biometrics can provide the highest level of physical access control but is regarded as the most expensive to deploy in a data center. Glass entries are also found in many facilities and data centers as windows, glass doors, and glass walls; the types to consider include standard glass for residential areas, tempered glass for extra strength, acrylic glass, and laminated glass. With respect to visitors, there must be a control technique for protection. Additional physical security measures should cover equipment rooms and work areas, including restricted work areas, media storage facilities, and evidence storage (Abernathy & McMillan, 2016).
Environmental Physical Security
Physical security measures should include environmental security measures to address the availability principle of the CIA triad. These measures include fire protection, fire detection, and fire suppression. The power supply should also be considered, including outage types such as surges, brownouts, faults, blackouts, and sags. Environmental physical security should also include preventive measures such as controlling static electricity. HVAC must be considered as well, since excessive heat can cause equipment problems and humidity can corrode connections. Water leakage and flooding should be considered too (Abernathy & McMillan, 2016).
In summary, security professionals must consider various techniques for protecting the data center, from the selection of the building to interior security to environmental security. They should apply the CPTED strategies and the seven safeguards, including the natural access control discussed above.
References
Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide. Pearson IT Certification.
Stewart, J., Chapple, M., & Gibson, D. (2015). CISSP (ISC)² Certified Information Systems Security Professional Official Study Guide (7th ed.). Wiley.
The purpose of this discussion is to analyze biometric access control for securing a highly sensitive area of an organization's operating environment. The discussion begins with a brief overview of access control, followed by biometric technology and the implementation of a biometric system.
Access Control
The purpose of access control, whether for physical assets or logical assets such as sensitive data, is to limit access to a network, system, or device to authorized users only. Access control is provided through physical and logical controls. Physical access control limits access to physical components such as networks, systems, and devices; locks are the most popular physical technique for preventing access to data centers, including network devices such as routers, switches, and wiring. Other physical access controls include guards and biometrics, which should be chosen based on the value of the assets and the need to protect them. Logical access control, on the other hand, limits the access of authorized users through software or hardware components; examples include authentication and encryption. Implementing physical and logical access control requires a good understanding of the requirements, the administration methods of access control, and the assets to be protected. Protecting a physical data center is different from protecting the data stored in it (Abernathy & McMillan, 2016).
Biometric Technology
Biometric technology measures physiological or behavioral characteristics. Physiological characteristics include any unique physical attribute of the user, such as the iris, retina, and fingerprints. Behavioral characteristics measure the actions of the user, including voice patterns and data-entry characteristics. Biometric security measures have begun to be embedded into operating systems, such as Apple's Touch ID technology. Understanding both physiological and behavioral characteristics must be a priority to ensure the adoption of these technologies for more secure access control.
Physiological biometrics employ a scanning device to measure certain information about a physiological characteristic. Physiological biometric systems include fingerprint, finger scan, hand geometry, hand topography, palm or hand scans, facial scans, retina scans, iris scans, and vascular scans.
Behavioral biometrics employ a scanning device to measure a person's actions. Behavioral biometric systems include signature dynamics, keystroke dynamics, and voice pattern or print.
Security professionals must have a good understanding of the following biometric terms so that they do not struggle during the implementation of such a technology: enrollment time, feature extraction, accuracy, throughput rate, acceptability, false rejection rate (FRR), false acceptance rate (FAR), and crossover error rate (CER). Table 1 summarizes each of these terms with a brief description.
Table 1. Biometric Technology Related Terms.
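As an illustration of how FAR, FRR, and CER relate, the sketch below locates the crossover point on a set of measured error rates. The function name and the sample numbers are invented for this example; a real evaluation would sweep the matching threshold of an actual biometric system.

```python
def crossover_error_rate(thresholds, far, frr):
    """Find the threshold where FAR and FRR are closest (the CER point).

    `far` and `frr` are error rates measured at each candidate threshold:
    as the matcher is tightened, false acceptances fall while false
    rejections rise, so the two curves cross somewhere in the middle.
    Returns (threshold, rate) at the crossover.
    """
    best = min(range(len(thresholds)), key=lambda i: abs(far[i] - frr[i]))
    return thresholds[best], (far[best] + frr[best]) / 2


# Illustrative sweep: FAR falls and FRR rises as the threshold tightens.
thresholds = [1, 2, 3, 4, 5]
far = [0.20, 0.10, 0.05, 0.02, 0.01]
frr = [0.01, 0.02, 0.05, 0.10, 0.20]
# The curves cross at threshold 3, giving a CER of 0.05 (5%).
```

A lower CER indicates a more accurate system, which is why the CER is the usual single-number basis for comparing biometric methods.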
When using biometric technology, security professionals often refer to a Zephyr chart, which illustrates the comparative strengths and weaknesses of biometric systems. However, other methods should also be considered to measure the effectiveness of each biometric system and its level of user acceptance. Table 2 summarizes popular biometric methods, first ranked by effectiveness (most effective first) and then ranked by user acceptance. As the table shows, the iris scan tops the effectiveness list, while the voice pattern tops the user-acceptance list.
Table 2. Summary of the Popular Biometric Methods.
Implementation of Biometric System
According to CSA (2011), security controls must be strategically positioned and conform to acceptable quality standards consistent with prevalent norms and best practices. Thus, entry points must be secured with an access control system such as proximity cards or biometric access. In a cloud environment, the traditional username-and-password authentication method is not sufficient; organizations and cloud users must employ strong authentication techniques such as smart cards/PKI, biometrics, and RSA tokens (Sukhai, 2004). Implementing biometric technology adds a more secure layer for access either to the physical locations where systems, networks, and devices reside or to the data stored in those data centers. From the user's perspective, biometrics can be a convenient method, because the biometric traits are part of the body and last as long as the user is authorized to access the facilities and data. Since the iris scan appears to be the most effective biometric method, the researcher will employ it when implementing the biometric system. An iris scan captures the colored portion of the eye, including all rifts, coronas, and furrows, and has higher accuracy than any other biometric scan.
In summary, this discussion analyzed biometric access control, which can be implemented to secure a highly sensitive area of the organization. It analyzed access control techniques, biometric methods, and the implementation of a biometric system. The analysis indicates that the iris scan is the most effective method, while the voice pattern ranks highest in user acceptance.
References
Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide. Pearson IT Certification.
CSA. (2011). Security guidance for critical areas of focus in cloud computing, v3.0. Cloud Security Alliance, 1-76.
Sukhai, N. B. (2004). Access control & biometrics. Paper presented at the Proceedings of the 1st Annual Conference on Information Security Curriculum Development.
The purpose of this discussion is to analyze security measures for virtual and cloud environments. It also analyzes current security models and possible enhancements to increase the protection of these environments.
Virtualization
Virtualization is a core technology of cloud computing. Its purpose is to virtualize resources for the cloud service models: Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS) (Gupta, Srivastava, & Chauhan, 2016). Virtualization allows many virtual machine (VM) instances to be created on a single physical system, and using these VMs provides flexibility, agility, and scalability to cloud resources. A VM is provided to the client to access resources at a remote location. Key features of virtualization include resource utilization through isolation among hardware, operating systems, and software, and multi-tenancy, which allows simultaneous access to the VMs residing on a single physical machine. Once created, a VM can be copied and migrated. These features are double-edged: they provide flexibility, scalability, and agility, but they also create security challenges and concerns, which are among the biggest obstacles to the widespread adoption of cloud computing (Ali, Khan, & Vasilakos, 2015).
Hardware virtualization on a physical machine is implemented with a hypervisor, of which there are two types. Type 1, the "bare metal" hypervisor, runs directly on the physical system and provides a layer between the hardware and the VMs, as illustrated in Figure 1. Type 2, the "hosted" hypervisor, is deployed on top of an operating system, as illustrated in Figure 2.
Figure 1. Hypervisor Type 1: Bare Metal Hypervisor. Adapted from (Gupta et al., 2016).
Figure 2. Hypervisor Type 2: Hosted Hypervisor. Adapted from (Gupta et al., 2016).
Virtualization exposes many security flaws to intruders. Traditional security measures that control physical systems are inadequate or ineffective for virtualized data centers and for hybrid and private cloud environments (Gupta et al., 2016). Moreover, the default configuration of a hypervisor does not always include security measures that can protect the virtual and cloud environment.
One of the roles of the hypervisor is to manage the mapping between the VMs and the physical resources. With a Type 1 "bare metal" hypervisor, the hypervisor is a single point of failure whose compromise breaches the entire virtualized environment on the physical system. A Type 2 "hosted" hypervisor exposes even more threats than the bare metal hypervisor, since its configuration also depends on the host operating system. The VMs hosted on the same physical system communicate with each other, which can open loopholes for intruders.
Virtualization is exposed to various threats and vulnerabilities, including VM escape, VM hopping, VM theft, VM sprawl, insecure VM migration, sniffing, and spoofing. Figure 3 illustrates these vulnerabilities.
Figure 3. Vulnerabilities of Virtualization. Adapted from (Gupta et al., 2016).
As indicated in Gupta et al. (2016), the hypervisor should have built-in firewall security, and console access (USB, NIC) should be disabled to prevent unauthorized access. Role-Based Access Control (RBAC) is effective for controlling hyperjacking of VMs: roles and responsibilities should be defined for VM users so that access authorization can be checked.
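A minimal sketch of the RBAC idea just described, assuming illustrative role and operation names (not drawn from any particular hypervisor product):

```python
# Role -> permitted VM operations. The role and operation names are
# illustrative; a real deployment would define them per policy.
ROLE_PERMISSIONS = {
    "vm-operator": {"start", "stop", "snapshot"},
    "vm-admin":    {"start", "stop", "snapshot", "migrate", "delete"},
    "auditor":     {"view-logs"},
}


def is_authorized(user_roles, operation):
    """A user may perform an operation if any assigned role grants it."""
    return any(operation in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

Checking every VM operation through a single function like this is what lets the roles and responsibilities defined for VM users actually gate access authorization.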
Security Principles, Security Modes, Security Models, and Security Implementation
As indicated in Abernathy and McMillan (2016), the primary goal of all security measures is to provide protection and to ensure that each measure succeeds. The three major security principles are confidentiality, integrity, and availability, known as the CIA triad. Confidentiality is provided when data cannot be read, whether through access control and encryption of data at rest on the hard drive or through encryption of data in transit; confidentiality is the opposite of disclosure (Abernathy & McMillan, 2016). Integrity is provided when data is not changed in any way by unauthorized users; it is typically verified with a hashing algorithm or a checksum. Availability measures the time a resource or its data is accessible, expressed as a percentage of "up" time, with 99.9% uptime representing more availability than 99% uptime. The availability principle ensures access to data whenever it is needed and is often described as a prime goal of security. Most attacks violate at least one of these principles. The defense-in-depth technique is therefore highly recommended as an additional layer of security: for instance, even if a firewall is configured for protection, access control lists should still be applied to resources so that sensitive data remains protected if the firewall is breached.
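The uptime percentages mentioned above translate into concrete downtime budgets; a quick sketch (the function name is this example's own):

```python
def max_downtime_minutes_per_year(availability_pct: float) -> float:
    """Downtime budget implied by an availability percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600
    return minutes_per_year * (1 - availability_pct / 100)


# 99.9% availability allows roughly 525.6 minutes (about 8.8 hours) of
# downtime per year, while 99% allows roughly 5,256 minutes (3.65 days),
# which is why each extra "nine" matters so much.
```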
There are four major security modes, which are typically used with Mandatory Access Control (MAC): dedicated security mode, system high-security mode, compartmented security mode, and multilevel security mode. MAC operates in different security modes at different times based on variables such as the sensitivity of the data, the clearance level of the user, and the actions users are authorized to take. In all four modes, a non-disclosure agreement (NDA) must be signed, and access to information depends on the mode.
Security models map the rules that a computer system must follow to the intentions of security policymakers. Various types of security models provide different approaches to this mapping (Abernathy & McMillan, 2016):
State Machine Models,
Multilevel Lattice Models,
Matrix-Based Models,
Noninterference Models, and
Information Flow Models.
Moreover, there are formal security models that incorporate security concepts and principles to guide the security design of systems. These formal models include the following seven (Abernathy & McMillan, 2016); the details of each are beyond the scope of this discussion.
Bell-LaPadula Model.
Biba Model.
Clark-Wilson Integrity Model.
Lipner Model.
Brewer-Nash Model.
Graham-Denning Model.
Harrison-Ruzzo-Ullman Model.
With respect to security implementation, there are standards that should be followed when implementing security measures, including ISO/IEC 27001 and 27002 and PCI-DSS. ISO/IEC 27001 is the most popular; organizations use it to obtain certification for information security. Its guidance ensures that the organization's information security management system (ISMS) is properly built, administered, maintained, and improved. ISO/IEC 27002 provides a code of practice for information security management, covering security measures such as access control, cryptography, and compliance. PCI-DSS v3.1 is specific to the payment card industry.
Security Models in Cloud Computing
The service model is one of the main models in cloud computing; services are offered to cloud users through a cloud service provider. Security and privacy are the main challenges and concerns when using a cloud computing environment. Although there is demand to leverage cloud resources to provide services to clients, there is also a requirement that the cloud servers and resources not learn any sensitive information about the data being managed, stored, or queried (Chaturvedi & Zarger, 2015). Effort should be made to improve users' control over their data in the public environment. Cloud computing security models include the Multi-Tenancy Model, the Cloud Cube Security Model, the Mapping Model of Cloud, Security, and Compliance, and the Cloud Risk Accumulation Model of the CSA (Chaturvedi & Zarger, 2015).
The Multi-Tenancy Model is described as the major functional characteristic of cloud computing, allowing multiple applications to provide cloud services to clients. Tenants are separated by virtual partitions, and each partition holds a tenant's data, customized settings, and configuration. Virtualization on a physical machine allows different users' applications to share computing resources such as memory, processor, I/O, and storage, improving the utilization of cloud resources. SaaS is a good example of the multi-tenant model, providing the scalability to serve a large number of clients over web services. Security experts describe multi-tenancy as vulnerable because it puts confidentiality, one of the core security principles, at risk between tenants. A side-channel attack, based on information obtained from bandwidth monitoring, is a significant risk in the multi-tenancy model. Other risks include the assignment of resources to clients of unknown identity and intent, and the storage of multiple tenants' data in the same database tablespaces or on the same backup tapes.
The Cloud Cube Security Model is characterized by four dimensions: internal/external, proprietary/open, perimeterized/de-perimeterized, and insourced/outsourced. The Mapping Model of Cloud, Security, and Compliance provides a method for analyzing the gaps between the cloud architecture, the compliance framework, and the corresponding security control strategies provided by the cloud service provider or third parties. The Cloud Risk Accumulation Model of the CSA is the last of the cloud security models; the three cloud models of IaaS, PaaS, and SaaS have different security requirements because of their layer dependencies.
The Virtual Private Cloud (VPC) deployment model provides more security than the public deployment model. In a VPC, the user can apply access control at the instance level as well as at the network level, and policies are configured and assigned to groups based on access roles. As a cloud deployment model, the VPC addresses problems such as loss of authentication, loss of confidentiality, loss of availability, and loss or corruption of data (Abdul, Jena, Prasad, & Balraju, 2014). A VPC is logically isolated from other virtual networks in the cloud. As indicated in Abdul et al. (2014), the VPC is regarded as the most prominent approach to trusted computing technology. However, organizations must implement security measures based on their business requirements. For instance, organizations and users can select the IP address range and create subnets, route tables, network gateways, and security settings, as illustrated in Figure 4.
In summary, security measures must be implemented to protect the cloud environment. Virtualization imposes threats on the cloud environment, and the hypervisor is its major component. It is recommended that the hypervisor have built-in firewall security and that console access (USB, NIC) be disabled to prevent unauthorized access. Role-Based Access Control (RBAC) should be used to control hyperjacking of VMs, with roles and responsibilities defined for VM users so that access authorization can be checked. The Virtual Private Cloud, as a trusted deployment model of cloud computing, provides a more secure environment than the public cloud. Security implementations must follow the relevant standards, and organizations must comply with them to protect both organizations and users.
References
Abdul, A. M., Jena, S., Prasad, S. D., & Balraju, M. (2014). Trusted environment in virtual cloud. International Journal of Advanced Research in Computer Science, 5(4).
Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide. Pearson IT Certification.
Ali, M., Khan, S. U., & Vasilakos, A. V. (2015). Security in cloud computing: Opportunities and challenges. Information Sciences, 305, 357-383. doi:10.1016/j.ins.2015.01.025
Chaturvedi, D. A., & Zarger, S. A. (2015). A review of security models in cloud computing and an innovative approach. International Journal of Computer Trends and Technology (IJCTT), 30(2), 87-92.
Gupta, M., Srivastava, D. K., & Chauhan, D. S. (2016). Security challenges of virtualization in cloud computing. Paper presented at the Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies, Udaipur, India.
The purpose of this discussion is to analyze risk management and risk assessment and the techniques for incorporating them into information security plans and programs. It also discusses the purpose and benefits of risk analysis and techniques for addressing emerging threats, as well as vulnerability analysis and risk mitigation techniques.
Risk Management and Risk Assessment
Risk management must be implemented to identify, analyze, evaluate, and monitor risk. Management should evaluate and assess the risks associated with the business. Incorporating risk management and risk assessment into the information security program is critical; for instance, the organization should have a business continuity plan, which is part of risk management (Abernathy & McMillan, 2016).
Security concepts such as vulnerability, threat, threat agent, risk, exposure, and countermeasure must be understood before risk management and risk assessment can be incorporated into the organization's information security program. For instance, the absence of access control is a vulnerability that an attacker can exploit; the attacker in this case is called the threat agent, and the implementation of appropriate access control is a countermeasure. Risk is the probability that a threat agent will exploit a vulnerability, combined with the impact if the threat is carried out. Exposure, on the other hand, occurs when assets such as private data are lost or manipulated; it can be caused by the lack of appropriate security measures such as access control. In summary, the security concept cycle involves a threat that can exploit a vulnerability, which creates a risk. The risk can damage assets, which causes exposure. The exposure requires safeguards, which in turn affect the threat agent (Abernathy & McMillan, 2016).
The risk management process must be sponsored and committed to by top management, such as the CEO (Chief Executive Officer), CFO (Chief Financial Officer), CIO (Chief Information Officer), CPO (Chief Privacy Officer), and CSO (Chief Security Officer), as they bear ultimate responsibility for protecting the organization's data. The risk management policy provides the guidelines and direction for implementing and incorporating risk management into the business security plan, and it must include all players, such as the risk management team and the risk analysis team (Abernathy & McMillan, 2016).
Risk assessment is a critical tool in risk management for determining vulnerabilities and threats, evaluating their impact, and identifying the security measures to control them. The risk management team should consider four main components of risk assessment. First, the assets and their values are identified. Second, the vulnerabilities and threats are determined. Third, the threat probability and the business impact are calculated. Finally, the threat impact is balanced against the cost of countermeasures (Abernathy & McMillan, 2016).
Risk Analysis
Risk analysis helps organizations identify their most important security resources. Due to the dynamic nature of the organization's security function, it is highly recommended that the risk analysis process be revisited on a regular basis to identify areas for improvement. The role of the CSO is to implement and manage all security aspects of the business, including risk analysis, security policies and procedures, incident handling, security awareness training, and emerging technologies (Abernathy & McMillan, 2016).
The risk analysis team plays a significant role in risk management and assessment. The team must determine and analyze the threat events that can occur, the potential impact of the threats, the frequency of the threats, and the level of confidence in the information gathered. During the risk analysis process, the team collects information in accordance with NIST SP 800-30 using automated risk assessment tools, questionnaires, interviews, and policy document reviews. During this analysis, the assets and their values are identified, the threats and vulnerabilities are identified, and the probability and likelihood of threat events are determined. The team must then determine the impact of each threat. The last step derives the risk as a combination of likelihood and impact; when both the likelihood and the impact are high, the risk receives a higher priority (Abernathy & McMillan, 2016).
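The combination of likelihood and impact described above can be sketched as a simple priority score. This is an illustrative example only, not a procedure from the cited source; the risk names and level mappings are invented.

```python
# Illustrative sketch: combining likelihood and impact ratings into a
# priority score, as in the risk analysis process described above.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_priority(likelihood: str, impact: str) -> int:
    """Return a 1-9 priority score; higher means address the risk sooner."""
    return LEVELS[likelihood] * LEVELS[impact]

# Hypothetical risk register entries: (description, likelihood, impact).
risks = [
    ("lost laptop with customer data", "medium", "high"),
    ("data center flood", "low", "high"),
    ("phishing against staff", "high", "high"),
]
# Sort so the highest-priority risks come first.
ranked = sorted(risks, key=lambda r: risk_priority(r[1], r[2]), reverse=True)
```

A real program of this kind would draw its level definitions and thresholds from the organization's risk management policy rather than hard-coding them.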
There are two types of risk analysis: quantitative and qualitative. Quantitative risk analysis assigns monetary and numeric values to every aspect of the risk analysis process, including asset value, threat frequency, vulnerability severity, impact, safeguard costs, and so forth. Quantitative risk analysis has an advantage over qualitative risk analysis because it relies less on guesswork. However, its disadvantages include the time and effort required to collect the data and the mathematical calculations involved, which can be difficult (Abernathy & McMillan, 2016).
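The calculations behind quantitative risk analysis commonly use the standard formulas SLE = asset value x exposure factor and ALE = SLE x annualized rate of occurrence (ARO). The sketch below applies them with invented figures for illustration:

```python
# Quantitative risk analysis sketch using the standard SLE/ALE formulas.
# All dollar figures and rates below are invented for illustration.
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: expected loss from one occurrence of the threat."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE: expected loss per year, given the annualized rate of occurrence."""
    return sle * aro

asset_value = 200_000.0      # hypothetical database server
exposure_factor = 0.25       # fraction of the asset value lost per incident
aro = 0.5                    # incident expected once every two years

sle = single_loss_expectancy(asset_value, exposure_factor)   # 50,000
ale = annualized_loss_expectancy(sle, aro)                   # 25,000

# A countermeasure is cost-justified when its annual cost is below the
# reduction in ALE it provides.
countermeasure_cost = 10_000.0
justified = countermeasure_cost < ale
```

This also illustrates the final risk assessment step above: balancing threat impact against countermeasure cost.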
Qualitative risk analysis, unlike quantitative risk analysis, does not assign monetary and numeric values to every aspect of the risk analysis process. Instead, it relies on intuition, experience, and best-practice techniques such as brainstorming, focus groups, surveys, questionnaires, interviews, the Delphi method, and meetings. Its advantages include the prioritization of risks and the identification of areas for immediate improvement in addressing threats. Its disadvantages are that all results are subjective and that no dollar values are provided for cost-benefit analysis or budgeting (Abernathy & McMillan, 2016).
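As a rough illustration of a Delphi-style technique mentioned above, anonymous expert ratings can be collected per round and summarized before being fed back to the panel. The rating scale, risk names, and use of the median here are all assumptions for the sketch, not part of the cited method:

```python
# Hypothetical Delphi-style aggregation: anonymous expert ratings
# (1 = negligible, 5 = severe) summarized with the median, which resists
# single-expert outliers, before the next feedback round.
from statistics import median

def delphi_round(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Summarize each risk's ratings so experts can reconsider next round."""
    return {risk: median(scores) for risk, scores in ratings.items()}

round_one = {
    "insider data theft": [4, 5, 4, 2],
    "website defacement": [2, 1, 2, 3],
}
consensus = delphi_round(round_one)
```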
Given the advantages and disadvantages of both approaches, most organizations implement quantitative risk analysis for tangible assets in conjunction with qualitative risk analysis for intangible assets (Abernathy & McMillan, 2016).
Threat Modeling
Organizations must implement a threat model to identify threats and potential attacks and to implement the appropriate mitigations against them. Threat modeling is used not only to identify threats but also to rate the threats and their impact on the organization. Threat modeling involves six major steps: identifying the assets, identifying the threat agents, researching the existing countermeasures in use, identifying any vulnerabilities, prioritizing the identified risks, and identifying the countermeasures to reduce the risk (Abernathy & McMillan, 2016).
Threat modeling has three approaches: application-centric, asset-centric, and attacker-centric. Application-centric threat modeling uses the application's architecture diagram to analyze threats. Asset-centric threat modeling focuses on the organization's assets, classifying them according to the sensitivity of the data and their value to an attacker; it uses attack trees, attack graphs, or display patterns to determine the methods that could be used to attack an asset. Attacker-centric threat modeling focuses on profiling the characteristics, skills, and motivation of the attacker to exploit vulnerabilities; a mitigation strategy is implemented based on the attacker's profile, and tree diagrams are used in this modeling (Abernathy & McMillan, 2016).
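An attack tree of the kind mentioned above can be sketched as a small recursive structure: OR nodes let the attacker pick the cheapest path, while AND nodes require every child step. The tree contents and cost values are hypothetical.

```python
# Illustrative attack tree for asset-centric threat modeling. OR nodes take
# the cheapest child attack; AND nodes require every child step. The costs
# are hypothetical attacker-effort scores.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "leaf"            # "leaf", "or", or "and"
    cost: float = 0.0             # used only for leaves
    children: list["Node"] = field(default_factory=list)

def cheapest_attack(node: Node) -> float:
    """Minimum attacker effort needed to achieve this node's goal."""
    if node.kind == "leaf":
        return node.cost
    child_costs = [cheapest_attack(c) for c in node.children]
    return min(child_costs) if node.kind == "or" else sum(child_costs)

steal_db = Node("steal customer database", "or", children=[
    Node("bribe admin", cost=50.0),
    Node("remote compromise", "and", children=[
        Node("phish credentials", cost=10.0),
        Node("escalate privileges", cost=15.0),
    ]),
])
# min(50, 10 + 15) = 25: the phishing path is cheapest, so defenders would
# prioritize hardening it.
```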
Vulnerability Analysis
The development of the business impact analysis (BIA) relies heavily on vulnerability analysis and risk assessment, both of which can be performed by the Business Continuity Plan (BCP) committee or by a separately appointed risk assessment team. It is critical for organizations to perform vulnerability analysis to identify vulnerabilities. Vulnerability assessments and penetration tests are part of the security control assessment process to mitigate risk. Vulnerability analysis examines hardware, software, and personnel to identify any weakness or absence of a countermeasure. Analyzing such weaknesses helps reduce the risk and the probability that an attacker or threat agent will exploit the vulnerability. Compensative access controls must be implemented as countermeasures for the identified vulnerabilities, since they act as risk mitigations; by applying them, the organization can reduce risk to a more manageable level. An example of a compensative control is the requirement of two authorized signatures to release sensitive data (Abernathy & McMillan, 2016).
References
Abernathy, R., & McMillan, T. (2016). CISSP Cert Guide. Pearson IT Certification.
The purpose of this project is to discuss and analyze scalable and intelligent security analytics tools from different vendors. The discussion begins with an overview of SIEM and the essential capabilities of an analytics-driven SIEM. The first generation of SIEM has been overwhelmed by the volume of data and the rise of advanced threats. The second generation of SIEM was introduced because Big Data tools have the potential to significantly enhance security intelligence by minimizing the time for correlating, consolidating, and contextualizing diverse security event information, and by correlating long-term historical data for forensic purposes. The discussion and analysis are limited to four scalable and intelligent security analytics tools: AlienVault, QRadar, Splunk, and LogRhythm. AlienVault's Unified Security Management (USM) includes SIEM, vulnerability assessment, asset discovery, flow and packet capture, network and host detection, and file integrity monitoring; it is used to improve security visibility throughout the organization and to detect security incidents in real time. IBM Security QRadar provides log and event management and behavioral and reporting analysis for networks and applications. Splunk Enterprise Security enables searching the data and implementing visual correlations to identify malicious events and to collect data about the context of those events. LogRhythm supports log management, has network forensics capabilities, and can be deployed in smaller environments. The discussion and analysis include the key design considerations for the scalability of each tool. The project also discusses the advantages and disadvantages of these four intelligent security analytics tools.
Advanced threats, often called advanced persistent threats (APTs), are the primary reason driving organizations to collect and analyze security information. APTs are sophisticated attacks (Oprea, Li, Yen, Chin, & Alrwais, 2015) that can involve multiple events occurring across the organization which would otherwise not be connected without the use of advanced intelligence (SANS, 2013). APTs pose severe risks and damages to organizations and governments because they target confidential proprietary information (SANS, 2013).
Advanced threats require advanced intelligence and analytic tools. Recent years have witnessed the rise of more sophisticated attacks, including APTs (IBM, 2013; Oprea et al., 2015; SANS, 2013). In a survey conducted by (SANS, 2013), APTs affected two-thirds of the respondents over the past two years. One respondent reported seeing almost 500 attacks in a nine-month period, and many reported somewhere between one and twenty; the number of APTs is thus increasing. As indicated in (Radford, 2014), "Frequently, a victim of these attacks [APTs] do not even know that their perimeter security has been penetrated for an average of 243 days. They all have up-to-date anti-virus software, and 100% of breaches involve stolen credentials." APTs are on every organization's mind (Radford, 2014). Cloud Service Providers (CSPs) are prime targets for cyber-attacks. Organizations that utilize cloud technology must consider the myriad access points to their data when it is hosted in a cloud environment, and should consider at length the solutions available to them, ensuring that all data access points are covered (Radford, 2014). The APT is one of various security threats against which organizations must protect their information.
Various security techniques are offered to deal with security threats, including APTs. This project discusses and analyzes four scalable and intelligent security tools from different vendors. The analysis covers the main functions of each tool, such as anomaly detection, event correlation, and real-time analytics capabilities. It also assesses whether each tool is suited to or used in cloud computing, identifies the targeted applications, and includes the pros and cons of each tool. The project begins with an overview of Security Information and Event Management, followed by the scalable and intelligent security tools from different vendors.
Security Information and Event Management (SIEM)
The traditional and conventional model for protecting data often focused on network-centric and perimeter security, using devices such as firewalls and intrusion detection systems (Oprea et al., 2015; Shamsolmoali & Zareapoor, 2016). However, this conventional approach is not adequate when dealing with Big Data and Cloud Computing technologies, and it does not provide enough protection against advanced persistent threats (APTs), privileged users, or other malicious security attacks (Oltsik, 2013; Oprea et al., 2015; Shamsolmoali & Zareapoor, 2016).
Thus, many organizations deploy other techniques, such as database audit and protection (DAP) and security information and event management (SIEM), to collect information about network activities (Shamsolmoali & Zareapoor, 2016). Examples of SIEM products include RSA enVision and HP ArcSight, which use a standardized approach to collecting information and events, storing and querying them, and providing rule-driven correlation (Pearson, 2013). However, SIEM provides inputs that must be properly analyzed and translated into a format usable by senior risk evaluators and strategic policymakers, and this manual process is not adequate when dealing with security issues. Moreover, risk assessment standards such as ISO 2700x and NIST operate at a macro level and usually do not fully leverage information coming from the logging and auditing activities carried out by IT operations (Pearson, 2013). Thus, SIEM lacks a solution for business audit and strategic risk assessment (Pearson, 2013).
According to (Splunk, 2017), an analytics-driven SIEM has six essential capabilities: (1) real-time monitoring, (2) incident response, (3) user monitoring, (4) threat intelligence, (5) advanced analytics, and (6) advanced threat detection. Table 1 summarizes these six essential capabilities (Splunk, 2017).
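The real-time monitoring and advanced threat detection capabilities above rest on correlation rules evaluated over event streams. The sketch below is a generic illustration, tied to no particular vendor, of one classic rule: repeated failed logins from a source followed by a success.

```python
# Generic correlation-rule sketch (no particular vendor): several failed
# logins from one source followed by a success suggests a brute-force
# compromise and should raise an alert.
from collections import defaultdict

THRESHOLD = 3  # failed attempts before a subsequent success is suspicious

def correlate(events):
    """events: iterable of (source_ip, outcome) tuples in time order."""
    failures = defaultdict(int)
    alerts = []
    for ip, outcome in events:
        if outcome == "failure":
            failures[ip] += 1
        elif outcome == "success":
            if failures[ip] >= THRESHOLD:
                alerts.append(ip)
            failures[ip] = 0  # reset the counter after any success
    return alerts

stream = [("10.0.0.5", "failure")] * 4 + [("10.0.0.5", "success"),
                                          ("10.0.0.9", "success")]
```

Production SIEM rules add time windows and suppression, but the stateful pattern-over-a-stream shape is the same.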
Table 1. Six Essential Capabilities of an Analytics-Driven SIEM. Adapted from (Splunk, 2017).
In a survey conducted by (SANS, 2013), 58% of the respondents were using dedicated log management and 37% were using a SIEM system. The results also showed that nearly half of the respondents used dedicated log management platforms, SIEM, and scripted searches as part of their data collection and analysis processes, while less than 10% utilized unstructured data repositories and dedicated Big Data frameworks for analysis and search. Figure 1 illustrates the types of security data analysis tools in use.
Figure 1. Types of Security Data Analysis Tool in Use. Adapted from (SANS, 2013).
As indicated in (SANS, 2013), the dedicated log management platform does not meet the needs created by Big Data collection. SIEM is anticipated to step up and meet the needs created by Big Data collection (SANS, 2013). The survey results also showed that 26% of the respondents rely on analytics tools to do the heavy lifting of threat intelligence, and others leverage SIEM platforms and manual techniques as illustrated in Figure 2.
Figure 2. Threat Intelligence Capabilities. Adapted from (SANS, 2013).
In the same survey, the result showed that 51% of the respondents indicated they are currently using third-party intelligence services as illustrated in Figure 3.
Figure 3. The Use of Third Party Intelligence Tools. Adapted from (SANS, 2013).
The survey results also showed that, for future investments in security analytics and Big Data platforms, most organizations are still focused on fundamentals such as better SIEM, more training to detect patterns of malicious activity, vulnerability management and network protection tools, and endpoint visibility, as illustrated in Figure 4.
Figure 4. Future Investments in Analytics/Intelligence. Adapted from (SANS, 2013).
Scalable and Intelligent Security Analytic Tools
Managing alerts from different intrusion detection sensors and rules was a major challenge in organizational settings. The first generation of SIEM was able to aggregate and filter alarms from many sources and present actionable information to security analysts (CSA, 2013). However, the first generation of SIEM has been overwhelmed by the volume of data (Glick, 2014) and cannot keep up with the rate and complexity of the current wave of cyber attacks and advanced threats (Splunk, 2017).
The second generation of SIEM was introduced because Big Data tools have the potential to significantly enhance security intelligence by minimizing the time for correlating, consolidating, and contextualizing diverse security event information, and by correlating long-term historical data for forensic purposes (CSA, 2013).
Most current SIEM systems provide the same basic features, apart from vendor-specific additions. A basic SIEM system consists of a Server, a Database, a FrontEnd, Probes, and Agents (Di Mauro & Di Sarno, 2018). The Server is the core component of the whole deployment; it collects and processes the logs coming from external sources on behalf of the correlation engine. The Database stores the data for analysis and the runtime configuration of the SIEM. The FrontEnd is the user interface to the Server, while the Probes are the sensors deployed within the monitored infrastructure. Typical examples of Probes include perimeter defense systems such as firewalls and intrusion prevention systems, host sensors such as host IDSs, and security applications such as web firewalls and authentication systems (Di Mauro & Di Sarno, 2018). The Agents are the counterparts of the probes embedded in the Server; they convert the heterogeneous logs generated by different probes into logs with a common syntax and specific semantics (Di Mauro & Di Sarno, 2018). Figure 5 illustrates the classical framework of a SIEM system.
Figure 5. A Classical Framework of a SIEM System. Adapted from (Di Mauro & Di Sarno, 2018).
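The Agent role described above, converting heterogeneous probe logs into one common syntax, can be sketched as a set of probe-specific parsers behind a single normalization entry point. The log formats and field names below are invented for illustration:

```python
# Sketch of a SIEM Agent: route each raw probe log to a probe-specific
# parser that emits records with a common field layout. The input formats
# and the normalized schema are hypothetical.
import re

def parse_firewall(line: str) -> dict:
    # e.g. "DROP src=1.2.3.4 dst=10.0.0.1"
    m = re.match(r"(\w+) src=(\S+) dst=(\S+)", line)
    return {"probe": "firewall", "action": m.group(1),
            "src": m.group(2), "dst": m.group(3)}

def parse_hids(line: str) -> dict:
    # e.g. "alert|/etc/passwd|modified"
    sig, target, what = line.split("|")
    return {"probe": "hids", "action": what, "src": target, "dst": None}

PARSERS = {"firewall": parse_firewall, "hids": parse_hids}

def normalize(probe: str, raw: str) -> dict:
    """Agent entry point: every probe's output ends up in one syntax."""
    return PARSERS[probe](raw)
```

Once all events share one layout, the correlation engine can treat firewall drops and host alerts uniformly.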
The security analytics market is rapidly evolving as vendors merge, developers add new features, and tools once deployed exclusively on-premises are also offered as cloud services (Dan Sullivan, 2015). As indicated in (Dan Sullivan, 2015), there are three reasons for organizations to deploy security analytics software: compliance, security event detection and remediation, and forensics. Compliance is regarded as the key driver of security requirements for most organizations; verifying compliance is imperative because organizations implement their security policies and procedures in response to government and industry regulations (Dan Sullivan, 2015). Security analytics tools should alert organizations to significant events defined by triggering rules (Dan Sullivan, 2015), and they can help minimize the time and effort required to collect, filter, and analyze event data (Dan Sullivan, 2015). Because attacks can occur at high speed, these tools must also operate at high speed to detect malicious attacks. If an attack does take place, the organization should be able to block future attacks through forensics, as forensic analysis can reveal vulnerabilities in the organization's network or desktop security controls that were not known prior to the attack (Dan Sullivan, 2015).
Thus, organizations must consider deploying security analytics software. Various security analytics tools have been introduced with the aim of detecting and blocking malicious attacks; examples include AlienVault, QRadar, Splunk, LogRhythm, FireEye, McAfee Enterprise Security Manager, and so forth. This project discusses and analyzes four of these security analytic tools, their features, and their pros and cons.
AlienVault
AlienVault started in 2007. During 2010, the company received an initial round of venture capital funding and relocated its headquarters from Spain to the United States (Nicolett & Kavanagh, 2011). AlienVault's Unified Security Management (USM) provides SIEM, vulnerability assessment, network and host intrusion detection, and file integrity monitoring functions via software or appliance options (Mello, 2016; Nicolett & Kavanagh, 2011), as well as asset discovery and flow and packet capture (Mello, 2016). The AlienVault Unified SIEM combines proprietary components with Open Source Security Information Management (OSSIM). OSSIM, available since 2003, is an open-source security management platform; AlienVault integrates it into the SIEM solution to offer enhanced performance, consolidated reporting and administration, and multi-tenancy for managed security service providers (Nicolett & Kavanagh, 2011). AlienVault added real-time features in 2010. AlienVault's long-term plan features solving existing competitive gaps in areas such as application monitoring and monitoring for data and users, while the short-term plan includes adding dynamic system monitoring to rule-based correlation (Nicolett & Kavanagh, 2011).
AlienVault's USM offers several major advantages. AlienVault offers a SIEM solution, file integrity monitoring, vulnerability assessment, endpoint control, and intrusion detection. It is based on open source and is regarded as less expensive than the corresponding product sets from most competitors in the SIEM domain (Nicolett & Kavanagh, 2011). A more recent review of the AlienVault Unified SIEM (Vanhees, 2018) indicates that the AlienVault Unified Security System incorporates various technologies such as vulnerability scanning, NetFlow, a host intrusion detection system, and a network intrusion detection system. Moreover, it is easy to scale up and down, and it scores very high in that respect (Vanhees, 2018). In another review (Morrissey, 2015), AlienVault was found to identify risks and vulnerabilities on systems, provide log consolidation and analysis, and correlate threats between different systems. As cited in (Mello, 2016), Gartner recommends AlienVault's USM for organizations that require a broad set of integrated security capabilities, either on-premises or in an AWS environment.
AlienVault has disadvantages and limitations. It lacks support for Database Activity Monitoring (DAM), and there is no feature to integrate third-party DAM technologies (Mello, 2016; Nicolett & Kavanagh, 2011). AlienVault also lacks support for integrating Identity and Access Management (IAM) beyond Active Directory monitoring (Mello, 2016; Nicolett & Kavanagh, 2011). Reviews of AlienVault Unified Security Management (Morrissey, 2015; Vanhees, 2018) indicate that developing custom plugins and setting up correlation rules is harder than with other products. Moreover, it is difficult to deal with static data, as the product handles only dynamic data such as syslogs, NetFlow, and packet captures. Custom reporting is very limited; even creating a bar chart to visualize the most commonly attacked ports is not possible. Organizations that require high-end reporting, advanced correlation rules, or complex use-case scenarios should not consider AlienVault (Vanhees, 2018).
Despite these limitations, a review by (Vanhees, 2018) describes AlienVault's USM as a "huge value" because its threat intelligence requires no additional setup and is baked into the tool smoothly compared with other vendors. It helps detect suspicious traffic (Morrissey, 2015; Vanhees, 2018), and it automatically syncs with other intelligence feeds, which is regarded as handy. The correlation rules are used to proactively spot unwanted behavior (Vanhees, 2018).
QRadar
QRadar is IBM's SIEM platform (Mello, 2016). It is composed of QRadar Log Manager, Data Node, SIEM, Risk Manager, Vulnerability Manager, QFlow and VFlow Collectors, and Incident Forensics (Mello, 2016). It uses Big Data capabilities to keep up with advanced threats and prevent attacks before they occur (IBM, 2013). QRadar can reveal hidden relationships within massive amounts of security data, using proven analytics to reduce billions of security events to a manageable set of prioritized incidents (IBM, 2013).
The IBM Security QRadar platform can be deployed as a physical or virtual appliance or as a cloud service (Mello, 2016). It can be installed using various options, such as an "all-in-one" implementation, or scaled out. QRadar provides capabilities such as event and log data collection and processing, NetFlow, deep-packet inspection of network traffic, full-packet capture, and behavior analysis (McKelvey, Curran, Gordon, Devlin, & Johnston, 2015; Mello, 2016). IBM added enhanced features to QRadar to support IBM X-Force Exchange for sharing threat intelligence and IBM Security App Exchange for sharing applications and security app extensions (Mello, 2016). After acquiring Resilient Systems in 2016 (Rowinski, 2016), IBM developed an integrated end-to-end security operations and response platform offering quick response to cyber incidents. It also enhanced the multi-tenant feature and the search and system administration capabilities (Mello, 2016).
IBM's QRadar utilizes a distributed data management system providing horizontal scaling of data storage (D. Sullivan, 2016). While organizations can use a distributed SIEM to access local data in some cases, in other scenarios they may need to search across the distributed platform; QRadar incorporates a search engine that enables searching locally as well as across the platform (D. Sullivan, 2016). QRadar is a Big Data SIEM that uses data nodes instead of a storage area network (SAN), which helps reduce the associated cost and management complexity (D. Sullivan, 2016). Its distributed storage model based on data nodes can scale to petabytes of storage and can meet organizations' requirements for large volumes of long-term storage (D. Sullivan, 2016). QRadar has a vulnerability management component designed to integrate data from various vulnerability scanners and enrich that data with context-relevant information about network usage (D. Sullivan, 2016). It has been used to process large volumes of events per second in real-world applications (D. Sullivan, 2016), and it can be deployed in the cloud to reduce infrastructure management (D. Sullivan, 2016). The Security QRadar Risk Manager add-on offers automated monitoring, support for multi-vendor product audits, compliance policy assessment, and threat modeling (D. Sullivan, 2016). The QRadar platform can meet the requirements of mid-size and large organizations with general SIEM needs. It is also a good fit for mid-size organizations that require a solution with flexible implementation, hosting, and monitoring options, and for organizations looking for a single security event and response platform for their security operations centers (Mello, 2016). In recent reviews (Verified-User2, 2017), IBM QRadar was the preferred option across all departments of the reviewers' client organizations for its fast deployment and instant log visibility to meet security and compliance requirements.
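The data-node idea behind this horizontal scaling can be illustrated generically: events are partitioned across nodes at ingestion, and a search scatters to every node and gathers the merged results. This is an illustration of the general pattern, not IBM's actual implementation.

```python
# Generic illustration (not IBM's implementation) of horizontally scaled
# data nodes: events are hash-partitioned across nodes at ingestion, and a
# search scatters the query to all nodes and gathers the matches.
from hashlib import sha256

NODES = [[] for _ in range(3)]   # three stand-in data nodes

def node_for(source_ip: str) -> int:
    """Stable partitioning: the same source always lands on the same node."""
    return int(sha256(source_ip.encode()).hexdigest(), 16) % len(NODES)

def ingest(event: dict) -> None:
    NODES[node_for(event["src"])].append(event)

def search(predicate) -> list:
    """Scatter-gather: evaluate the predicate on every node, merge results."""
    return [e for node in NODES for e in node if predicate(e)]

for i in range(6):
    ingest({"src": f"10.0.0.{i}", "port": 22 if i % 2 else 443})
ssh_events = search(lambda e: e["port"] == 22)
```

Adding capacity then means adding nodes and rebalancing partitions, rather than growing a single SAN.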
QRadar has various areas of strength. It provides an integrated view of log and event data and correlates network traffic behavior across NetFlow and event logs. It also supports security event and log monitoring in the IaaS cloud service model, including monitoring for AWS CloudTrail and SoftLayer (Mello, 2016). The QRadar security platform is straightforward to implement and maintain (Mello, 2016). In more recent reviews (Verified-User2, 2017), IBM QRadar was described as a simple, flexible framework with easy deployment and out-of-the-box content good enough for quick wins. In another review (Verified-User1, 2017), rule creation was found to be intuitive and fast, which helps in emergency scenarios; maintenance is light, and the appliance has nearly flawless uptime. Report generation is very functional and efficient (Verified-User1, 2017), and QRadar was described as a positive return on investment (Verified-User2, 2017). Moreover, third-party capabilities can be plugged into the framework through the Security App Exchange (Mello, 2016). This third-party support is useful because QRadar is limited in endpoint monitoring for threat detection and response and in basic file integrity monitoring (Mello, 2016).
QRadar has additional limitations besides endpoint monitoring. Recent reviews (Verified-User1, 2017) cite a steep learning curve compared to other platforms; QRadar requires training and homework. Moreover, threat feed utilization of STIX (Structured Threat Information Expression) and TAXII (Trusted Automated Exchange of Indicator Information) remains very limited (Verified-User1, 2017). It may require a considerable amount of tuning during deployment, with very little "out of the box" offense information (Verified-User1, 2017). In another recent review (Verified-User2, 2017), IBM QRadar was found limited in event log parsing, and the correlation engine was found to need more flexibility and dynamism.
Splunk
Splunk Enterprise (SE) is the company's core product (Mello, 2016). It provides log and event collection, search, and visualization with the Splunk query language (Mello, 2016). Splunk Enterprise Security (SES) provides security features including correlation rules, reports, and pre-defined dashboards (Mello, 2016). SES supports real-time monitoring and alerts, incident response, and compliance reporting (Mello, 2016). SE and SES can be deployed locally on-premises; in public, private, or hybrid clouds; or as a service using the cloud service models (Mello, 2016). Splunk acquired Caspida in 2015 (Mello, 2016; Tinsley, 2015), after which it added native behavioral analytics to its repertoire and provided support for third-party UEBA products (Mello, 2016). Additional features have been added to SES to integrate with other behavioral products, along with capabilities such as improved incident management and workflow, lower data storage requirements, better visualizations, and expanded monitoring of additional infrastructure and software-as-a-service providers (Mello, 2016).
Splunk has broad data ingestion capabilities, offering connectors to common data sources and allowing custom connectors (D. Sullivan, 2016). Splunk stores data in a schema-less database and builds indexes at ingestion, enabling it to handle various data types with rapid query response (D. Sullivan, 2016). Splunk provides a flexible SIEM platform which can handle various data sources and has the analytic capabilities of a single data analysis platform (Mello, 2016). Splunk was found to be gaining “significant” visibility across Gartner's client base (Mello, 2016). Splunk has strong advanced security analytics for combating advanced threats and insider threats (Mello, 2016).
Splunk has various advantages. In a review by (Taitingfong, 2015), Splunk was found to be flexible and extensible. It can ingest logs from disparate systems in disparate formats and file types (Murke, 2015; Taitingfong, 2015). Splunk was found to be flexible in parsing, formatting, and enhancing the data (Taitingfong, 2015). Splunk was found to scale very well in large environments, adding more indexers as needed as the environment expands (Kasliwal, 2015; Murke, 2015; Taitingfong, 2015). Splunk supports multi-site clustering and search head clustering, providing load balancing and redundancy (Taitingfong, 2015). In another review by (Murke, 2015), Splunk was found capable of real-time analysis; it provided the best results among its competitors, returned fast results on large datasets, and was easy to manage.
The Splunk system has limitations. The SES product provides only basic pre-defined correlations for user monitoring and reporting (Mello, 2016). The licensing model for high-volume data costs more than other SIEM products, although Splunk offers a new licensing scheme for high-volume data users (Mello, 2016). In recent reviews by (Kasliwal, 2015; Taitingfong, 2015), Splunk's search language, particularly for more advanced formatting or statistical analysis, was found to be very deep and to require a learning curve. Splunk's dashboards may require additional visualizations, which must be developed using Simple XML, JavaScript, and CSS (Taitingfong, 2015). Moreover, Splunk releases minor revisions very quickly due to the increased number of bugs (Taitingfong, 2015). In another review by (Murke, 2015), Splunk was found limited in providing optimized results with smaller data sizes. Moreover, it was found by (Kasliwal, 2015; Murke, 2015) to be costly.
LogRhythm
LogRhythm's SIEM supports an n-tier, scalable, decentralized architecture (Mello, 2016). It is composed of various tools such as the Platform Manager, AI Engine, Data Processors, Data Indexers, and Data Collectors (Mello, 2016). The deployment of LogRhythm can be consolidated as an all-in-one system, with implementation options in appliance, software, or virtual instance format (Mello, 2016). User and entity behavioral analytics, an integrated incident response workflow, and automated response capabilities can be combined with other capabilities such as event, endpoint, and network monitoring (Mello, 2016). The log processing and indexing capabilities of LogRhythm's SIEM are divided between two components of the system. Moreover, more features have been added, such as unstructured search capabilities through a new storage backend based on Elasticsearch (Mello, 2016). Further additions include clustered full data replication, more parsers for applications and protocols, improved risk-based prioritization (RBP), and support for Cloud services such as AWS, Box, and Okta. Integration with Cloud Access Security Broker solutions such as Microsoft's Cloud App Security and Zscaler has also been added to enhance LogRhythm's SIEM system (Mello, 2016).
LogRhythm has various advantages. LogRhythm can integrate advanced threat monitoring capabilities with SIEM (Mello, 2016). It offers effective out-of-the-box content and workflows which can help in the automation process (Mello, 2016). It provides highly interactive, customizable, and automated response capabilities for performing actions on remote devices (Mello, 2016). It is praised for offering straightforward SIEM implementation and maintenance (Mello, 2016). LogRhythm is also described as being very visible in the SIEM evaluations of Gartner's clients (Mello, 2016). In a review of LogRhythm by (Eng, 2016), it was described as a great SIEM which is easy to implement. Creating building blocks in LogRhythm is intuitive and easy, using a drag-and-drop technique that can be easily manipulated (Eng, 2016). It offers statistical building blocks with powerful anomaly detection capabilities which are found to be more difficult or not possible in other SIEM products (Eng, 2016). LogRhythm provides better event classification than any other SIEM product (Eng, 2016). In another review by (Ilbery, 2016), LogRhythm can import log files from hundreds of devices into one database, which is easy to search (Ilbery, 2016). It can also send alert messages about network activities via email. It provides a good view of the network equipment, traffic, and servers.
LogRhythm has some limitations. The included custom report engine requires more enhancement and improvement (Mello, 2016). In a review by (Eng, 2016), there is a need for LogRhythm to provide back-end support for threat intelligence lists. There is also a proposal by (Eng, 2016) for LogRhythm to replace the code with hash tables to avoid the excessive cost associated with referencing lists in a rule. The reporting of LogRhythm was described by (Eng, 2016) as the worst of all SIEM systems because it is not intuitive and needs improvement. In another review of LogRhythm by (Ilbery, 2016), the upgrade process was described as not easy.
Conclusion
This project discussed and analyzed scalable and intelligent security analytics tools from different vendors. The discussion began with an overview of SIEM and the essential capabilities of an analytics-driven SIEM. The first generation of SIEM was overwhelmed by the volume of data and the rise of advanced threats. The second generation of SIEM was introduced because Big Data tools have the potential to significantly enhance security intelligence by minimizing the time needed to correlate, consolidate, and contextualize diverse security event information, and to correlate long-term historical data for forensic purposes.
Examples of these security analytics tools include AlienVault, QRadar, Splunk, LogRhythm, FireEye, McAfee Enterprise Security Manager, and so forth. The discussion and the analysis were limited to four scalable and intelligent security analytics tools: AlienVault, QRadar, Splunk, and LogRhythm. The Unified Security Management (USM) of AlienVault includes SIEM, vulnerability assessment, asset discovery, flow and packet capture, network and host detection, and file integrity monitoring. The USM is used to improve security visibility throughout the organization and to detect security incidents in real time. IBM Security QRadar provides log and event management, and behavioral and reporting analysis for networks and applications. Splunk Enterprise Security enables searching the data and implementing visual correlations to identify malicious events and collect data about the context of those events. LogRhythm supports log management, has network forensics capabilities, and can be deployed in smaller environments. The discussion and the analysis included the key design considerations for the scalability of each tool. The project also discussed the advantages and disadvantages of these four intelligent security analytics tools.
References
CSA. (2013). Expanded Top Ten Big Data Security and Privacy Challenges. Cloud Security Alliance, Big Data Working Group.
Di Mauro, M., & Di Sarno, C. (2018). Improving SIEM capabilities through an enhanced probe for encrypted Skype traffic detection. Journal of Information Security and Applications, 38, 85-95.
McKelvey, N., Curran, K., Gordon, B., Devlin, E., & Johnston, K. (2015). Cloud computing and security in the future. In Guide to Security Assurance for Cloud Computing (pp. 95-108): Springer.
Nicolett, M., & Kavanagh, K. M. (2011). Magic Quadrant for Security Information and Event Management.
Oltsik, J. (2013). The Big Data Security Analytics Era Is Here.
Oprea, A., Li, Z., Yen, T.-F., Chin, S. H., & Alrwais, S. (2015). Detection of early-stage enterprise infection by mining large-scale log data. Paper presented at the Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on.
Pearson, S. (2013). Privacy, security and trust in cloud computing. In Privacy and Security for Cloud Computing (pp. 3-42): Springer.
Radford, C. J. (2014). Challenges and solutions protecting data within Amazon Web Services. Network Security, 2014(6), 5-8. doi:10.1016/S1353-4858(14)70058-3
Shamsolmoali, P., & Zareapoor, M. (2016). Data security model in cloud computing. Proceedings of the 2nd International Conference on Computer Science Networks and Information Technology, 27-28 Aug 2016, Montreal, Canada.
The purpose of this discussion is to identify two advantages of applying Big Data Analytics to solve known security issues such as malware, network hacking, spam, and so forth. The discussion and the analysis include the reasons and rationale for utilizing Big Data Analytics to solve security issues. The discussion begins with a brief overview of Big Data Analytics and the security threats.
Big Data Analytics
Big Data (BD) is a major topic across domains and fields such as management and marketing, scientific research, national security, and government (Vivekanand & Vidyavathi, 2015). BD enables making informed decisions as it shifts reasoning from logical and causality-based to the acknowledgment of correlation links between events (De Mauro, Greco, & Grimaldi, 2015). The public and private sectors are increasing their use of Big Data Analytics (BDA) in different areas (Vivekanand & Vidyavathi, 2015). The processing of very large amounts of data is the main benefit of Big Data Analytics (Emani, Cullot, & Nicolle, 2015). Big Data Analytics is defined in (Emani et al., 2015) as the use of advanced analytics techniques on Big Data. As elaborated by (Gupta & Jyoti, 2014), BDA is the process of analyzing Big Data to find hidden patterns, unknown correlations, and other useful information which can be extracted to make sound decisions. In (CSA, 2013), BDA is described as the process of analyzing and mining Big Data, which can produce operational and business knowledge at an unprecedented scale and specificity. The massive volume of semi-structured and unstructured data can be mined using BDA (Gandomi & Haider, 2015; Gupta & Jyoti, 2014). The need to analyze and leverage the trend data collected by organizations is one of the main drivers for BDA tools (CSA, 2013). The value of BDA increases over time as the cumulative cash flow it generates increases. Figure 1 illustrates the value of BDA along the dimensions of time and cumulative cash flow. Thus, there is no doubt that BDA provides great benefits to organizations.
Figure 1. The Value of Big Data Analytics. Adapted from
(Gupta & Jyoti, 2014).
Big Data Analytics for Security
BD is changing the analytics landscape (CSA, 2013). BDA can be leveraged to enhance information security and situational awareness (CSA, 2013). For instance, BDA can be utilized to analyze financial transactions, log files, and network traffic to identify anomalies and suspicious activities, and to aggregate multiple sources of information into a coherent view (CSA, 2013). Malicious attacks have been increasing lately; these increasing security threats come along with the increasing use of BD, BDA, and Cloud Computing technologies. Malicious attacks have become a major topic for government, organizations, and industry (Gupta & Jyoti, 2014). Big Data Security Analytics refers to the increasing practice of organizations gathering and analyzing security data to detect vulnerabilities and intrusions by attackers (Gupta & Jyoti, 2014). Advanced Persistent Threats (APTs) are a subset of malicious attacks: well-resourced and well-trained attackers who conduct multi-year intrusion campaigns targeting highly sensitive economic, proprietary, or national security information (Gupta & Jyoti, 2014). The aim of an APT is to maintain a persistent attack inside the target environment without being detected (Gupta & Jyoti, 2014).
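The anomaly-identification idea above can be sketched with a simple statistical baseline (the event counts and the threshold are invented for illustration; a real detector would use far richer features): hosts whose activity deviates sharply from the population mean are flagged for review.

```python
import statistics

# Hypothetical daily event counts per internal host (e.g. outbound connections)
event_counts = {
    "host-a": 102, "host-b": 98, "host-c": 110,
    "host-d": 95, "host-e": 870,   # host-e is far above the baseline
}

def flag_anomalies(counts, z_threshold=1.5):
    """Flag hosts whose count lies more than z_threshold std devs above the mean."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [h for h, c in counts.items() if stdev and (c - mean) / stdev > z_threshold]

print(flag_anomalies(event_counts))  # ['host-e']
```

In practice the baseline would be computed per host over time and combined with other signals, but the core operation, comparing observed activity against a learned norm, is the same.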
Thus, the main purpose of using BD techniques is to analyze the data and apply the results to implement enhanced data security techniques (Gupta & Jyoti, 2014). Big Data technologies enable a wide range of industries to develop affordable infrastructures for security monitoring (Cardenas, Manadhata, & Rajan, 2013). Organizations can use various systems with a range of Security Analytics Sources (SAS). These systems can generate messages or alerts and transmit them to a trusted server for analysis and action (Gupta & Jyoti, 2014). The system can be a Host-based Intrusion Detection System (HIDS) or an antivirus engine which writes to Syslog or reports events to a remote service such as a Security Information and Event Monitoring (SIEM) system (Gupta & Jyoti, 2014).
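The reporting pipeline described above, a host-based sensor emitting alerts toward a central collector, can be mocked with Python's standard logging module. A real deployment would attach a logging.handlers.SysLogHandler pointed at the SIEM collector; this self-contained sketch captures the formatted alert in a buffer instead.

```python
import io
import logging

# In production a syslog handler would forward to the SIEM collector, e.g.:
#   handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
# For this self-contained sketch, capture the record in a string buffer instead.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s hids %(message)s"))

hids_log = logging.getLogger("hids")
hids_log.setLevel(logging.WARNING)
hids_log.addHandler(handler)
hids_log.propagate = False   # keep the demo output out of the root logger

# A host-based sensor raising an alert for the central SIEM to correlate
hids_log.warning("integrity check failed on /etc/passwd (host=web-01)")

print(buffer.getvalue().strip())
```

Swapping the StreamHandler for a SysLogHandler is the only change needed to turn the sketch into an actual sensor-to-collector transmission.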
There are very good reasons for BD to enter the security domain. In (Gupta & Jyoti, 2014), three main reasons are given for BD to enter the enterprise security mainstream. The first reason is the continuing problem with threat detection and response, because the existing security analytics tools are inadequate to handle advanced viruses, malware, stealthy attack techniques, and the growing army of well-organized global cyber attacks. The second reason is Moore's Law and Open Source: security vendors are accelerating their development cycles by customizing open source tools like Cassandra, Hadoop, MapReduce, and Mahout for security analytics purposes, which can help accelerate innovation to protect systems from threats. The third reason is the large amount of activity on the supply side (Gupta & Jyoti, 2014). Organizations want security alerts from new vendors aside from HP, IBM, McAfee, and RSA Security. Some vendors such as Hexis Cyber Solutions, Leidos, Narus, and Palantir will move beyond the government and extend into the private sector. Others like Click Security, Forescale, and Netskope have intelligence backgrounds to deal with malicious attacks (Gupta & Jyoti, 2014).
Fraud detection is one of the most visible uses of BDA (Cardenas et al., 2013; CSA, 2013). Although credit card companies have conducted fraud detection for decades, the custom-built infrastructure to mine BD for fraud detection was not economical to adapt to other fraud detection uses (CSA, 2013). The off-the-shelf BD tools and techniques are now drawing attention to fraud detection analytics in healthcare, insurance, and other fields (CSA, 2013). Examples of using BD for security purposes include (1) Network Security, (2) Enterprise Event Analytics, (3) Netflow Monitoring to Identify Botnets, and (4) Advanced Persistent Threat (APT) Detection. The APT category has two subcategories: (1) Beehive: Behavior Profiling for APT Detection, and (2) Using Large-Scale Distributed Computing to Unveil APTs. For this discussion, Network Security and Netflow Monitoring to Identify Botnets are the two examples used to illustrate the advantages of BDA for security purposes (CSA, 2013).
Network Security
The case study by Zions Bancorporation is a good example of using BD for security purposes (Cardenas et al., 2013; CSA, 2013; McDaniel & Smith, 2013; Raja & Rabbani, 2014). The traditional SIEM could not handle the volume of the data generated for security purposes (Cardenas et al., 2013; CSA, 2013; McDaniel & Smith, 2013; Raja & Rabbani, 2014). Zions Bancorporation reported that using Hadoop clusters and business intelligence tools led to parsing more data, faster, than the traditional SIEM tools (Cardenas et al., 2013; CSA, 2013; McDaniel & Smith, 2013; Raja & Rabbani, 2014). While the traditional SIEM system takes between twenty minutes and an hour, the Hadoop system provides the result in about one minute (Cardenas et al., 2013; CSA, 2013; McDaniel & Smith, 2013; Raja & Rabbani, 2014). The system enables users to mine meaningful security information from sources such as firewalls and security devices, website traffic, business processes, and other transactions (Cardenas et al., 2013; CSA, 2013; McDaniel & Smith, 2013; Raja & Rabbani, 2014). The incorporation of unstructured data and multiple disparate datasets into a single analytical framework is one of the main promising features of BD (CSA, 2013; Raja & Rabbani, 2014).
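The Hadoop-style parsing in the Zions case can be illustrated with a minimal map/reduce pass in plain Python (invented firewall log lines, not the bank's data): the map step emits a key-value pair for each denied connection, and the reduce step aggregates counts per source address, just as a MapReduce job would do in parallel across a cluster.

```python
from collections import Counter
from itertools import chain

logs = [
    "DENY src=10.0.0.5 dst=172.16.0.1 port=445",
    "ALLOW src=10.0.0.7 dst=172.16.0.2 port=443",
    "DENY src=10.0.0.5 dst=172.16.0.3 port=3389",
    "DENY src=10.0.0.9 dst=172.16.0.1 port=445",
]

def map_phase(line):
    """Emit (src, 1) for every denied connection in a log line."""
    fields = dict(f.split("=") for f in line.split()[1:])
    if line.startswith("DENY"):
        yield (fields["src"], 1)

def reduce_phase(pairs):
    """Sum the counts per key, as a MapReduce reducer would."""
    totals = Counter()
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

deny_counts = reduce_phase(chain.from_iterable(map_phase(l) for l in logs))
print(deny_counts)  # {'10.0.0.5': 2, '10.0.0.9': 1}
```

The same two functions, distributed by Hadoop over many nodes, are what lets the cluster parse in a minute what a single SIEM server processes in an hour.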
Netflow Monitoring to Identify Botnets
Botnets are a major threat to the current Internet (Francois, Wang, Bronzi, State, & Engel, 2011). Botnet traffic is mixed with a large volume of benign traffic due to ubiquitous high-speed networks (Francois et al., 2011). These networks can be monitored using IP flow records; however, their forensic analysis forms the major computational bottleneck (Francois et al., 2011). The BotCloud research project by (Francois et al., 2011), leveraging Hadoop and MapReduce technology, is a good example of taking advantage of BDA for security purposes. In this project, a distributed computing framework leveraging a host dependency model and an adapted PageRank algorithm was proposed (Francois et al., 2011). Moreover, a Hadoop cluster running MapReduce was utilized to analyze the data and detect densely interconnected hosts which are potential botnet members. The large volume of Netflow data collected for analysis was the reason for using the MapReduce framework (CSA, 2013; Francois et al., 2011). The project showed good detection accuracy and good efficiency based on the Hadoop cluster.
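The core of the BotCloud approach, ranking hosts in a dependency graph so that densely interconnected hosts stand out, can be sketched as a single-machine PageRank iteration (a toy graph for illustration, not the project's actual distributed Hadoop job):

```python
# Toy host-dependency graph: edge A -> B means host A contacted host B.
graph = {
    "h1": ["h2", "h3"],
    "h2": ["h3"],
    "h3": ["h1"],
    "h4": ["h3"],        # h3 is heavily contacted -> should rank highest
}

def pagerank(graph, damping=0.85, iterations=50):
    """Iteratively compute PageRank; densely linked hosts score highest."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for node, targets in graph.items():
            share = damping * rank[node] / len(targets)
            for t in targets:
                new_rank[t] += share
        rank = new_rank
    return rank

ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # h3 receives the most inbound links
```

In BotCloud the rank update is expressed as MapReduce steps so the same iteration can run over flow graphs far too large for one machine; the mathematics, however, is exactly this loop.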
Conclusion
Big Data means big value for organizations at various levels, including security. BD is changing the analytics landscape. BDA can be leveraged to enhance information security and situational awareness to detect abnormal activities. For instance, BDA can be utilized to analyze financial transactions, log files, and network traffic to identify anomalies and suspicious activities, and to aggregate multiple sources of information into a coherent view. Organizations can benefit greatly from BDA tools such as Hadoop and MapReduce for security purposes. There are various reasons for using BD and BDA for security, as discussed above. In this discussion, Network Security and Netflow Monitoring to Identify Botnets were the two examples used to illustrate the advantages of BDA for security purposes.
References
Cardenas, A. A., Manadhata, P. K., & Rajan, S. P. (2013). Big data analytics for security. IEEE Security & Privacy, 11(6), 74-76.
CSA. (2013). Big Data Analytics for Security Intelligence. Cloud Security Alliance, Big Data Working Group.
De Mauro, A., Greco, M., & Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. Paper presented at the AIP Conference Proceedings.
Emani, C. K., Cullot, N., & Nicolle, C. (2015). Understandable big data: A survey. Computer Science Review, 17, 70-81.
Francois, J., Wang, S., Bronzi, W., State, R., & Engel, T. (2011). BotCloud: Detecting botnets using MapReduce. Paper presented at the Information Forensics and Security (WIFS), 2011 IEEE International Workshop on.
Gandomi, A., & Haider, M. (2015). Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 35(2), 137-144.
Gupta, B., & Jyoti, K. (2014). Big data analytics with Hadoop to analyze targeted attacks on enterprise data.
McDaniel, P., & Smith, S. (2013). Big Data Analytics for Security. The University of Texas at Dallas.
Raja, M. C., & Rabbani, M. A. (2014). Big Data analytics security issues in a data-driven information system.
Vivekanand, M., & Vidyavathi, B. M. (2015). Security challenges in Big Data: Review. International Journal of Advanced Research in Computer Science, 6(6).
The purpose of this discussion is to discuss and analyze two security issues associated with the Cloud Computing system. The analysis includes the causes for these two security issues and the solutions. The discussion begins with an overview of the Security Issues when dealing with Cloud Computing.
Security Issues Associated with Cloud Computing
Cloud Computing and Big Data are the current buzzwords in the IT industry. Cloud Computing not only helps solve the challenges of Big Data, but also offers benefits for businesses, organizations, and individuals such as:
Cost saving,
Access to data from anywhere at any time,
Pay per use like any utility,
Data storage,
Data processing,
Elasticity,
Energy efficiency,
Enhanced productivity, and more (Botta, de Donato, Persico, & Pescapé, 2016; Carutasu, Botezatu, Botezatu, & Pirnau, 2016; El-Gazzar, 2014).
Despite the tremendous benefits of Cloud Computing, this emerging technology is confronted with many challenges. The top challenge is security, which executives express as the number one concern for adopting Cloud Computing (Avram, 2014; Awadhi, Salah, & Martin, 2013; Chaturvedi & Zarger, 2015; Hashizume, Rosado, Fernández-Medina, & Fernandez, 2013; Pearson, 2013).
The security issues in the Cloud Computing environment are distinguished from the security issues of traditional distributed systems (Sakr & Gaber, 2014). Various research studies, in an attempt to explain this security challenge, point out that the underlying technologies of Cloud Computing, such as virtualization and SOA (Service-Oriented Architecture), have security issues of their own (Inukollu, Arsi, & Ravuri, 2014). Thus, the security issues associated with these technologies come along with Cloud Computing (Inukollu et al., 2014). The Cloud Computing Service Model of PaaS (Platform as a Service) is a good example because it is based on the SOA model; thus, PaaS inherits all of the security issues associated with SOA technology (Almorsy, Grundy, & Müller, 2016). In (Sakr & Gaber, 2014), factors such as multi-tenancy, trust asymmetry, global reach, and insider threats contribute to the security issues associated with the Cloud Computing environment. In (Tripathi & Mishra, 2011), eleven security issues and threats associated with the Cloud environment are identified: (1) VM-Level Attacks, (2) Abuse and Nefarious Use of Cloud Computing, (3) Loss of Governance, (4) Lock-In, (5) Insecure Interfaces and APIs, (6) Isolation Failure, (7) Data Loss or Leakage, (8) Account or Service Hijacking, (9) Management Interface Compromise, (10) Compliance Risks, and (11) Malicious Insiders. In the more recent report of (CSA, 2016), twelve critical Cloud security issues are identified and ranked in order of severity. Data Breaches are ranked at the top and regarded as the most severe security issue of the Cloud Computing environment. Weak Identity, Credential, and Access Management is the second most severe security issue, followed by Insecure APIs, System and Application Vulnerabilities, and Account Hijacking.
Table 1 lists the twelve security issues associated with the Cloud Computing as reported by (CSA, 2016).
Table 1. Top Twelve Security Issues of Cloud Computing in Order of Severity. Adapted from (CSA, 2016).
The discussion and analysis are limited to the top two security issues, which are Data Breaches and Weak Identity, Credential, and Access Management, and cover their causes and the proposed solutions.
Data Breaches
A data breach occurs when sensitive and confidential information, or any private data not intended for the public, is released, viewed, stolen, or used by unauthorized users (CSA, 2016). The data breach issue is not unique to the Cloud Computing environment (CSA, 2016); however, it consistently ranks as the top issue and concern for Cloud users. The Cloud environment is subject to the same threats as the traditional corporate network, plus new attack techniques due to the shared resources. The sensitivity of the data determines the extent of the damage, and the impact of a data breach on users and organizations can be devastating. For instance, in a single data breach incident in the USA, 40 million credit card numbers and about 70 million addresses, phone numbers, and other private and personal details were compromised (Soomro, Shah, & Ahmed, 2016). The firm spent $61 million on damages and recovery within a year of the breach, in addition to the cash loss and a profit drop of 46% in one quarter of the year (Soomro et al., 2016). The anti-virus firm “BitDefender” and the British telecom provider “TalkTalk” are other examples of data breaches. Private information, such as usernames and passwords of “BitDefender” customers, was stolen in mid-2015, and the hacker demanded a ransom of $15,000 (CSA, 2016; Fox-Brewster, 2015). Multiple security incidents in 2014 and 2015 were reported by “TalkTalk,” resulting in the theft of four million users' private information (CSA, 2016; Gibbs, 2015).
The organization is obliged to exercise certain security standards of care to ensure that sensitive information is not released to unauthorized users. The Cloud providers are responsible for certain aspects of the Cloud Computing environment, and they usually provide the security measures for these aspects. However, the Cloud users are responsible for other aspects and must take their own measures to protect their data in the Cloud. Multi-factor authentication and encryption are the two techniques proposed to secure the Cloud environment.
Insufficient Identity, Credential, and Access Management
Data breaches and malicious attacks happen for various reasons. The lack of scalable identity access management systems can cause a data breach (CSA, 2016). The failure to use multi-factor authentication, weak password use, and a lack of ongoing automated rotation of cryptographic keys, passwords, and certificates can also cause a data breach (CSA, 2016). Malicious attackers, masquerading as legitimate users or developers, can modify and delete data, issue control and management functions, snoop on data in transit, or release malicious software which appears to originate from a legitimate source. Insufficient identity, credential, or key management can allow these malicious attackers or non-authorized users to access private and sensitive data and cause catastrophic damage to users and organizations alike. The GitHub attacks and the Dell root certificate are good examples of this security issue. In the GitHub case, attackers scraped GitHub for Cloud service credentials and hijacked accounts to mine virtual currency (Sandvik, 2014). Dell is another example: it had to release a fix for a root certificate failure because all Dell systems used the same secret key and certificate, which enabled creating a certificate, trusted by Dell systems, for any domain (Schwartz, 2015).
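The GitHub incidents above stem from secrets embedded in source code. A common mitigation, sketched below with an illustrative variable name, is to resolve credentials from the environment at runtime and fail fast when they are absent, rather than fall back to a hardcoded default that could end up in a public repository.

```python
import os

def load_api_key(var_name="SERVICE_API_KEY"):
    """Fetch a secret from the environment instead of hardcoding it in source."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; refusing to fall back to an embedded default")
    return key

# Simulate a deployment environment supplying the secret out-of-band
os.environ["SERVICE_API_KEY"] = "example-not-a-real-key"
print(load_api_key())
```

Because the secret never appears in the codebase, scraping the repository, as the GitHub attackers did, yields nothing usable.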
These security issues require Cloud Computing systems to be protected so that unauthorized users cannot access private and sensitive information. Various solutions are proposed to solve this security issue of insufficient identity and access management. A security framework for distributed systems considering public key cryptography, software agents, and XML binding technologies was proposed in (Prakash & Darbari). Credentials and cryptographic keys should not be embedded in source code or distributed in public repositories such as GitHub. Keys should be properly secured using a well-secured public key infrastructure (PKI) to ensure sound key management (CSA, 2016). Identity Management Systems (IMS) should scale to handle lifecycle management for millions of users and cloud service providers (CSPs). The IMS should support immediate de-provisioning of access to resources when events such as job termination or role changes occur. A Multi-Factor Authentication System (MAS), such as a smart card or phone authentication, should be required for users and operators of the Cloud service (CSA, 2016).
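Phone-based multi-factor authentication commonly relies on time-based one-time passwords. The sketch below implements the core of RFC 6238 TOTP using only the Python standard library; the shared secret used here is the RFC's published test value, which a real deployment would of course never reuse.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1, 30-second steps)."""
    counter = struct.pack(">Q", unix_time // step)          # time-step counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T = 59 s
print(totp(b"12345678901234567890", unix_time=59, digits=8))  # 94287082
```

The server and the user's phone share the secret out-of-band, so both derive the same short-lived code; an attacker who steals only the password still cannot authenticate.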
References
Almorsy, M., Grundy, J., & Müller, I. (2016). An analysis of the cloud computing security problem. arXiv preprint arXiv:1609.01107.
Avram, M. G. (2014). Advantages and challenges of adopting cloud computing from an enterprise perspective. Procedia Technology, 12, 529-534. doi:10.1016/j.protcy.2013.12.525
Awadhi, E. A., Salah, K., & Martin, T. (2013). Assessing the security of the cloud environment. Paper presented at the GCC Conference and Exhibition (GCC), 2013 7th IEEE.
Botta, A., de Donato, W., Persico, V., & Pescapé, A. (2016). Integration of cloud computing and Internet of Things: A survey. Future Generation Computer Systems, 56, 684-700.
Carutasu, G., Botezatu, M., Botezatu, C., & Pirnau, M. (2016). Cloud Computing and Windows Azure.
Chaturvedi, D. A., & Zarger, S. A. (2015). A review of security models in cloud computing and an innovative approach. International Journal of Computer Trends and Technology (IJCTT), 30(2), 87-92.
CSA. (2016). The Treacherous 12: Cloud Computing Top Threats in 2016. Cloud Security Alliance Top Threats Working Group.
El-Gazzar, R. F. (2014). A literature review on cloud computing adoption issues in enterprises. Paper presented at the International Working Conference on Transfer and Diffusion of IT.
Hashizume, K., Rosado, D. G., Fernández-Medina, E., & Fernandez, E. B. (2013). An analysis of security issues for cloud computing. Journal of Internet Services and Applications, 4(1), 1-13. doi:10.1186/1869-0238-4-5
Inukollu, V. N., Arsi, S., & Ravuri, S. R. (2014). Security issues associated with big data in cloud computing. International Journal of Network Security & Its Applications, 6(3), 45.
Pearson, S. (2013). Privacy, security and trust in cloud computing. In Privacy and Security for Cloud Computing (pp. 3-42): Springer.
Prakash, V., & Darbari, M. A review on security issues in distributed systems.
Sakr, S., & Gaber, M. (2014). Large Scale and Big Data: Processing and Management: CRC Press.
Soomro, Z. A., Shah, M. H., & Ahmed, J. (2016). Information security management needs more holistic approach: A literature review. International Journal of Information Management, 36(2), 215-225.
Tripathi, A., & Mishra, A. (2011). Cloud computing security considerations. Paper presented at the 2011 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC).
The purpose of this discussion is to discuss and analyze two security issues associated with Big Data. The analysis includes the causes for these two security issues and the solutions. The discussion begins with an overview of the Security Issues when dealing with Big Data.
Security Issues Associated with Big Data
As indicated in (CSA & Big-Data-Working-Group, 2013), the velocity, volume, and variety characteristics of Big Data magnify its security and privacy issues. These include large-scale infrastructures in the Cloud, various data sources and formats, the acquisition of data using streaming techniques, and high-volume migration inside the Cloud (CSA & Big-Data-Working-Group, 2013). Thus, the traditional security techniques, which tended to address small-scale, static data, are inadequate for Big Data (CSA & Big-Data-Working-Group, 2013). Storing an organization's information, and that of its customers and patients, in a secure manner is not a trivial process, and it gets more complicated in a Big Data environment (Al-Kahtani, 2017). CSA identified the top ten Big Data security and privacy challenges, illustrated in the Big Data ecosystem in Figure 1, adapted from CSA. These ten security challenges fall into four main categories in Big Data ecosystems: (1) Infrastructure Security, (2) Data Privacy, (3) Data Management, and (4) Integrity and Reactive Security.
Figure 1. Top Ten Security Challenges in the Big Data Ecosystem, adapted from (Al-Kahtani, 2017).
Researchers, practitioners, and industry have exerted tremendous effort to address the security issues associated with Big Data. As indicated in (Arora & Bahuguna), securing Big Data is challenging due to two main vulnerabilities. The first is information leakage, which Big Data magnifies because of its high volume and velocity (Arora & Bahuguna). The second is the increased risk to privacy and to the prediction of people's behavior driven by the development of intelligent terminals (Arora & Bahuguna).
In (Al-Kahtani, 2017), the general security risks associated with Big Data environments are identified as six elements. The first risk is associated with the implementation of new technology, which can lead to the discovery of new vulnerabilities. The second is associated with open source tools, which can contain undocumented vulnerabilities such as backdoors and may lack update options. The third reflects the large attack surface of cluster nodes, which organizations are not prepared to monitor. The fourth reflects poor user authentication and weak remote access policies. The fifth is associated with organizations being unable to process large volumes of audit and access logs. The sixth is the lack of validation of data inputs, so that malicious input can become lost in the large volume of Big Data (Al-Kahtani, 2017).
With regard to the infrastructure, common attacks include false data injection, Denial of Service (DoS), worm and malware propagation, and botnet attacks (Al-Kahtani, 2017).
In (Khan et al., 2014), the security issues associated with Big Data are categorized into privacy, integrity, availability, confidentiality, and governance. Data leakage is a major privacy concern in Big Data. Data integrity is a particular challenge for large-scale collaborative analysis, where data changes frequently (Khan et al., 2014). Availability is critical when dealing with Big Data in the cloud; threats to data availability include Denial of Service (DoS) attacks and the difficulty of mitigating them. The confidentiality issue concerns protecting data from distortion and theft (Khan et al., 2014).
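Integrity checks of this kind are often implemented with cryptographic hashes. The following minimal Python sketch (illustrative only; the record format is invented) fingerprints a dataset at ingestion and re-checks it before a collaborative analysis run, so that any unauthorized modification is detected:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest serving as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Record a fingerprint when the dataset is ingested ...
original = b"patient_id,reading\n1001,97.6\n1002,98.1\n"
stored_digest = fingerprint(original)

# ... and re-check it before each analysis run.
received = b"patient_id,reading\n1001,97.6\n1002,98.1\n"
tampered = b"patient_id,reading\n1001,97.6\n1002,99.9\n"

assert fingerprint(received) == stored_digest   # unmodified copy passes
assert fingerprint(tampered) != stored_digest   # modification detected
```

In a large-scale setting the same idea is applied per chunk or per partition, so that frequently changing data only requires re-hashing the parts that changed.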
In (Mehta, 2017), the security issues associated with Big Data involve granular access control, real-time monitoring, granular audits, privacy preservation in data mining and analytics, encrypted data-centric security, data provenance and verification, and integrity and reactive security. These security issues are similar to the ones discussed in (CSA & Big-Data-Working-Group, 2013; Sahafizadeh & Nematbakhsh, 2015; Yadav, 2016).
For this discussion, two security issues associated with Big Data are discussed and analyzed, along with proposed solutions to overcome them. These two security issues fall under the Integrity and Reactive Security category of (CSA & Big-Data-Working-Group, 2013): (1) End-Point Validation and Filtering, and (2) Real-Time Security Monitoring. In (Demchenko, Ngo, de Laat, Membrey, & Gordijenko, 2013-15), End-Point Validation and Filtering is categorized under Infrastructure Security, while Real-Time Security Monitoring is categorized under Data Management.
End-Point Validation and Filtering Security Issue and Proposed Solutions
End-points are the main components for Big Data collection (Yadav, 2016). They provide input data for storage and processing. Security is very important to ensure that only authentic end-points are used and that the network is free from other end-points, including malicious ones (Yadav, 2016). Data collected from various sources, including end-point devices, is required when dealing with Big Data (CSA & Big-Data-Working-Group, 2013). A security information and event management (SIEM) system, which collects logs from millions of software applications and hardware devices across an enterprise or organization network, is one example. Input validation and filtering during this data collection are challenging and critical to the integrity and trustworthiness of the data because of the threat of untrusted sources, especially under the "bring-your-own-device" (BYOD) model, which allows employees to bring their own devices into the workplace (CSA & Big-Data-Working-Group, 2013). There are four threat models for the validation and filtering security issue. In the first, a malicious attacker tampers with a device from which data is collected and retrieved, such as a smartphone, with the aim of providing malicious input to a central data collection system. In the second, the attacker performs ID cloning attacks, such as Sybil attacks, on the collected data, providing malicious input under faked identities. In the third, the attacker manipulates the input sources of sensed data. In the fourth, the attacker compromises data in transmission from a benign source to the central collection system, for example by performing a man-in-the-middle attack or a replay attack (CSA & Big-Data-Working-Group, 2013).
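The fourth threat model, compromise of data in transit via man-in-the-middle or replay attacks, is commonly countered with message authentication plus a freshness check. The sketch below is an illustration under assumed names and message formats, not a method from the cited sources: each message carries an HMAC over a nonce plus payload, and the collector rejects bad tags and repeated nonces.

```python
import hashlib
import hmac
import os

SECRET = os.urandom(32)   # key assumed to be shared by sensor and collector
seen_nonces = set()       # nonces already accepted (replay detection)

def sign(payload: bytes, nonce: bytes) -> bytes:
    """Compute the message authentication tag for a payload and nonce."""
    return hmac.new(SECRET, nonce + payload, hashlib.sha256).digest()

def accept(payload: bytes, nonce: bytes, tag: bytes) -> bool:
    """Accept a message only if its MAC verifies and its nonce is fresh."""
    if not hmac.compare_digest(tag, sign(payload, nonce)):
        return False      # tampered in transit (man-in-the-middle)
    if nonce in seen_nonces:
        return False      # replayed message
    seen_nonces.add(nonce)
    return True

nonce = os.urandom(16)
msg = b'{"sensor": 7, "temp": 21.4}'
tag = sign(msg, nonce)

assert accept(msg, nonce, tag) is True    # first delivery accepted
assert accept(msg, nonce, tag) is False   # replay of same nonce rejected
assert accept(b'{"temp": 99}', os.urandom(16), tag) is False  # tampering rejected
```

This only addresses transit integrity and freshness; it does not help against a tampered device that signs malicious data legitimately, which is why the source also discusses tamper-proof software and TPMs.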
A use case scenario for this issue is data retrieved from weather sensors and feedback votes sent by smartphone applications (e.g., iPhone or Android), which face a similar validation and filtering problem (CSA & Big-Data-Working-Group, 2013). The validation and filtering issue in this example gets further complicated as the volume of collected data increases (CSA & Big-Data-Working-Group, 2013). An algorithm is required to validate the input over large data sets and to filter out any malicious or untrusted data (CSA & Big-Data-Working-Group, 2013).
The solutions to the validation security issue fall into two categories. The first is to prevent a malicious attacker from generating and sending malicious input to the central collection system (CSA & Big-Data-Working-Group, 2013). The second is to detect and filter malicious input at the central system in case the attacker succeeds in sending malicious data to it (CSA & Big-Data-Working-Group, 2013).
The first solution, preventing malicious attacks, requires tamper-proof software and defenses against Sybil attacks. Researchers and industry have exerted tremendous effort to design and implement tamper-proof secure software and tools. Security for PC-based platforms and applications has been widely studied; however, mobile device and application security is still an active area of research (CSA & Big-Data-Working-Group, 2013). Thus, a determined malicious attacker may succeed in tampering with mobile devices. The Trusted Platform Module (TPM) was proposed to ensure the integrity of raw sensor data and of data derived from it (CSA & Big-Data-Working-Group, 2013). However, the TPM is not universally available on mobile devices, so a malicious attacker can still manipulate sensor input such as GPS signals (CSA & Big-Data-Working-Group, 2013).
Various defense techniques against fake identities created through ID cloning and Sybil attacks have been proposed in domains such as P2P (Peer-To-Peer) systems, Recommender Systems (RS), Vehicular Networks, and Wireless Sensor Networks (CSA & Big-Data-Working-Group, 2013). Many of these techniques rely on trusted certificates and trusted devices to prevent Sybil attacks. However, in large enterprises and organizations with millions of entities, certificate management becomes an additional challenge. Thus, resource-testing solutions have been proposed to provide a minimal defense that discourages Sybil attacks rather than preventing them (CSA & Big-Data-Working-Group, 2013).
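Resource testing can be illustrated with a small computational puzzle: each new identity must spend CPU time solving a challenge before being accepted, making mass creation of Sybil identities expensive. The following is a minimal sketch with an invented difficulty parameter and registration flow, not a scheme from the cited sources:

```python
import hashlib
import itertools
import os

DIFFICULTY = 2  # leading zero bytes required; higher means costlier per identity

def solve(challenge: bytes) -> int:
    """Search for a nonce whose hash with the challenge starts with zero bytes."""
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if digest[:DIFFICULTY] == b"\x00" * DIFFICULTY:
            return nonce

def verify(challenge: bytes, nonce: int) -> bool:
    """Cheaply check that the claimed nonce really solves the puzzle."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return digest[:DIFFICULTY] == b"\x00" * DIFFICULTY

challenge = os.urandom(16)   # issued by the central collection system
nonce = solve(challenge)     # each identity pays this cost once to register
assert verify(challenge, nonce)
```

Verification is a single hash, so the burden falls on the party creating identities, which is exactly the asymmetry a Sybil-discouraging defense needs.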
Big Data analytical techniques can be used to detect and filter malicious input at the central collection system. Malicious input from an attacker may appear as outliers; thus, statistical analysis and outlier detection techniques can be used to detect and filter out the malicious input (CSA & Big-Data-Working-Group, 2013).
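As a concrete illustration of such outlier filtering, the sketch below applies a simple z-score test to a batch of sensor readings. The threshold and data are illustrative; a production system would use more robust statistics than a single mean and standard deviation.

```python
import statistics

def filter_outliers(readings, z_threshold=3.0):
    """Split readings into (accepted, flagged) using a simple z-score test."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    accepted, flagged = [], []
    for r in readings:
        z = abs(r - mean) / stdev if stdev else 0.0
        (flagged if z > z_threshold else accepted).append(r)
    return accepted, flagged

# 99 benign temperature readings plus one injected extreme value
readings = [20.0 + 0.1 * (i % 10) for i in range(99)] + [500.0]
accepted, flagged = filter_outliers(readings)
assert flagged == [500.0]
```

Note the limitation the source implies: an attacker who injects values close to the benign distribution will not appear as an outlier, so this detection layer complements, rather than replaces, prevention at the end-point.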
Real-time Security Monitoring
Real-Time Security Monitoring is described as one of the most challenging Big Data Analytics issues (CSA & Big-Data-Working-Group, 2013; Sakr & Gaber, 2014). It is a two-dimensional issue, involving both the monitoring of the Big Data infrastructure itself and the use of that same infrastructure for Big Data Analytics. Monitoring the performance and health of the nodes in the Big Data infrastructure is an example of the first dimension. A good example of the second is a healthcare provider using monitoring tools to look for fraudulent claims, gaining better real-time alerting and compliance monitoring (CSA & Big-Data-Working-Group, 2013). Real-Time Security Monitoring is challenging because security devices generate alerts that can produce a massive number of false positives, which are often ignored due to the limited human capacity for analysis. The problem becomes even more challenging with Big Data because of the volume and velocity of the data streams. However, Big Data technologies also provide an opportunity to process and analyze different types of data rapidly and to perform real-time anomaly detection based on scalable security analytics (CSA & Big-Data-Working-Group, 2013).
A use case scenario for this issue is the health industry, where monitoring reduces fraud related to claims (CSA & Big-Data-Working-Group, 2013). Moreover, the stored data are extremely sensitive, must comply with patient privacy regulations such as HIPAA, and must be carefully protected. Real-time detection of anomalous retrieval of patients' private information enables the healthcare provider to rapidly repair damage and prevent further misuse (CSA & Big-Data-Working-Group, 2013).
Securing the Big Data infrastructure and platform is itself a requirement for Real-Time Security Monitoring. Big Data infrastructure threats include (1) rogue administrator access to applications or nodes, (2) web application threats, and (3) eavesdropping on the line. Thus, security for the Big Data ecosystem and infrastructure must cover each component as well as the integration of these components. For instance, when using a Hadoop cluster in a public cloud, Big Data security should include the security of the public cloud, the security of the Hadoop cluster and all nodes in the cluster, the security of the monitoring applications, and the security of the input sources such as devices and sensors (CSA & Big-Data-Working-Group, 2013). The threats also include attacks on the Big Data Analytics tools that are used to identify malicious attacks. For instance, evasion attacks can be used to avoid detection, and data-poisoning attacks can be used to undermine the trustworthiness and integrity of the datasets used to train Big Data Analytics algorithms (CSA & Big-Data-Working-Group, 2013).
Moreover, barriers such as legal regulations become important when dealing with the Real-Time Security Monitoring challenges of Big Data. Big Data Analytics (BDA) can be employed to monitor anomalous connections to the cluster environment and to mine logging events for suspicious activities (CSA & Big-Data-Working-Group, 2013). When dealing with Real-Time Security Monitoring, various technical, legal, and ethical factors must be taken into consideration (CSA & Big-Data-Working-Group, 2013).
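Mining logging events for anomalous connections can be as simple as counting failed connection attempts per source and flagging sources that exceed a threshold. The sketch below assumes an invented log format and cutoff, purely for illustration of the idea:

```python
from collections import Counter

FAIL_THRESHOLD = 3  # illustrative cutoff; real deployments tune this per environment

def suspicious_sources(log_lines, threshold=FAIL_THRESHOLD):
    """Flag source IPs whose failed-connection count exceeds the threshold."""
    failures = Counter()
    for line in log_lines:
        # assumed log format: "<timestamp> <ip> <CONNECT|FAIL>"
        _, ip, status = line.split()
        if status == "FAIL":
            failures[ip] += 1
    return {ip for ip, count in failures.items() if count > threshold}

log = (
    ["t1 10.0.0.5 CONNECT", "t2 10.0.0.5 FAIL"]
    + [f"t{i} 10.0.0.99 FAIL" for i in range(3, 9)]  # 6 failures from one source
)
assert suspicious_sources(log) == {"10.0.0.99"}
```

At Big Data scale, the same counting pattern is typically distributed (e.g., as a streaming aggregation over the cluster's logs) rather than run on a single machine, which is exactly where the BDA tools mentioned above come in.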
Conclusion
This discussion focused on two security issues associated with Big Data. The discussion and analysis included the causes of these two security issues and their solutions, beginning with an overview of the security issues that arise when dealing with Big Data. The categories of threats are described not only by (CSA & Big-Data-Working-Group, 2013) but also by other researchers. CSA identified the top ten challenges when dealing with Big Data, and various researchers have likewise identified the associated threats and security challenges. Some of these challenges include secure computations in distributed programming frameworks, the security of data storage and transaction logs, End-Point input validation and filtering, and Real-Time Security Monitoring. The two security issues chosen for this discussion are End-Point input validation and filtering, and Real-Time Security Monitoring. Various solutions have been proposed to reduce and prevent the attacks and threats that arise when dealing with Big Data. However, there is no perfect solution yet for these threats and security issues, due to the nature of Big Data and the fact that mobile device security is still an active research area.
References
Al-Kahtani, M. S. (2017). Security and Privacy in Big Data. International Journal of Computer Engineering and Information Technology, 9(2).
Arora, M., & Bahuguna, H. Big Data Security–The Big Challenge.
CSA, & Big-Data-Working-Group. (2013). Expanded Top Ten Big Data Security and Privacy Challenges. Cloud Security Alliance.
Demchenko, Y., Ngo, C., de Laat, C., Membrey, P., & Gordijenko, D. (2013-15). Big security for big data: addressing security challenges for the big data infrastructure. Paper presented at the Workshop on Secure Data Management.
Khan, N., Yaqoob, I., Hashem, I. A. T., Inayat, Z., Mahmoud Ali, W. K., Alam, M., . . . Gani, A. (2014). Big Data: Survey, Technologies, Opportunities, and Challenges. The Scientific World Journal, 2014.
Mehta, T. M. P. M. P. (2017). Security and Privacy–A Big Concern in Big Data A Case Study on Tracking and Monitoring System.
Sahafizadeh, E., & Nematbakhsh, M. A. (2015). A Survey on Security Issues in Big Data and NoSQL. Int'l J. Advances in Computer Science, 4(4), 2322-5157.
Sakr, S., & Gaber, M. (2014). Large Scale and big data: Processing and Management: CRC Press.