All posts by Steven M. Leath

Software Engineer/Security Engineer with over a decade of experience in a variety of industries. Owner of Clevr Software and founding member of GorillaInfoSec.

The future of Cybersecurity Technology and Policy (IoT)





This paper addresses emerging cybersecurity technologies, primarily those related to the Internet of Things (IoT), and how these new technologies offer hope for change and innovation in the field.  It also looks at government policy, which has lagged in its ability to step in and catch up with the dynamic change in technology and cybersecurity.  Understanding the technology and satisfying the initial need are two completely different things.  Finally, we look at how the government policy being used in cases against a hotel company and a mobile device vendor is taking a toll on IoT innovation in this field.

Countering cyber-attacks at all levels

One of the fastest growing areas in technology is the Internet of Things (IoT).  IoT is a very broad area; it ultimately encompasses everything that is connected.  In fact, (Forbes & Morgan, 2014) says, “that by 2020 there will be over 26 billion connected devices… That’s a lot of connections (some even estimate this number to be much higher, over 100 billion)”.  For as many attempts as there have been to define IoT, there wasn’t a great definition until the past year.  (Gartner Research, n.d.) defined it by saying, “The Internet of Things (IoT) is the network of physical objects that contain embedded technology to communicate and sense or interact with their internal states or the external environment.”  Forbes went to greater lengths to simplify IoT: “Simply put, this is the concept of basically connecting any device with an on and off switch to the Internet (and/or to each other).”  This includes, but is not limited to, smartphones, smart electrical grids, toasters, Fitbits, and other wearables, to show the range we’re discussing.  Much like the definition, which can be slightly vague, cybersecurity policy and mitigation are also heavily undefined in this area.  The upside of IoT is that it reduces human involvement while improving accuracy and efficiency, resulting in economic benefit (CHALLA et al., 2017, p. xx). According to the Institute of Electrical and Electronics Engineers (IEEE), there are emerging technologies that show positive signs of hope in this fast-growing area: application authentication and key management practices, computed trust nodes, and lightweight security protocols for cloud-based IoT applications on battery-limited mobile devices.


Benefits to Cybersecurity

Each of these emerging technologies offers a different approach to establishing a level of trust in cybersecurity.  One emphasizes a solution built around a secure authenticated key establishment scheme, another improves on a trust system through the creation of trusted nodes within a network, and the last dives deeper into a lightweight protocol concentrating on cloud-based cybersecurity.

Signature Based Authenticated Key Establishment Scheme

The basic premise for this new technology is that IoT as a concept has a high potential for security and privacy failures, largely due to the inability to establish security at the design level for each connected object.  This is where most of the security challenges come into play.  The key contributing features that make this a very promising emerging methodology or practice are:

  • An authentication model for IoT to follow. This model defines a form of mutual authentication, where a user authenticates through a gateway node and the IoT device authenticates through the gateway node as well.  Through this mutual authentication, users are then authenticated on the IoT device by proxy.
  • A secure signature-based authentication and key agreement scheme. A legal user can access information from a sensing device in an IoT application only if both mutually authenticate each other (CHALLA et al., 2017, p. xx). After their mutual authentication, a secret session key is established between them for future communication.
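As a rough illustration of gateway-brokered mutual authentication, the sketch below uses HMAC challenge-response over pre-shared keys.  This is not the CHALLA et al. scheme (which relies on signatures, smart cards, and biometrics); the `Gateway` class, its key storage, and the session-key derivation are simplified assumptions for illustration only.

```python
import hmac, hashlib, secrets

def hmac_tag(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

class Gateway:
    """Gateway node holding pre-shared keys for registered users and devices."""
    def __init__(self):
        self.user_keys = {}
        self.device_keys = {}

    def authenticate(self, user_id, user_proof, device_id, device_proof, nonce):
        # Verify both parties against the same fresh nonce; only if BOTH
        # checks pass is a session key derived (mutual authentication).
        ok_user = hmac.compare_digest(user_proof, hmac_tag(self.user_keys[user_id], nonce))
        ok_dev = hmac.compare_digest(device_proof, hmac_tag(self.device_keys[device_id], nonce))
        if not (ok_user and ok_dev):
            return None
        # Session key for subsequent direct user <-> device communication.
        return hmac_tag(self.user_keys[user_id] + self.device_keys[device_id], b"session" + nonce)

# Usage: register a user and a sensing device, then authenticate by proxy.
gw = Gateway()
gw.user_keys["alice"] = secrets.token_bytes(32)
gw.device_keys["sensor1"] = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
sk = gw.authenticate("alice", hmac_tag(gw.user_keys["alice"], nonce),
                     "sensor1", hmac_tag(gw.device_keys["sensor1"], nonce), nonce)
assert sk is not None   # both proofs valid, session key established
```

If either proof fails, `authenticate` returns `None` and no session key exists, which captures the by-proxy property: the user never talks to the device until the gateway has vouched for both sides.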

The wide use of this technical methodology has proven very efficient in communication and computational costs, which helps solve the problem of identity on IoT devices.  The proposed scheme also protects itself from replay attacks by using random number generators as well as current timestamps.  The assumption is that all users in the IoT environment have synchronized clocks.  There are eight phases to implementation:

  • System setup
  • Sensing device registration
  • User registration
  • Login
  • Authentication and key agreement
  • Password and biometric update
  • Smart card revocation
  • Dynamic sensing device addition
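The replay protection mentioned above (random nonces plus synchronized timestamps) can be sketched as a small guard that rejects stale or repeated messages.  The `ReplayGuard` class and the 30-second skew window are illustrative assumptions, not details of the published scheme.

```python
import time, secrets

class ReplayGuard:
    """Rejects a message whose nonce was already seen or whose
    timestamp falls outside the allowed clock-skew window."""
    def __init__(self, max_skew_seconds=30):
        self.max_skew = max_skew_seconds
        self.seen_nonces = set()

    def accept(self, nonce: bytes, timestamp: float, now=None) -> bool:
        now = time.time() if now is None else now
        if abs(now - timestamp) > self.max_skew:   # stale or future-dated
            return False
        if nonce in self.seen_nonces:              # replayed message
            return False
        self.seen_nonces.add(nonce)
        return True

guard = ReplayGuard()
n = secrets.token_bytes(16)
t = time.time()
print(guard.accept(n, t))   # True: fresh nonce, current timestamp
print(guard.accept(n, t))   # False: same nonce replayed
```

This is why the scheme assumes synchronized clocks: if a device's clock drifts past the skew window, even legitimate messages are rejected.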

This new best practice can be applied to many different industries with regard to IoT, much like the cybersecurity frameworks established by NIST for its categorizations of authentication in web-based applications.  It could potentially be incorporated to help satisfy some of the “reasonable security measures” that the FTC, a government agency, has been known to uphold; more on this later in the paper.  Establishing standard frameworks for cybersecurity in IoT may allow businesses that are on the fence about moving to this technology to start implementing it and eventually start innovating in the area.


Optimal Trust System Placement in SCADA Networks

Privacy and trust are also a large concern for the US smart grid system, mainly because the smart grid network itself depends heavily on information and communication technology (ICT).  Supervisory control and data acquisition (SCADA) systems are an integral part of the modern smart grid; their primary function is carrying control messages and measurements.  The system is currently in its fourth generation of architecture, which introduced two key advanced technologies (Hasan & Mouftah, 2016, p. xx): cloud computing and IoT, which make the smart grid more susceptible to complete outage.  Slight modifications of these systems may cause a complete outage across the entire grid.  Smart grid operators use trust systems to monitor network traffic to and from different nodes, called trust nodes.  The nodes themselves include both a firewall and an intrusion detection system.  In deciding which nodes are best for deploying these trust systems, two factors need to be considered: capital expenditures and operational expenditures (Hasan & Mouftah, 2016, p. xx).  Due to budgetary constraints, only a fixed number of trust systems can be deployed, and the SCADA network needs to be segmented to minimize cyber-attack traffic and make the trust nodes more effective.  There are three main types of attacks that put the current SCADA network at risk:

  • Targets power plants, disrupting operation or generation.
  • Targets power distribution and control systems, disrupting state information in ways that may lead to instability.
  • Targets consumer premises, potentially causing an increment in load that could damage the grid.

The focus of this emerging technology is the optimal placement of trust nodes on the SCADA network.  The solution is an algorithm in which minimum spanning trees (MSTs) represent the smaller segments; it then determines the least expensive way to form these segments and deploy the trust systems to the trust nodes, thereby segmenting the electrical grid enough to protect it from cyberattacks in the most cost-efficient way possible.  The technology directly affects not only the US smart grid and its efficiency; on a local level, the algorithm can be applied to other industries where cost is an issue, possibly automotive and factory-related industries with large systems that need to be segmented for better protection.  With the high priority on moving toward smaller microgrids, this technology is essential, and the energy industry globally should be able to benefit from it.
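A minimal sketch of the idea, assuming Kruskal's algorithm for the MST and a simple greedy placement of trust systems at the busiest MST nodes under a budget.  The paper's actual optimization is more involved, so treat the edge weights, the degree-based placement heuristic, and the budget model as illustrative assumptions only.

```python
def kruskal_mst(n, edges):
    """edges: (cost, u, v) tuples. Returns (total_cost, chosen_edges) of a
    minimum spanning tree: the cheapest way to keep all n nodes connected."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    total, chosen = 0, []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            total += cost
            chosen.append((u, v))
    return total, chosen

def place_trust_systems(n, mst_edges, budget):
    """Greedy sketch: put trust systems (firewall + IDS) at the
    highest-degree MST nodes until the budget runs out."""
    degree = [0] * n
    for u, v in mst_edges:
        degree[u] += 1
        degree[v] += 1
    return sorted(range(n), key=lambda x: -degree[x])[:budget]

# Tiny 4-node grid segment with link costs (capital + operational expense).
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
cost, tree = kruskal_mst(4, edges)
print(cost)                               # 6: edges of cost 1, 2, and 3
print(place_trust_systems(4, tree, 1))    # [2]: the hub node of the MST
```

The MST keeps every node reachable at minimum link cost, and the hub nodes of that tree are natural choke points where a single trust system sees the most traffic.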

CP-ABE Scheme for Mobile Devices

The last emerging technology is a CP-ABE scheme for battery-limited mobile devices.  In the IoT world, many new applications center on one device in particular: the smartphone.  The ability to create secure applications is a must.  This emerging technology focuses on the encryption mechanisms of Ciphertext-Policy Attribute-Based Encryption (CP-ABE).  The problem is that most CP-ABE schemes are based on bilinear maps, require long decryption keys and ciphertexts, and incur significant computational costs (Odelu, Das, Khurram Khan, Choo, & Jo, 2017, p. xx).  These limitations prevent CP-ABE schemes from being deployed on battery-limited mobile devices.  The new emerging technology is an RSA-based CP-ABE with a constant-length secret key.  Key decryption and encryption run in O(1) time complexity, which is groundbreaking, as other solutions have failed to be this efficient until now.

CP-ABE has been around for years, but the efficiency this new method brings makes it applicable to modern IoT technologies, primarily but not limited to the smartphone.  The implementation of RSA-based CP-ABE is broken down into four main algorithms:

  • Setup – This algorithm takes a security parameter and the universe of attributes as inputs, and outputs a master public key and its corresponding master secret key.
  • Encrypt – This algorithm takes an access policy, the master public key, and plaintext as inputs, and outputs a ciphertext.
  • KeyGen – The inputs are an attribute set, the master public key, and the master secret key. The key generation then outputs a user secret key corresponding to the attributes.
  • Decrypt – This algorithm takes a ciphertext generated with an access policy, the master public key, and the user secret key, and outputs plaintext (Odelu, Das, Khurram Khan, Choo, & Jo, 2017, p. xx).
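To make the four interfaces concrete, here is a deliberately toy Python sketch.  It is not RSA-based CP-ABE: the “encryption” is a hash-derived XOR stream, the policy is an exact attribute-set match, and (unlike real CP-ABE, where encrypting requires only public parameters) this stand-in passes the master secret to `encrypt`.  It only illustrates the Setup/KeyGen/Encrypt/Decrypt shape.

```python
import hashlib, secrets

def _kdf(master_secret: bytes, attrs: frozenset) -> bytes:
    # Derive a key from the master secret and a sorted attribute set.
    material = master_secret + ",".join(sorted(attrs)).encode()
    return hashlib.sha256(material).digest()

def setup():
    """Setup: output a (toy) master public key and master secret key."""
    msk = secrets.token_bytes(32)
    mpk = hashlib.sha256(msk).hexdigest()   # placeholder public parameter
    return mpk, msk

def keygen(msk, attrs):
    """KeyGen: a user secret key bound to the user's attribute set."""
    return _kdf(msk, frozenset(attrs))

def encrypt(mpk, msk, policy_attrs, plaintext: bytes):
    """Encrypt: ciphertext recoverable only by keys whose attributes
    satisfy the policy (here simplified to an exact attribute match)."""
    key = _kdf(msk, frozenset(policy_attrs))
    stream = hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1)
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt(ciphertext, user_key):
    """Decrypt: succeeds only when the user key matches the policy key."""
    stream = hashlib.sha256(user_key).digest() * (len(ciphertext) // 32 + 1)
    return bytes(c ^ s for c, s in zip(ciphertext, stream))

mpk, msk = setup()
sk = keygen(msk, {"doctor", "cardiology"})
ct = encrypt(mpk, msk, {"doctor", "cardiology"}, b"patient record")
print(decrypt(ct, sk))   # b'patient record'
```

A key generated for a different attribute set derives a different XOR stream, so decryption yields garbage rather than the plaintext, mimicking how a real CP-ABE key that fails the policy cannot recover the message.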

Real-world usage for this kind of technology isn’t limited to mobile phones.  Since this is an attribute-based encryption system, it can be used almost anywhere attribute-based encryption is used, including token-based authentication with JSON Web Tokens and the creation of JWEs (encrypted JSON Web Tokens), which are used in OAuth systems all over the internet in almost every authenticated application.  JSON Web Tokens are already used as an attribute-based system; instead of attributes, the RFC calls them claims.  Claims are encoded and sent with a token to the user trying to authenticate.  The claims are then evaluated, and the user is given a long-lived token for subsequent requests until the token expires.  This creates a stateless session for any web application user experience.  OAuth is a security framework that is widely used to authenticate a user across multiple services.  With the emergence of this new technology, businesses will be able to use this RSA-based system much like current systems use claims in JWTs.  The entire online web community will be able to take advantage of this emerging technology in the coming years.
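The claims-based flow described above can be sketched with a minimal HS256 JWT built from the standard library alone.  Real deployments should use a vetted JWT library; the helper names and the example claims here are illustrative.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> bytes:
    # JWTs use unpadded base64url encoding (RFC 7515).
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(claims, secret: bytes) -> bytes:
    """Build an HS256 JWT: header.claims.signature, each base64url-encoded."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return signing_input + b"." + sig

def verify_jwt(token: bytes, secret: bytes):
    """Return the claims dict, or None if the signature or expiry fails."""
    header, payload, sig = token.split(b".")
    expected = b64url(hmac.new(secret, header + b"." + payload, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload + b"=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():   # long-lived token has expired
        return None
    return claims

secret = b"server-side-secret"
token = make_jwt({"sub": "alice", "role": "admin", "exp": time.time() + 3600}, secret)
print(verify_jwt(token, secret)["sub"])   # alice
```

Because the claims travel inside the signed token, the server needs no session store: every request re-verifies the signature and expiry, which is the stateless session property described above.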

Federal Government Nurturing the Technologies

Cooperative efforts between the government community and the technology community are needed when discussing new technology concepts such as IoT.  There is still a lot of work to be done.  A good place to start would be the Federal Trade Commission (FTC).  Its governing Act contains a requirement for “reasonable security measures,” which the agency uses to regulate unfairness.  (IEEE & Loza de Siles, n.d.) says, “Under the Act, this agency regulates conduct involving the Internet and otherwise as that conduct relates to consumers and competition.”  In this Act, there are three main components that categorize unfair or deceptive acts:

  • The act or practice results in substantial consumer injury
  • The consumer cannot reasonably avoid that injury
  • The harm caused by the act or practice is not outweighed by countervailing benefits to consumers or to competition.

An actor’s unfair act or practice need not be the sole cause of consumer injury for the actor to be liable under the Act (IEEE & Loza de Siles, n.d.).  The FTC prosecuted several Wyndham companies for unfair acts or practices concerning cybersecurity risks to hotel guests’ personal information, where hackers ended up exploiting those risks on three separate occasions, injuring 619,000 consumers.  (IEEE & Loza de Siles, n.d.) continues, “Under the FTC’s unfairness authority, IoT and other companies must use “reasonable security measures” to protect consumers’ data.”  It is very promising that consumers are being protected in this manner, as it is long overdue.  However, vagueness, much like in the definition of IoT, is still the issue.  There needs to be more policy writing that fosters concrete laws able to move with the dynamically changing landscape.  This does show the government agency’s overall support for protecting this newly emerging field.


HTC is another example of how the FTC was willing to go after offenders in this grey area of the Act.  The FTC alleged that HTC failed to implement reasonable security measures when HTC, among other illegal conduct, introduced permission re-delegation vulnerabilities in its customized, pre-installed mobile applications on Android-based phones, thereby undermining the operating system’s more protective security model (IEEE & Loza de Siles, n.d.).  This shows that even though the policy is archaic, there is still a government entity looking out for consumers. Accordingly, the important take-away regarding the FTC’s tried-and-true guidance is that what constitutes “industry-tested and accepted methods” of data security is dynamic and a constantly moving target (IEEE & Loza de Siles, n.d.).  But where do these “reasonable security measures” end?  One can clearly see how this may deter innovators from pursuing such areas of interest.  In the end, there need to be more capable policy writers to keep up with the times. It looks as though severe re-writes need to happen in the next five to ten years.  Only then will innovators and security experts truly see eye to eye.


One of the fastest growing areas in technology is the Internet of Things (IoT), and this is a very exciting time.  There are some very important new emerging technologies to take note of that will allow for more innovation in the IoT field.  As the field continues to grow, there will always be more potential risks, and the emerging security solutions and methodologies are grossly behind.  Policy is even further behind the technology needed to combat some of the threats IoT faces.  For this field to get the growth it needs, cyber policy must be written that gives innovators the comfort to develop in this space.  Until that is done, there will not be enough significant innovation to alleviate all the security threats, because no one can fund a startup in this space without assuming the investment will go directly to liability issues within a few years, or even worse, in its first year.  Seeing the government take the initiative to protect consumers is, however, very refreshing.



Challa, S., Wazid, M., Kumar Das, A., Kumar, N., Reddy, A., Yoon, E., & Yoo, K. (2017). Secure signature-based authenticated key establishment scheme for future IoT applications. IEEE Access, 5, 3028-3043. Retrieved from

Forbes, & Morgan, J. (2014, May 13). A simple explanation of ‘the internet of things’. Retrieved from

Gartner Research. (n.d.). Internet of things defined – tech definitions by gartner. Retrieved from

Hasan, M. M., & Mouftah, H. T. (2016). Optimal trust system placement in smart grid SCADA networks. IEEE Access, 4, 2907-2919. doi:10.1109/access.2016.2564418

IEEE, & Loza de Siles, E. (n.d.). Cybersecurity Law and Emerging Technologies Part 1 – IEEE Future Directions. Retrieved from

Odelu, V., Das, A. K., Khurram Khan, M., Choo, K. R., & Jo, M. (2017). Expressive CP-ABE scheme for mobile devices in IoT satisfying constant-size keys and ciphertexts. IEEE Access, 5, 3273-3283. doi:10.1109/access.2017.2669940

RFC 7516 – JSON Web Encryption (JWE). (n.d.). Retrieved from

RFC 7519 – JSON Web Token (JWT). (n.d.). Retrieved from


Digital Forensics Comparison of Data Source Relevance per Investigation


Many sciences are grounded in the fact that certain information never changes: gravity is constant, water molecules can be a liquid, a solid, or a gas, and DNA can help match identity in human beings.  Digital forensics is different, because the medium in which forensic experts work is technology, and technology changes all the time.  Keeping up with the latest technological advances and their data sources, the common places to find specific information, can be the difference between winning and losing a case.  Knowing where to look, and in which order, can change based on the type of investigation a digital forensic investigator is working on.  We will look at the collection and examination of data sources for the more common investigations that have been seen.

Network Intrusion Investigation

Network intrusions are a continual problem and will be for some time; there won’t be a shortage of network intrusion investigations anytime soon.  (Fung, 2013) says, “The Pentagon reports getting 10 million attempts a day,” which is a scary and incredible statistic on its own.  But this isn’t just at the government agency level: BP, the energy company, has been experiencing 50,000 cyber intrusion attempts per day (Fung, 2013). A recent Verizon report shows not only that network intrusions are steadily rising, but that the time to compromise is decreasing (Verizon, 2016, p. xx).  This puts a large amount of pressure on the digital forensics community to speed up discovery.

Some of the different types of data that would need to be collected in a network intrusion investigation include:

  • IDS and Firewall logs
  • HTTP, FTP, SMTP logs
  • Network Applications logs
  • Backtracking Transmission with TCP connections
  • Artifacts and remnants of network traffic on hard drives of seized systems
  • Live traffic captured by packet sniffer
  • Individual systems ARP tables, SNMP messages


Collecting data from these areas is more challenging than collecting other data elsewhere on the system.  The data will differ in every investigation, but the objective is to find any kind of consistency across network intrusion investigations.  Many network intrusion investigations deal with network state; discovering the network state allows forensic experts to find possible entry points.  One of the first things that needs to be done is painting a picture of the network configuration, a blueprint of external-facing applications and APIs.  A beneficial tool in this scenario is the ability to create an accurate timeline of events, so the number one priority of this investigation is obtaining system and application logs, which allow a forensic expert to formulate a timeline. In Table 1 we can see that there are numerous data sources to pull from.  However, the internal network and system logs, which include firewall, IDS, and Active Directory logs, prove to be the most viable data sources in this specific type of investigation.  There is also a very high probability of collection, since most of the information is obtained by taking a snapshot of the logs from a cooperative network administrator.

Table 1. Shows the different data sources in a network intrusion investigation

In a network intrusion investigation, a forensic expert wants visibility at the packet level, both inbound and outbound.  The prioritization of data sources is as follows:

  1. Internal Network System Logs
  2. ISP Service Logs
  3. Computer and or server hard drives


Examining the data that was found is a separate story.  Internal logs will contain the information a forensic expert needs to build the important event timeline; however, there can be a large amount of data to examine.  Thanks to tools like EnCase, this becomes slightly easier for the forensic expert.  This is where IDS systems play a huge role: intrusion detection systems can capture anomaly-based or statistics-based events, which are flagged with alerts.  Focusing on the alerts that were raised can give a great starting point for the examination in a network intrusion investigation.  This is not the end-all-be-all data source; in fact, many things could change the type of data a forensic expert gets back.  (Forensic Mag, 2013) says, “any number of activities or events might influence or affect the collected data in unknown ways, including TCP relaying, proxy servers, complex packet routing, Web and e-mail anonymizers, Internet Protocol (IP) address or e-mail spoofing, compromised third party systems, session hijacking and other person-in-the-middle attacks, and domain name system (DNS) poisoning.”  Also, in a sophisticated network intrusion, logs have the potential to be deleted or cleared.  Even so, the examination of the internal network logs is invaluable in this type of investigation.
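The timeline-building step described above can be sketched as a simple merge of timestamped entries from multiple log sources into one chronological view.  The log formats, field layout, and sample entries here are invented for illustration.

```python
from datetime import datetime

def build_timeline(*log_sources):
    """Merge (source_name, [(timestamp_string, message), ...]) inputs
    into one chronologically sorted list of events."""
    events = []
    for source_name, lines in log_sources:
        for stamp, message in lines:
            when = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
            events.append((when, source_name, message))
    return sorted(events)   # tuples sort by timestamp first

# Hypothetical snapshots from a firewall and an IDS.
firewall = ("firewall", [("2017-03-01 02:14:09", "DENY tcp 203.0.113.7:445"),
                         ("2017-03-01 02:13:55", "ALLOW tcp 203.0.113.7:22")])
ids = ("ids", [("2017-03-01 02:14:01", "ALERT ssh brute force from 203.0.113.7")])

for when, source, msg in build_timeline(firewall, ids):
    print(when, source, msg)
```

Interleaving the sources shows the IDS alert landing between the two firewall entries, which is exactly the kind of cross-source ordering an examiner uses to reconstruct an intrusion.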

ISP server logs are also a great data source, primarily because they can give a general location of where the network intrusion came from, ultimately leading to an arrest.  This session data can be obtained with a warrant for a specific customer, which will give a forensic expert all the pertinent data an ISP has for a specific investigation (Forensic Mag, 2013).

Malware Intrusion Investigation

Malware intrusion investigations include, but are not limited to, worms, Trojans, botnets, rootkits, and ransomware.  Malware is a huge problem in the United States and abroad.  (Panda Labs, 2016) says, “18 million new malware samples were captured in this quarter alone, an average of 200,000 each day,” as seen below in Figure 1.  The most unbelievable part of this statistic is that it is based on just one quarter.  Malware investigations are on the rise.  Understanding how malware enters a computer and how it communicates gives the forensic expert a huge advantage in locating the exact places to look on a compromised system, which in turn increases the efficiency of the investigation.

Figure 1. Malware identified over the years.


Malware investigations, unlike network intrusion investigations, predominantly look at the malware itself.  Understanding how the malware was introduced may lead to a conviction.  The level of complexity, damage, and data leakage will be found on the hard drive of the infected computer or server itself, and more importantly, in RAM.  As a matter of fact, (SANS Digital Forensics and Incident Response Blog, 2016) says, “Investigators who do not look at volatile memory are leaving evidence at the crime scene.” Much like in the data collection for a network intrusion investigation, forensic experts need a basic understanding of what the operating system considers normal behavior.  Here, golden images and IDS solutions may help identify normal behavior, but volatile memory will be the number one data source for this type of investigation.  (SANS Digital Forensics and Incident Response Blog, 2016) continues, “It is this evidence that often proves to be the smoking gun that unravels the story of what happened on a system.”

Table 2. Depicts the order of data sources in a Malware installation investigation.



The examination of the volatile memory on the compromised computer or server will yield user actions, as well as evil processes and furtive behaviors implemented by malicious code (SANS Digital Forensics and Incident Response Blog, 2016).  While RAM would be the top data source a forensic expert looks at, the Registry, if this is a Windows machine, would also be of interest.  Time zone information, audit policy, wireless SSIDs, locations of auto-start programs, user activities, and mounted devices can all be obtained from the Windows registry (Nelson, Phillips, & Steuart, 2010, p. xx), as demonstrated in Figure 2 below.  In Figure 3 there is USB device information that can be obtained from the registry.  This would all be valuable information when studying whether the malware moved from computer to computer on the internal network and how it behaves in general.  Studying network logs to see if the malware is communicating with an external server would also be worthwhile.  The prioritized list of data sources for the malware installation investigation would look as follows:

  1. Computer / Server HD
  2. Internal Network System Logs
  3. ISP Server Logs

Figure 2. Shows the history obtained from a Windows 7 registry.

Figure 3. A registry value showing a USB device that was plugged into the computer


Figure 4. Shows the created date and last access date of a wireless network


Insider File Deletion Investigation

One of the biggest threats to a business is the insider threat. Insiders include anyone authorized beyond the authority of the public.  (Cohen, 2012, p. xx) says, “Specifically, 76% of disloyal insiders were identified after being caught to have taken steps to conceal their identities, actions, or both, 60% compromised another’s user’s account to carry out their acts, and 88% involved either modification or deletion of information.”  This includes a disgruntled employee who has possibly turned, or an employee planted in the company working on behalf of another company.  One of the main reasons this threat is so difficult to detect is that the employee is given regular access to the company’s network, which allows them to know where sensitive data is kept.


In an insider deletion investigation, access to the offender’s computer hard drive would be a great first step.  Collection would more than likely show nothing, since the insider would probably try to cover his or her tracks, but the hard drive gives a forensics expert the ability to see whether more devices need to be considered in the investigation, such as removable devices and remote storage.  In the event of file deletion, the computers the data was deleted from can reveal what account deleted the file.  (Cohen, 2012, p. xx) continues, “While it is possible that an insider might use known malicious attack methods typically detected by intrusion detection methodologies and system, doing things that trigger such systems is rarely if ever necessary for an authorized insider.”  So while network and system logs might still prove useful, the activity would be very difficult to identify.

Figure 4. Shows Active directory of a user and his/her last login.


The data gained from the registry on the insider’s hard drive would be the best starting point here, allowing a forensic expert to gauge a sense of normal computer usage and see whether there are any anomalies.  The data from the network Active Directory that controls user accounts for the entire company would allow forensic experts to pinpoint the account used in the deletion.  In the examination, combining physical sensors, key card access, and account access from system logs proves invaluable; Figure 4 above shows useful information that can be obtained from Active Directory as well.  Examiners combine this data to understand consistencies and inconsistencies.  This can also give a forensic expert an approximate time of the event, allowing the examiner to build a potential timeline for the investigation.  As seen below in Table 3, the starting point would be the compromised files on the hard drive of the given computer or server.

Table 3.  Data sources ranking in an insider deletion investigation
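Correlating account activity with physical access, as described above, can be sketched as follows.  The event shapes (integer timestamps, badge in/out records, deletion records) are illustrative assumptions; real Active Directory and key-card logs would need parsing first.

```python
def badge_present(badge_events, user, when):
    """True if key-card logs show `user` inside the building at time `when`.
    badge_events: chronologically sorted (time, user, 'in'|'out') tuples."""
    state = "out"
    for t, u, direction in badge_events:
        if t > when:
            break
        if u == user:
            state = direction
    return state == "in"

def flag_inconsistencies(deletions, badge_events):
    """Deletions attributed to an account while its owner was badged out
    suggest a compromised account, worth deeper examination."""
    return [(t, acct) for t, acct in deletions
            if not badge_present(badge_events, acct, t)]

# Hypothetical data: jdoe badges in at t=900 and out at t=1700.
badge = [(900, "jdoe", "in"), (1700, "jdoe", "out")]
deletions = [(1300, "jdoe"), (2100, "jdoe")]   # second deletion after badge-out
print(flag_inconsistencies(deletions, badge))  # [(2100, 'jdoe')]
```

The flagged entry is exactly the kind of inconsistency between physical sensors and account logs that narrows an insider investigation.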


As we can see, there are many different areas where a forensic expert can look for data, and as technology continues to advance, those areas will grow.  The time it takes to compromise a system versus the time it takes to discover the compromise is still very far apart, which leads to the ultimate consensus in my findings: to be the forensic investigator on any one of these investigations, one would have to look everywhere.  Having a general understanding of the crime helps in many scenarios, but not all.  When certain security measures aren’t in place, there is little an examination can do, specifically in the insider threat scenario; the forensic examination is only as good as the carelessness of the insider and the security that was in place at the time.  Having general guidelines, a clear understanding of the investigation, and a priority list of known data sources can go a very long way.


National Institute of Justice (U.S.). (2004). Special report, forensic examination of digital evidence: a guide for law enforcement (199408). Retrieved from publisher not identified website:

National Institute of Justice (U.S.). (2007). Report, investigations involving the internet and computer networks. Retrieved from website:

SANS Digital Forensics and Incident Response Blog. (2016, October 29). Digital forensics and incident response blog | malware can hide, but it must run. Retrieved from

Cohen, F. (2012). Forensic methods for detecting insider turning behaviors. 2012 IEEE Symposium on Security and Privacy Workshops. doi:10.1109/spw.2012.21

Forensic Mag. (2013, May 28). The case for teaching network protocols to computer forensics examiners: part 1. Retrieved from

Fung, B. (2013, March 8). How many cyberattacks hit the united states last year? Retrieved from

Panda Labs. (2016, October 20). Cybercrime reaches new heights in the third quarter. Retrieved from

Shephard, D. (2015, March 16). 84 fascinating & scary it security statistics. Retrieved from

Verizon. (2016). 2016 data breach investigations report. Author.


Top Places for Malware to hide 2017

With most commercial anti-virus software vendors using signature-based malware classification methods, this becomes a game of creating code that is obfuscated just enough to change the signature and go undetected.  (Shijo & Salim, 2015, p. xx) say, “In static analysis features are extracted from the binary code of programs and are used to create models describing them.”  This is the most commonly used method of detection, and obfuscation is the simple workaround: signatures need to be frequently updated to catch common malware, while malware makers can simply change the obfuscation of the code.  One never catches up with the other. (Shijo & Salim, 2015, p. xx) continue, “The static analysis fails at different code obfuscation techniques used by the virus coders and also at polymorphic and metamorphic malware’s.”  Dynamic analysis has its own limitation: because a program’s behavior is monitored while in execution, the malware has to be run in a secure environment for a specific amount of time, and the analysis is limited by how long that takes.
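The cat-and-mouse dynamic above can be seen in a tiny sketch of hash-based signature matching: an exact hash match flags the sample, while any trivial byte change evades it.  The sample bytes and the signature set are invented for illustration.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A (hypothetical) signature database of known-bad sample hashes.
known_bad = {sha256_of(b"MALICIOUS PAYLOAD v1")}

def signature_scan(sample: bytes) -> bool:
    """Flag a sample only if its hash matches a known signature."""
    return sha256_of(sample) in known_bad

print(signature_scan(b"MALICIOUS PAYLOAD v1"))   # True: exact match
print(signature_scan(b"MALICIOUS  PAYLOAD v1"))  # False: one extra byte evades
```

A single inserted byte produces a completely different hash, which is why obfuscated or polymorphic variants slip past static signatures until the database is updated.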

The first place malware tries to hide itself is the Windows registry. (AlienVault, 2016) says, “the Windows registry is quite large and complex, which means there many places where malware can insert itself to achieve persistence.” A simple example is Poweliks, which sets a null entry using one of the built-in Windows APIs, ZwSetValueKey, allowing it to create a registry key with an encoded data blob, (AlienVault, 2016).  From this point it can hide out, autostart, and maintain persistence on many systems.

The second way malware hides itself is process injection, where the malware hijacks a running process and inserts bits of its own code into it.  (AlienVault, 2016) says, “Malware leverages process injection techniques to hide code execution and avoid detection by utilizing known “good” processes such as svchost.exe or explorer.exe.”

A third example is physical: the malware can be stored in the slack space of the drive.  (Berghel, 2007, p. xx) says, “At the sector level, any unused part of a partially filled sector is padded with either data from memory (RAM slack) or null characters (sector slack).”  The location is ideal because the operating system does not normally access this portion of the disk, so the code can lie dormant and resurface in response to specific commands.
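How much room sector slack offers is simple arithmetic: the unused tail of the last sector a file occupies.  A small sketch, assuming a 512-byte sector size:

```python
import math

def sector_slack(file_size: int, sector_size: int = 512) -> int:
    """Bytes of unused space in the last sector occupied by a file."""
    sectors = math.ceil(file_size / sector_size)
    return sectors * sector_size - file_size

# A 1300-byte file occupies three 512-byte sectors (1536 bytes),
# leaving 236 bytes of sector slack where data can hide.
print(sector_slack(1300))  # 236
```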


AlienVault. (2016, October 3). Malware hiding techniques to watch for: alienvault labs. Retrieved from

Shijo, P., & Salim, A. (2015). Integrated static and dynamic analysis for malware detection. Procedia Computer Science, 46, 804-811. doi:10.1016/j.procs.2015.02.149

Berghel, H. (2007). Hiding data, forensics, and anti-forensics. Communications of the ACM, 50(4), 15-20. doi:10.1145/1232743.1232761

Browser Attacks and Network Intrusion

Research Synthesis and Analysis of Browser Attacks and Network Intrusion

Browser attacks and network intrusion are risks users face every day simply by being connected to the internet.  One has to use a browser to be served content on the web, and one has to be connected to a network to view it.  We will take a closer look at both in this paper.

Browser Attacks

Browser attacks come in many different forms, making them very difficult to defend against. OWASP, the Open Web Application Security Project, is a nonprofit organization that has made an effort to catalog the many types of browser-based attacks in the wild.  OWASP is best known for its OWASP Top Ten Project.  The ten biggest browser-based attacks are as follows:

  1. Injection
  2. Broken Authentication & Session Management
  3. XSS or Cross Site Scripting
  4. Insecure Direct Object Reference
  5. Security Misconfiguration
  6. Sensitive Data Exposure
  7. Missing Function Level Access Control
  8. Cross Site Request Forgery
  9. Using Components with Known Vulnerabilities
  10. Unvalidated Redirects & Forwards

These are the ten main categories that browser attacks fall into.  More daunting still, even though the list was compiled in 2013, most of these categories remain visible on the internet and can still be exploited in today’s landscape.

Major Issues, Problems

The problems with browser attacks stem largely from the overwhelming number of browsers available to users.  Not all browsers handle content the same way, and not all of them protect against the OWASP Top Ten vulnerabilities in the same manner.  Beyond the five biggest browsers, Chrome, IE, Firefox, Safari, and Opera, there is also the problem of versions of each, which lets a vulnerability remain open to attack until a user gets around to updating.  A greater issue still is a web application built in 2013 and heavily used by a company: the company may lack the resources to upgrade it, yet it no longer works properly in modern browsers, potentially leaving thousands of computers pinned to a legacy browser and susceptible to every vulnerability disclosed since 2013.

If this wasn’t alarming enough, researchers have created frameworks that allow security engineers to test these web applications inside their companies.  One such penetration-testing framework is BeEF, the Browser Exploitation Framework.  It compiles many of the OWASP Top Ten vulnerabilities into a single interface used to exploit browsers, a process it calls “hooking”.  BeEF was built by a group of developers to explore and test browser vulnerabilities; it is an excellent platform for testing a browser’s susceptibility to XSS and other injection attacks, (Null Byte, 2015).
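On the defensive side, the XSS injections that hooking frameworks rely on are neutralized by output encoding.  A minimal sketch using Python’s standard library; the hook URL is hypothetical:

```python
import html

# Attacker-controlled input carrying a BeEF-style hook script.
user_input = '<script src="http://attacker.example/hook.js"></script>'

# Reflecting the input verbatim injects the hook into the page...
unsafe_page = "<p>Hello, " + user_input + "</p>"

# ...while escaping it renders the tag as inert text.
safe_page = "<p>Hello, " + html.escape(user_input) + "</p>"

print("<script" in safe_page)  # False
```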

New malware being developed in the wild is taking advantage of these browser vulnerabilities and exploiting them for man-in-the-middle browser attacks.  (Khandelwal, 2016) says, “Besides process level restrictions bypass, the AtomBombing code injection technique also allows attackers to perform man-in-the-middle (MITM) browser attacks, remotely take screenshots of targeted user desktops, and access encrypted passwords stored on a browser.”  In a recent article the AtomBombing technique was reported to have no patch.  (Khandelwal, 2016) says, “Since the AtomBombing technique exploits legitimate operating system functions to carry out the attack, Microsoft cannot patch the issue without changing how the entire operating system works. This is not a feasible solution, so there is no notion of a patch.”

Analysis, Ideas, and Solutions

Looking at the browser-based attacks above, in cases like AtomBombing there is little that can be done.  However, there are some general practices that can help an organization, or an ordinary computer user, defend against a large portion of these attacks, (How to Geek, n.d.):

  1. Keep your browser updated
  2. Enable Click-to-Play Plug-ins
  3. Uninstall Plug-ins you don’t need
  4. Keep Plug-ins updated
  5. Use a 64-bit Web Browser
  6. Run an Anti-Exploit Program
  7. Use Caution When Using Browser Extensions

In a work scenario, many of the items above can be enforced through Group Policy.  Many of these browser attacks have specific signatures that can be spotted by a good intrusion detection system such as Snort or Dell SonicWall.  With a tool like Dell KACE you can also inventory every web browser in use on a company’s network to make sure no legacy browsers are floating around.

Network Intrusion

Network intrusion is something everyone must deal with when connected to the internet, whether on a home network or at work.  (Moskowitz, 2014) defines it: “A network intrusion is any unauthorized activity on a computer network.”  This can include using the network for something it wasn’t intended to do, whether deliberately or not. (Moskowitz, 2014) continues, “In most cases, such unwanted activity absorbs network resources intended for other uses, and nearly always threatens the security of the network and/or its data.”

Major Issues, Problems

The largest problem with network intrusion attacks is the scale at which networks are growing.  With the emergence of the internet of things, toasters and thermostats now fall susceptible to old networking attack vectors.  (Hodo et al., n.d.) say, “Research conducted by Cisco reports there are currently 10 billion devices connected, compared to the world population of over 7 billion and it is believed it will increase by 4% by the year 2020.” At an RSA conference, a researcher discussed some very popular attack vectors that come up often when discussing network intrusion:

  1. Asymmetric Routing
  2. Buffer Overflow Attacks
  3. Scripts
  4. Protocol-Specific Attacks
  5. Traffic Flooding
  6. Trojans
  7. Worms

Intrusion to a network comes in two main forms: external intruders, who will more than likely use malware or exploits to gain access to a system, and internal intruders, who misuse the system by changing important data or stealing confidential data.

Analysis, Ideas, and Solutions

Intrusion detection systems, whether host-based (HIDS) or network-based (NIDS), bring the most hope for defending against many of the attack vectors discussed.  There are many different flavors of IDS, and selecting the right one is important and unique to each budget and network’s normal usage.  Some are signature-based; others use anomaly detection or pattern recognition, and recently we’ve seen a rise in hybrid approaches that take the best of both worlds.  The four main techniques in use are statistical analysis, evolutionary algorithms, protocol verification, and rule-based (signature) systems.  Used appropriately, these systems will catch uncharacteristic traffic.  Some need a baseline of traffic to get started, while others, like signature-based systems, work directly out of the box.  As networks grow more and more complex, so do IDS systems.  The ability to pool known attacks into signatures shared across companies is a powerful tool, but the landscape is changing and attacks are becoming more targeted in nature.  Anomaly-based systems therefore need to be used in conjunction with signature-based ones, though many companies face a resource issue here: anomaly-based systems need monitoring, since the potential for false positives is a lot higher.
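The anomaly-based idea can be sketched in a few lines: establish a statistical baseline, then flag traffic that deviates sharply from it.  The threshold and sample data below are illustrative; production systems use far richer features and models:

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) > threshold * sigma]

# Packets-per-second counts on a quiet network, with one traffic flood.
counts = [100, 104, 98, 101, 99, 103, 97, 102, 100, 5000]
print(flag_anomalies(counts))  # [5000]
```

A loose threshold like this is also exactly why anomaly-based systems generate false positives and need human monitoring, as noted above.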





Hodo, E., Bellekens, X., Hamilton, A., Dubouilh, P., Iorkyase, E., Tachtatzis, C., & Atkinson, R. (n.d.). Threat analysis of IoT networks using artificial neural network intrusion detection system. Paper presented at the meeting of the International Symposium on Networks, Computers and Communications, Hammamet, Tunisia.

How to Geek. (n.d.). 7 ways to secure your web browser against attacks. Retrieved from

Khandelwal, S. (2016, October 27). This code injection technique can potentially attack all versions of windows. Retrieved from

Moskowitz, R. (2014, December 25). Network intrusion: methods of attack | rsa conference. Retrieved from

Null Byte. (2015). Hack like a pro: how to hack web browsers with beef « null byte. Retrieved from

OWASP. (n.d.). Category:owasp top ten project – owasp. Retrieved from


The Theory (Hashing Functions, Salt, Pepper) – Explained

We need to hash passwords as a second line of defense. A server which can authenticate users necessarily contains, somewhere in its entrails, some data which can be used to validate a password. A very simple system would just store the passwords themselves, and validation would be a simple comparison. But if a hostile outsider were to gain a simple glimpse at the contents of the file or database table which contains the passwords, then that attacker would learn a lot. Unfortunately, such partial, read-only breaches do occur in practice (a mislaid backup tape, a decommissioned but not wiped-out hard disk, an aftermath of a SQL injection attack — the possibilities are numerous). See this blog post for a detailed discussion.

Since the overall contents of a server that can validate passwords are necessarily sufficient to indeed validate passwords, an attacker who obtained a read-only snapshot of the server is in position to make an offline dictionary attack: he tries potential passwords until a match is found. This is unavoidable. So we want to make that kind of attack as hard as possible. Our tools are the following:

  • Cryptographic hash functions: these are fascinating mathematical objects which everybody can compute efficiently, and yet nobody knows how to invert them. This looks good for our problem – the server could store a hash of a password; when presented with a putative password, the server just has to hash it to see if it gets the same value; and yet, knowing the hash does not reveal the password itself.
  • Salts: among the advantages of the attacker over the defender is parallelism. The attacker usually grabs a whole list of hashed passwords, and is interested in breaking as many of them as possible. He may try to attack several in parallel. For instance, the attacker may consider one potential password, hash it, and then compare the value with 100 hashed passwords; this means that the attacker shares the cost of hashing over several attacked passwords. A similar optimization is precomputed tables, including rainbow tables; this is still parallelism, with a space-time change of coordinates. The common characteristic of all attacks which use parallelism is that they work over several passwords which were processed with the exact same hash function. Salting is about using not one hash function, but a lot of distinct hash functions; ideally, each instance of password hashing should use its own hash function. A salt is a way to select a specific hash function among a big family of hash functions. Properly applied salts will completely thwart parallel attacks (including rainbow tables).
  • Slowness: computers become faster over time (Gordon Moore, co-founder of Intel, theorized it in his famous law). Human brains do not. This means that attackers can “try” more and more potential passwords as years pass, while users cannot remember more and more complex passwords (or flatly refuse to). To counter that trend, we can make hashing inherently slow by defining the hash function to use a lot of internal iterations (thousands, possibly millions).

We have a few standard cryptographic hash functions; the most famous are MD5 and the SHA family. Building a secure hash function out of elementary operations is far from easy. When cryptographers want to do that, they think hard, then harder, and organize a tournament where the functions fight each other fiercely. When hundreds of cryptographers have gnawed and scraped and punched at a function for several years and found nothing bad to say about it, then they begin to admit that maybe that specific function could be considered as more or less secure. This is just what happened in the SHA-3 competition. We have to use this way of designing hash functions because we know no better way. Mathematically, we do not know if secure hash functions actually exist; we just have “candidates” (that’s the difference between “it cannot be broken” and “nobody in the world knows how to break it”).

A basic hash function, even if secure as a hash function, is not appropriate for password hashing, because:

  • it is unsalted, allowing for parallel attacks (rainbow tables for MD5 or SHA-1 can be obtained for free, you do not even need to recompute them yourself);
  • it is way too fast, and gets faster with technological advances. With a recent GPU (i.e. off-the-shelf consumer product which everybody can buy), hashing rate is counted in billions of passwords per second.

So we need something better. It so happens that slapping together a hash function and a salt, and iterating it, is not easier to do than designing a hash function — at least, if you want the result to be secure. There again, you have to rely on standard constructions which have survived the continuous onslaught of vindictive cryptographers.

Good Password Hashing Functions


PBKDF2

PBKDF2 comes from PKCS#5. It is parameterized with an iteration count (an integer, at least 1, no upper limit), a salt (an arbitrary sequence of bytes, no constraint on length), a required output length (PBKDF2 can generate an output of configurable length), and an “underlying PRF”. In practice, PBKDF2 is always used with HMAC, which is itself a construction built over an underlying hash function. So when we say “PBKDF2 with SHA-1”, we actually mean “PBKDF2 with HMAC with SHA-1”.

Advantages of PBKDF2:

  • Has been specified for a long time, seems unscathed for now.
  • Is already implemented in various framework (e.g. it is provided with .NET).
  • Highly configurable (although some implementations do not let you choose the hash function, e.g. the one in .NET is for SHA-1 only).
  • Received NIST blessings (modulo the difference between hashing and key derivation; see later on).
  • Configurable output length (again, see later on).

Drawbacks of PBKDF2:

  • CPU-intensive only, thus amenable to high optimization with GPU (the defender is a basic server which does generic things, i.e. a PC, but the attacker can spend his budget on more specialized hardware, which will give him an edge).
  • You still have to manage the parameters yourself (salt generation and storage, iteration count encoding…). There is a standard encoding for PBKDF2 parameters but it uses ASN.1 so most people will avoid it if they can (ASN.1 can be tricky to handle for the non-expert).
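PBKDF2 is available directly in Python’s standard library; you still manage the salt and iteration count yourself, exactly as the drawback above describes.  A minimal sketch, with an illustrative (not recommended) iteration count:

```python
import hashlib
import hmac
import os

password = b"correct horse battery staple"
salt = os.urandom(16)   # random per-password salt, stored alongside the hash
iterations = 100_000    # illustrative; tune to what your server can tolerate

# "PBKDF2 with SHA-256" really means PBKDF2 with HMAC with SHA-256.
derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

# Verification: recompute with the stored salt and count, compare in constant time.
candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(hmac.compare_digest(derived, candidate))  # True
```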


bcrypt

bcrypt was designed by reusing and expanding elements of a block cipher called Blowfish. The iteration count is a power of two, which is a tad less configurable than PBKDF2, but sufficiently so nevertheless. This is the core password hashing mechanism in the OpenBSD operating system.

Advantages of bcrypt:

  • Many available implementations in various languages (see the links at the end of the Wikipedia page).
  • More resilient to GPU; this is due to details of its internal design. The bcrypt authors made it so voluntarily: they reused Blowfish because Blowfish was based on an internal RAM table which is constantly accessed and modified throughout the processing. This makes life much harder for whoever wants to speed up bcrypt with a GPU (GPU are not good at making a lot of memory accesses in parallel). See here for some discussion.
  • Standard output encoding which includes the salt, the iteration count and the output as one simple to store character string of printable characters.

Drawbacks of bcrypt:

  • Output size is fixed: 192 bits.
  • While bcrypt is good at thwarting GPU, it can still be thoroughly optimized with FPGA: modern FPGA chips have a lot of small embedded RAM blocks which are very convenient for running many bcrypt implementations in parallel within one chip. It has been done.
  • Input password size is limited to 51 characters. In order to handle longer passwords, one has to combine bcrypt with a hash function (you hash the password and then use the hash value as the “password” for bcrypt). Combining cryptographic primitives is known to be dangerous (see above) so such games cannot be recommended on a general basis.


scrypt

scrypt is a much newer construction (designed in 2009) which builds over PBKDF2 and a stream cipher called Salsa20/8, but these are just tools around the core strength of scrypt, which is RAM. scrypt has been designed to inherently use a lot of RAM (it generates some pseudo-random bytes, then repeatedly reads them in a pseudo-random sequence). “Lots of RAM” is something which is hard to make parallel. A basic PC is good at RAM access, and will not try to read dozens of unrelated RAM bytes simultaneously. An attacker with a GPU or a FPGA will want to do that, and will find it difficult.

Advantages of scrypt:

  • A PC, i.e. exactly what the defender will use when hashing passwords, is the most efficient platform (or close enough) for computing scrypt. The attacker no longer gets a boost by spending his dollars on GPU or FPGA.
  • One more way to tune the function: memory size.

Drawbacks of scrypt:

  • Still new (my own rule of thumb is to wait at least 5 years of general exposure, so no scrypt for production until 2014 – but, of course, it is best if other people try scrypt in production, because this gives extra exposure).
  • Not as many available, ready-to-use implementations for various languages.
  • Unclear whether the CPU / RAM mix is optimal. For each of the pseudo-random RAM accesses, scrypt still computes a hash function. A cache miss will be about 200 clock cycles, one SHA-256 invocation is close to 1000. There may be room for improvement here.
  • Yet another parameter to configure: memory size.

OpenPGP Iterated And Salted S2K

I cite this one because you will use it if you do password-based file encryption with GnuPG. That tool follows the OpenPGP format which defines its own password hashing functions, called “Simple S2K”, “Salted S2K” and “Iterated and Salted S2K”. Only the third one can be deemed “good” in the context of this answer. It is defined as the hash of a very long string (configurable, up to about 65 megabytes) consisting of the repetition of an 8-byte salt and the password.

As far as these things go, OpenPGP’s Iterated And Salted S2K is decent; it can be considered as similar to PBKDF2, with less configurability. You will very rarely encounter it outside of OpenPGP, as a stand-alone function.

Unix “crypt”

Recent Unix-like systems (e.g. Linux), for validating user passwords, use iterated and salted variants of the crypt() function based on good hash functions, with thousands of iterations. This is reasonably good. Some systems can also use bcrypt, which is better.

The old crypt() function, based on the DES block cipher, is not good enough:

  • It is slow in software but fast in hardware, and can be made fast in software too but only when computing several instances in parallel (technique known as SWAR or “bitslicing”). Thus, the attacker is at an advantage.
  • It is still quite fast, with only 25 iterations.
  • It has a 12-bit salt, which means that salt reuse will occur quite often.
  • It truncates passwords to 8 characters (characters beyond the eighth are ignored) and it also drops the upper bit of each character (so you are more or less stuck with ASCII).

But the more recent variants, which are active by default, will be fine.

Bad Password Hashing Functions

About everything else, in particular virtually every homemade method that people relentlessly invent.

For some reason, many developers insist on designing functions themselves, and seem to assume that “secure cryptographic design” means “throw together every kind of cryptographic or non-cryptographic operation that can be thought of”. See this question for an example. The underlying principle seems to be that the sheer complexity of the resulting utterly tangled mess of instructions will befuddle attackers. In practice, though, the developer himself will be more confused by his own creation than the attacker.

Complexity is bad. Homemade is bad. New is bad. If you remember that, you’ll avoid 99% of problems related to password hashing, or cryptography, or even security in general.

Password hashing in Windows operating systems used to be mindbogglingly awful and now is just terrible (unsalted, non-iterated MD4).

Key Derivation

Up to now, we considered the question of hashing passwords. A close problem is about transforming a password into a symmetric key which can be used for encryption; this is called key derivation and is the first thing you do when you “encrypt a file with a password”.

It is possible to make contrived examples of password hashing functions which are secure for the purpose of storing a password validation token, but terrible when it comes to generating symmetric keys; and the converse is equally possible. But these examples are very “artificial”. For practical functions like the one described above:

  • The output of a password hashing function is acceptable as a symmetric key, after possible truncation to the required size.
  • A Key Derivation Function can serve as a password hashing function as long as the “derived key” is long enough to avoid “generic preimages” (the attacker is just lucky and finds a password which yields the same output). An output of more than 100 bits or so will be enough.

Indeed, PBKDF2 and scrypt are KDF, not password hashing function — and NIST “approves” of PBKDF2 as a KDF, not explicitly as a password hasher (but it is possible, with only a very minute amount of hypocrisy, to read NIST’s prose in such a way that it seems to say that PBKDF2 is good for hashing passwords).

Conversely, bcrypt is really a block cipher (the bulk of the password processing is the “key schedule”) which is then used in CTR mode to produce three blocks (i.e. 192 bits) of pseudo-random output, making it a kind of hash function. bcrypt can be turned into a KDF with a little surgery, by using the block cipher in CTR mode for more blocks. But, as usual, we cannot recommend such homemade transforms. Fortunately, 192 bits are already more than enough for most purposes (e.g. symmetric encryption with GCM or EAX only needs a 128-bit key).

Miscellaneous Topics

How many iterations?

As many as possible! Salted-and-slow hashing is an arms race between the attacker and the defender. You use many iterations to make the hashing of a password harder for everybody. To improve security, you should set that number as high as you can tolerate on your server, given the tasks that your server must otherwise fulfill. Higher is better.
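One practical way to choose the count is to benchmark on the actual server and raise it until a single hash takes as long as you can tolerate.  A rough calibration sketch; the target time and starting count are arbitrary assumptions:

```python
import hashlib
import os
import time

def calibrate_iterations(target_seconds: float = 0.05, start: int = 10_000) -> int:
    """Double the PBKDF2 iteration count until one hash takes ~target_seconds."""
    salt = os.urandom(16)
    iterations = start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark", salt, iterations)
        if time.perf_counter() - t0 >= target_seconds:
            return iterations
        iterations *= 2

print(calibrate_iterations())  # machine-dependent
```

Re-run the calibration when you upgrade hardware, and store the chosen count next to each hash so old passwords can still be verified.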

Collisions and MD5

MD5 is broken: it is computationally easy to find a lot of pairs of distinct inputs which hash to the same value. These are called collisions.

However, collisions are not an issue for password hashing. Password hashing requires the hash function to be resistant to preimages, not to collisions. Collisions are about finding pairs of messages which give the same output without restriction, whereas in password hashing the attacker must find a message which yields a given output that the attacker does not get to choose. This is quite different. As far as we know, MD5 is still (almost) as strong as it has ever been with regard to preimages (there is a theoretical attack, but it remains ludicrously impossible to run in practice).

The real problem with MD5 as it is commonly used in password hashing is that it is very fast, and unsalted. However, PBKDF2 used with MD5 would be robust. You should still use SHA-1 or SHA-256 with PBKDF2, if only for public relations: people get nervous when they hear “MD5”.

Salt Generation

The main and only point of the salt is to be as unique as possible. Whenever a salt value is reused anywhere, this has the potential to help the attacker.

For instance, if you use the user name as salt, then an attacker (or several colluding attackers) could find it worthwhile to build rainbow tables which attack the password hashing function when the salt is “admin” (or “root” or “joe”) because there will be several, possibly many sites around the world which will have a user named “admin”. Similarly, when a user changes his password, he usually keeps his name, leading to salt reuse. Old passwords are valuable targets, because users have the habit of reusing passwords in several places (that’s known to be a bad idea, and advertised as such, but they will do it nonetheless because it makes their life easier), and also because people tend to generate their passwords “in sequence”: if you learn that Bob’s old password is “SuperSecretPassword37”, then Bob’s current password is probably “SuperSecretPassword38” or “SuperSecretPassword39”.

The cheap way to obtain uniqueness is to use randomness. If you generate your salt as a sequence of random bytes from the cryptographically secure PRNG that your operating system offers (/dev/urandom, CryptGenRandom()…) then you will get salt values which will be “unique with a sufficiently high probability”. 16 bytes are enough so that you will never see a salt collision in your life, which is overkill but simple enough.

UUIDs are a standard way of generating “unique” values. Note that “version 4” UUIDs just use randomness (122 random bits), as explained above. A lot of programming frameworks offer simple-to-use functions to generate UUIDs on demand, and they can be used as salts.
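Both options are one-liners with Python’s standard library:

```python
import secrets
import uuid

# 16 bytes from the OS CSPRNG: unique with overwhelming probability.
salt = secrets.token_bytes(16)

# A version-4 UUID is an equally serviceable salt (122 random bits).
salt_from_uuid = uuid.uuid4().bytes

print(len(salt), len(salt_from_uuid))  # 16 16
```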

Salt Secrecy

Salts are not meant to be secret; otherwise we would call them keys. You do not need to make salts public, but if you have to make them public (e.g. to support client-side hashing), then don’t worry too much about it. Salts are there for uniqueness. Strictly speaking, the salt is nothing more than the selection of a specific hash function within a big family of functions.


Pepper

Cryptographers can never let a metaphor alone; they must extend it with further analogies and bad puns. “Peppering” is about using a secret salt, i.e. a key. If you use a “pepper” in your password hashing function, then you are switching to a quite different kind of cryptographic algorithm; namely, you are computing a Message Authentication Code over the password. The MAC key is your “pepper”.

Peppering makes sense if you can have a secret key which the attacker will not be able to read. Remember that we use password hashing because we consider that an attacker could grab a copy of the server database, or possibly of the whole disk of the server. A typical scenario would be a server with two disks in RAID 1. One disk fails (electronic board fries – this happens a lot). The sysadmin replaces the disk, the mirror is rebuilt, no data is lost due to the magic of RAID 1. Since the old disk is dysfunctional, the sysadmin cannot easily wipe its contents. He just discards the disk. The attacker searches through the garbage bags, retrieves the disk, replaces the board, and lo! He has a complete image of the whole server system, including database, configuration files, binaries, operating system… the full monty, as the British say. For peppering to be really applicable, you need to be in a special setup where there is something more than a PC with disks; you need a HSM. HSM are very expensive, both in hardware and in operational procedure. But with a HSM, you can just use a secret “pepper” and process passwords with a simple HMAC (e.g. with SHA-1 or SHA-256). This will be vastly more efficient than bcrypt/PBKDF2/scrypt and their cumbersome iterations. Also, usage of a HSM will look extremely professional when doing a WebTrust audit.
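The HMAC-with-pepper scheme described above can be sketched as follows; here the pepper is simply generated in memory, whereas in the scenario above it would live inside the HSM and never touch the disk:

```python
import hashlib
import hmac
import os

pepper = os.urandom(32)  # the secret MAC key; kept in the HSM in practice

def peppered_hash(password: bytes, salt: bytes) -> bytes:
    """MAC over salt + password; the pepper is the key."""
    return hmac.new(pepper, salt + password, hashlib.sha256).digest()

salt = os.urandom(16)
tag = peppered_hash(b"hunter2", salt)
print(len(tag))  # 32
```

A per-password salt is still used for uniqueness; only the pepper is secret.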

Client-side hashing

Since hashing is (deliberately) expensive, it could make sense, in a client-server situation, to harness the CPU of the connecting clients. After all, when 100 clients connect to a single server, the clients collectively have a lot more muscle than the server.

To perform client-side hashing, the communication protocol must be enhanced to support sending the salt back to the client. This implies an extra round-trip, when compared to the simple client-sends-password-to-server protocol. This may or may not be easy to add to your specific case.

Client-side hashing is difficult in a Web context because the client uses Javascript, which is quite anemic for CPU-intensive tasks.

In the context of SRP, password hashing necessarily occurs on the client side.


Use bcrypt. PBKDF2 is not bad either. If you use scrypt you will be a “slightly early adopter” with the risks that are implied by this expression; but it would be a good move for scientific progress (“crash dummy” is a very honourable profession).

Network covert timing channels


Network covert timing channels are one way attackers communicate with compromised host computers on the internet.  (Cabuk, Brodley, & Shields, 2004, p. xx) say, “A network covert channel is a mechanism that can be used to leak information across a network in violation of a security policy and in a manner, that can be difficult to detect.”  Covert channels come in two forms, storage and timing, and timing channels are slightly different: a sender process signals information to another by modulating its own use of system resources in such a way that this manipulation affects the real response time observed by the second process, (Cabuk, Brodley, & Shields, 2004, p. xx).

There are two types of covert timing channels that exist, passive and active. (Gianvecchio & Wang, 2007, p. xx) states, “active refers to covert timing channels that generate additional traffic to transmit information, while passive refers to covert timing channels that manipulate the timing of existing traffic.”  These two types of covert timing channels have proven very effective in concealing data transfer over the internet.

Detection is broken down into two different sets of tests: shape and regularity.  The shape of traffic is described by first-order statistics such as the mean and variance.  The regularity of traffic is described by second- or higher-order statistics or by correlation analysis.  Entropy and conditional entropy have shown promise for detection.  (Gianvecchio & Wang, 2007) say, “Entropy rate, the average entropy per random variable, can be used as a measure of complexity or regularity.”  This lets administrators distinguish the regular, low-complexity timing of a covert channel from the genuine randomness of legitimate packet timing.
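The entropy test above can be sketched in a few lines: bin the inter-packet delays and compute the first-order entropy of the bin distribution. The bin count, delay cap, and traffic samples below are invented for illustration and are not values from the Gianvecchio & Wang paper.

```python
# Rough sketch of entropy-based covert timing channel detection:
# a timing channel encoding bits as "short" vs "long" delays clusters
# into few bins, so its entropy is lower than legitimate traffic's.
import math
from collections import Counter

def delay_entropy(delays, bins=10, max_delay=1.0):
    """First-order entropy (bits) of inter-packet delays after binning."""
    counts = Counter(min(int(d / max_delay * bins), bins - 1) for d in delays)
    n = len(delays)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Legitimate traffic: delays spread across many bins -> higher entropy.
normal = [0.05, 0.31, 0.47, 0.12, 0.88, 0.66, 0.23, 0.71, 0.39, 0.94]
# Covert channel: delays clustered on two values (encoding 0/1) -> low entropy.
covert = [0.10, 0.50, 0.10, 0.10, 0.50, 0.50, 0.10, 0.50, 0.10, 0.50]

assert delay_entropy(normal) > delay_entropy(covert)
```

A real detector would compare the observed entropy against a baseline learned from known-clean traffic, and would also use conditional entropy to catch channels that mimic the first-order distribution.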




Cabuk, S., Brodley, C. E., & Shields, C. (2004). IP covert timing channels: Design and detection. Proceedings of the 11th ACM Conference on Computer and Communications Security (CCS ’04). doi:10.1145/1030083.1030108

Gianvecchio, S., & Wang, H. (2007). Detecting covert timing channels: An entropy-based approach. Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS ’07). doi:10.1145/1315245.1315284

AT&T and BellSouth Passing Out Routers that enable DDoS Attacks

One of the more interesting TCP/IP vulnerabilities is its inability to guarantee where a packet is coming from.  RIP, the Routing Information Protocol, is an essential component of a TCP/IP network: it is used to distribute routing information within networks, such as shortest paths, and to advertise routes out from the local network, (Chambers, Dolske, & Iyer, n.d.).  The flaw in RIP is that, like TCP/IP itself, it has no built-in authentication.  This attack is significant because RIP attacks change where data is going, unlike the more common attacks that forge where data has come from.  An attacker who can forge RIP packets from anywhere in the world poses a huge security flaw: the attacker can claim to be another host offering the fastest route or path out of the network.  This is especially troubling because there is a higher-level DDoS attack that abuses the RIPv1 protocol, the reflection amplification attack. (Mimoso, 2015) says, “Reflection attacks happen when an attacker forges its victim’s IP addresses in order to establish the victim’s systems as the source of requests sent to a massive number of machines.”  Because the attacker controls the routing information, it can send many requests on behalf of a network; the recipients of those requests then issue an overwhelming flood of responses back to the victim’s network, crashing it, (Mimoso, 2015).
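The lack of authentication is visible in the RIPv1 wire format itself (RFC 1058): a packet is just a fixed binary layout with no authentication field, so any host that can send UDP to port 520 can advertise a route. The sketch below builds such a packet with the standard library; the network address and metric are made up for illustration, and the packet is only constructed, never sent.

```python
# Illustrative sketch of the RIPv1 packet layout (RFC 1058): a 4-byte
# header plus 20-byte route entries, and nowhere to put authentication.
import socket
import struct

RIP_RESPONSE = 2   # command: 1 = request, 2 = response
RIP_VERSION_1 = 1
AF_INET_RIP = 2    # address family identifier for IP in RIP

def forged_ripv1_response(network: str, metric: int) -> bytes:
    # Header: command, version, must-be-zero.
    header = struct.pack("!BBH", RIP_RESPONSE, RIP_VERSION_1, 0)
    # Route entry: AFI, zero, IP address, 8 zero bytes, metric.
    entry = struct.pack("!HH4s8xI", AF_INET_RIP, 0,
                        socket.inet_aton(network), metric)
    return header + entry

pkt = forged_ripv1_response("10.0.0.0", metric=1)  # "I am the best route"
assert len(pkt) == 24  # 4-byte header + one 20-byte route entry
```

Nothing in those 24 bytes proves who sent them, which is exactly why RIPv2 added an authentication entry type.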

I chose this vulnerability because it is very current in the landscape of DDoS attacks, and Threatpost, by Kaspersky Lab, suggests it is only going to grow in the coming years.  The easiest way to stop it is to use routers running RIPv2 or above.  Unfortunately, a large number of the routers that have been compromised through this deprecated protocol come from AT&T and BellSouth, and they are widely distributed in the United States.


Chambers, C., Dolske, J., & Iyer, J. (n.d.). TCP/IP security. Department of Computer and Information Science. Retrieved from

Mimoso, M. (2015, July 1). RIPv1 reflection amplification DDoS attacks. Threatpost. Retrieved from

The latest development in Router Attacks – What you need to know about people attacking your router.

Router Attacks – DNS Redirect

Routers are vulnerable to several types of attacks.  The first is the combined DNS rebinding and cross-site request forgery (CSRF) attack, demonstrated at DEFCON 2010 as a modern attack against home routers.  The attack is fairly intricate and works in three parts.  First, the attacker must be able to modify the DNS records for a domain he controls.  Next, the attacker creates various pages on that domain and links them with DNS.  The attack begins when the victim visits the malicious site, at which point the attacker obtains the user’s public IP address.  The attacker then quickly creates a subdomain on the attack domain with two A records: one pointing to the attacking server and the other pointing to the public IP address of the victim’s router.  The web server redirects the victim’s browser to a page with JavaScript code that will execute the CSRF portion of the attack, (Trend Labs Security, 2010).  Because the attacker controls the web server, he can send TCP reset (RST) commands on demand.  Finally, when the browser executes the JavaScript and tries to connect to the temporary subdomain, the attacking server replies with an RST and ends the session; the victim’s system then tries the other IP address it knows for that hostname, which happens to be the external IP address of the victim’s router, (Trend Labs Security, 2010).  Results are channeled back to the attacking server via a portal, and the attacker can try different credentials until one succeeds and he is fully connected.


DNS Redirect Prevention

There are a few ways to protect a router from this flavor of attack.  First and foremost, use HTTPS and disable the HTTP console if the router offers that configuration setting.  Always use strong passwords for routers, and always remove factory default passwords.  Adding a firewall rule that prevents devices on the local network from sending packets to the IP block your public IP address belongs to also helps, as does keeping your firmware up to date.  Finally, a NoScript plugin can protect against malicious JavaScript, since JavaScript is a key part of the attack.


CDP Attacks

Another attack targets the Cisco Discovery Protocol (CDP), which is enabled by default on all Cisco devices.  CDP contains information about the network device such as the software version, IP address, platform, capabilities, and the native VLAN, (Popeskic, 2011), and all of it is sent in clear text.  When this information is sniffed off the VLAN, an attacker can use it to find other exploits and orchestrate an attack such as a denial of service (DoS).  CDP is also unauthenticated, meaning an attacker can craft fraudulent CDP packets and have them accepted by a directly connected Cisco device.  And if an attacker gains access to a router via SNMP or Telnet, he can map the entire topology of a network at Layer 2 and Layer 3, including IOS levels, router and switch model types, and the IP addressing schema.


CDP Prevention

The way to prevent the CDP attack is simply to disable the protocol, which is on by default.  Administrators should not focus on disabling it on a single interface, which leaves the CDP table populated elsewhere, but should disable it on the entire device (on Cisco IOS, “no cdp run” globally rather than “no cdp enable” per interface).  (Redscan, 2013) says, “CDP can be useful and, if it can be isolated by not allowing it on user ports, then it can help make the network run more smoothly.”



Figure 1. Warning message displayed on HTTP website from infected router.



Popeskic, V. (2011, December 16). CDP attacks – Cisco Discovery Protocol attack. Retrieved from

Redscan. (2013, December 19). Ten top threats to VLAN security. Redscan. Retrieved from

TrendLabs Security. (2010, August 10). Protecting your router against possible DNS rebinding attacks. TrendLabs Security Intelligence Blog. Retrieved from

TrendLabs Security. (2015, May 20). New router attack displays fake warning messages. TrendLabs Security Intelligence Blog. Retrieved from

Are DDoS attacks about to die? – 3 top projects that might make you think twice.


Denial of service (DoS) attacks and distributed denial of service (DDoS) attacks are on the rise, and many of the world’s companies have been forced to treat them as a potential threat.  (Zetter, 2016) defines a DoS attack as “an attack that overwhelms a system with data—most commonly a flood of simultaneous requests sent to a website to view its pages, causing the web server to crash or simply become inoperable as it struggles to respond to more requests than it can handle.”  Since those first appeared, DDoS attacks have emerged: attacks launched from multiple computers against one or more targets.  The attacking computers are usually part of a larger botnet whose members are spread all over the world.  DDoS attacks are harder to deal with because merely blocking an IP address is not enough; the malicious traffic is more difficult to identify.  The end result of both DoS and DDoS attacks is that legitimate users cannot use computer systems for their intended purpose.  New methods and techniques for dealing with these attacks are on the rise.

Anomaly Based detection system with Multivariate Correlation Analysis

One of the more promising solutions to DDoS attacks is the Multivariate Correlation Analysis system, or MCA.  The research paper identifies multiple mitigation techniques in the detection process, and the overall solution is built for speed of detection, a very powerful element in thwarting DDoS attacks.  The paper differentiates between the old misuse-based detection systems and newer anomaly-based systems such as its own.  Misuse-based detection systems identify malicious network traffic based on previously known attacks, and the problem with this is the inability to identify new DDoS attacks or variations of old ones.  There is also the trouble of keeping a valid signature database updated, which becomes very labor intensive.  Because of this, the cybersecurity industry went looking for a better detection system.  Anomaly-based solutions were sought out heavily: since DDoS attacks themselves are hard to identify, it is much easier to characterize normal traffic on a network and then compare current traffic against it.

Anomaly based detection system

Anomaly based detection systems have the ability to identify a baseline of normal traffic for a company and then sift out the remainder as malicious.  Unfortunately, anomaly-based systems are prone to false positives and false negatives due to lack of training and the simplistic models being used.  The new multivariate correlation analysis system has shown promise in solving this issue.  Its framework can be broken into three distinct levels:

  1. Creates a normalization model record from internet traffic to the internal network. This level takes incoming traffic data and passes it to level 2.
  2. Multivariate correlation analysis is applied with triangle area map generation. In this step the normalization model records from level 1 are compared to find correlations.
  3. Decision making is the final level, which separates legitimate record sets from DDoS attack (illegitimate) records.
    1. The training phase builds a normal profile of traffic.
    2. The test phase builds profiles of individual observed traffic records.

Triangle area map mitigation technique

The triangle area map technique was used to speed up the MCA process by allowing two triangle area map records to be compared quickly.  If one pictured a triangle area map record as an image, any difference between two records that were not identical would be reflected in the lower part of the triangle.  This lets the system focus on inspecting only that lower part, which decreases the amount of data that needs to be analyzed and queried.  The resulting comparison is roughly two thirds faster than running the normal MCA process.
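The idea can be sketched in a few lines. This is a very rough rendition of a triangle area map in the spirit of the MCA work: for every pair of features in a traffic record, store the area of the triangle the pair spans; since area(i, j) equals area(j, i), only the lower triangle is kept and compared. The feature values are invented for illustration.

```python
# Toy triangle area map (TAM): pairwise triangle areas over a record's
# features, kept only for the lower triangle (j < i) to halve the work.
def triangle_area_map(features):
    """Lower-triangular map of pairwise triangle areas for one record."""
    n = len(features)
    return {(i, j): abs(features[i] * features[j]) / 2.0
            for i in range(n) for j in range(i)}

def tam_distance(tam_a, tam_b):
    """Sum of absolute differences over the lower triangle only."""
    return sum(abs(tam_a[k] - tam_b[k]) for k in tam_a)

normal = triangle_area_map([12.0, 3.0, 7.5])    # e.g. pkts/s, flows, bytes/pkt
observed = triangle_area_map([12.1, 3.1, 7.4])  # close to the normal profile
flood = triangle_area_map([900.0, 3.0, 1.2])    # flood-like record

assert tam_distance(normal, observed) < tam_distance(normal, flood)
```

Because each area couples two features, a record that distorts the correlation between features stands out even when individual values look plausible.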


Mahalanobis Distance mitigation technique

The Mahalanobis distance (MD) mitigation technique allows the solution to be more accurate when identifying variations.  The model can be explained with a conceptual analogy to spices in a recipe.  If you plotted the volumes of all the different spices on an x and y axis and scaled them all up together, the flavor profile of the recipe would not change.  However, if you added more of just one ingredient, say salt or butter, you would definitely taste the difference.  The Mahalanobis distance works the same way: rather than focusing predominantly on overall volume, it lets variation in the critical indicators stand out, the way one overpowering ingredient changes a distinct flavor.

While the triangle area map is used to identify similarities between record sets faster, MD is used to identify the dissimilarity between traffic records.  (Tan, Jamdagni, He, Nanda, & Liu, 2011) say, “This is because MD has been successfully and widely used in cluster analysis, classification and multivariate outlier detection techniques.”
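A hedged sketch of the MD idea follows. Two features and hand-made training data keep the linear algebra small enough for pure Python; a real system would use many more features, and every number here is invented.

```python
# Squared Mahalanobis distance of a 2-feature point from a "normal" profile.
# The 2x2 covariance inverse is inlined so no external library is needed.
def mean(xs):
    return sum(xs) / len(xs)

def mahalanobis2(x, data):
    """Squared MD of point x from the profile built on data."""
    mx = mean([p[0] for p in data])
    my = mean([p[1] for p in data])
    n = len(data)
    # Sample covariance matrix [[a, b], [b, c]].
    a = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    c = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    b = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    det = a * c - b * b
    dx, dy = x[0] - mx, x[1] - my
    # (dx, dy) * inverse(cov) * (dx, dy)^T with the 2x2 inverse inlined.
    return (c * dx * dx - 2 * b * dx * dy + a * dy * dy) / det

# Normal profile: packets/s and mean packet size move together.
profile = [(100, 500), (110, 520), (95, 480), (105, 510), (90, 470)]
assert mahalanobis2((102, 505), profile) < mahalanobis2((300, 60), profile)
```

Unlike plain Euclidean distance, MD accounts for how the features co-vary, which matches the "one ingredient out of proportion" intuition above: a record with a high packet rate but tiny packets is far from the profile even if each value alone is within range.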


Tracemax DDoS System

The Tracemax system is another project that shows potential, and it takes a slightly different approach to detecting DDoS attacks.  Tracemax is software installed on downstream devices throughout the internet, for instance at an ISP.  It can be installed at the customer level, but at the ISP level it would allow the ISP to blacklist attackers or bots, identify botnets within the ISP’s network, or verify malicious ISPs, (Hillmann, Tietze, & Rodosek, 2015).  I selected this research because it clearly addresses one of the most important problems in cybersecurity at the moment: attribution.  Identifying the initiator of an attack allows law enforcement, government, and state officials to take further action.

The devices running the Tracemax software label each packet and trace its exact path using a generated abstract ID stored in the options header of the packet.  This allows Tracemax to handle more hops than any other tracing tool known to the general public to date.  See table 1.  The benefits of using the Tracemax software are as follows:

  1. Single packet traceback, which allows users to detect sophisticated attackers.
  2. Detecting and differentiating multiple attackers.
  3. Fast path reconstruction, even during an attack, with short attack detection time and fast preventive actions.
  4. Minimal additional network load and performance impact.
  5. Ability to trace paths of more than 50 hops.
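The labeling scheme behind these benefits can be simulated in miniature: each router a packet traverses appends a short abstract ID to the packet's options field, and the receiver reconstructs the full path from those IDs. The per-router lookup table is my assumption for illustration; the paper's actual encoding is far more compact.

```python
# Toy simulation of Tracemax-style path labeling and reconstruction.
ROUTER_IDS = {"isp-edge": 0x1A, "core-1": 0x2B, "core-2": 0x3C, "cust-gw": 0x4D}
ID_TO_ROUTER = {v: k for k, v in ROUTER_IDS.items()}

def forward(packet, router_name):
    """A Tracemax-enabled hop stamps the packet's options and passes it on."""
    packet["options"].append(ROUTER_IDS[router_name])
    return packet

def reconstruct_path(packet):
    """At the destination, map the stamped IDs back to the hop sequence."""
    return [ID_TO_ROUTER[i] for i in packet["options"]]

pkt = {"payload": b"...", "options": []}
for hop in ["isp-edge", "core-1", "core-2", "cust-gw"]:
    pkt = forward(pkt, hop)

assert reconstruct_path(pkt) == ["isp-edge", "core-1", "core-2", "cust-gw"]
```

The key point is that reconstruction needs only this one packet, not the spoofable source address, which is what makes single-packet traceback possible.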


Tracemax preventive system

            As a preventive measure, Tracemax is installed on all devices the packets travel through.  Tracemax can then detect DDoS attacks against small networks and alert the ISP to malicious packets entering its networks, so that the ISP can take the necessary steps to deny the malicious packets and block malicious outside nodes.  This approach could prevent new DDoS attacks from spawning on different internet nodes, and it would also allow an ISP to identify DDoS attacks coming from its own networks.


Tracemax mitigation technique

Tracemax’s own labeling system is its mitigation technique.  DDoS attacks are, for the most part, launched from spoofed IP addresses, and the packets travel to the target over varying paths, so dynamic paths and spoofed source addresses cannot be relied upon.  Instead, Tracemax looks at the options field for the abstract IDs appended as the packet travels from device to device; with this method it is very simple to reconstruct a malicious packet’s full path at the destination.


Tracemax alternate mitigation technique 

A slightly different mitigation property is that if a traced IP packet were to fall into the wrong hands and be reverse engineered, it would not give up the ISP’s network topology, thanks to the abstract ID system.  This is a real concern, as many packets can be reverse engineered at some point.  Because Tracemax not only labels each packet with an abstract ID but can also change its entire abstraction method, the packet is useless for determining where the trace came from to anyone without the software.


Hybrid Intrusion Detection System for DDoS Attacks

The solution to DDoS attacks is proving extremely difficult.  The previous projects focused predominantly on DDoS and DoS attacks against general networks, but it isn’t practical to ignore wireless networks.  This next research project takes a best-of-both-worlds approach.  As the name suggests, the Hybrid Intrusion Detection System (H-IDS) combines the misuse-database, or signature-based, approach with the anomaly-based approach.  The joining, controlling, centralized node is referred to as the hybrid detection engine (HDE). See figure 1.  The benefit of this system is that the low false-positive rate of signature-based IDS is combined with the flexibility of pattern recognition, which increases speed and improves efficiency.  The HDE is defined as follows:

  1. Collecting the outputs of the anomaly-based detector and the signature-based detector
  2. Calculating the attack probability
  3. Controlling the security levels of the detectors
  4. Updating the anomaly detector’s normal network model
  5. Updating the signature-based detector’s rule set


Detection method with SNORT

The HDE uses SNORT as its signature-based detection system.  SNORT is widely used in the industry and can run in three modes: sniffer, packet logger, and network IDS.  For the implementation of H-IDS, the periodically updated rules version can be used.  The HDE uses SNORT but controls SNORT’s sensitivity levels.


Anomaly Mitigation Expectation Maximization Algorithm

The key mitigation strategy differentiating the HDE from other mixed-model systems is its algorithm for maximum-likelihood estimation, a hard problem in mixed-model systems.  The algorithm used is the Expectation Maximization (EM) algorithm, chosen over alternatives such as gradient ascent or Newton’s method.  EM enables the HDE to estimate the parameters of a probabilistic model from incomplete data, and it is especially efficient at doing so.  That matters here, because combining models from both signature-based and anomaly-based detection systems is precisely an incomplete-data problem.
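As a minimal illustration of EM, the sketch below fits a two-component 1-D Gaussian mixture, the kind of estimation the HDE relies on. The data and initialization are invented so the example stays self-contained; a real HDE would work with far richer models.

```python
# EM for a two-component 1-D Gaussian mixture, pure standard library.
import math

def em_two_gaussians(xs, iters=50):
    w = [0.5, 0.5]            # mixture weights
    mu = [min(xs), max(xs)]   # crude initialization
    var = [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances from responsibilities.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return w, mu, var

# "Normal" traffic rates near 10 mixed with "attack" rates near 100.
data = [9.5, 10.2, 10.0, 9.8, 10.4, 99.0, 101.0, 100.5, 98.7]
w, mu, var = em_two_gaussians(data)
assert abs(mu[0] - 10.0) < 1.0 and abs(mu[1] - 100.0) < 2.0
```

Each iteration alternates guessing which component generated each point (E-step) and re-fitting the components to those soft assignments (M-step), which is exactly how EM handles the "incomplete data" of unlabeled traffic.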


OR Mitigation Method

            In most multi-detector systems there is the possibility of one detector detecting an intrusion while the other doesn’t.  To alleviate this, the HDE uses an OR relation, meaning it signals an intrusion whenever either detector finds one, whether through pattern recognition or through SNORT’s signature-based detection.  This ultimately gives a best-of-both-worlds approach to the DDoS attack scenario.
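The OR relation itself is trivial to express. In this sketch the two detector stubs and their thresholds are placeholders I invented, not the paper's detectors; the point is only the fusion rule.

```python
# OR-style decision fusion: alert when either detector fires.
def signature_alert(packet_payload: bytes) -> bool:
    """Stand-in for SNORT: flag payloads matching a known attack string."""
    return b"ATTACK" in packet_payload

def anomaly_alert(request_rate: float, baseline: float = 100.0) -> bool:
    """Stand-in for the anomaly detector: flag rates far above baseline."""
    return request_rate > 3 * baseline

def hde_decision(packet_payload: bytes, request_rate: float) -> bool:
    return signature_alert(packet_payload) or anomaly_alert(request_rate)

assert hde_decision(b"ATTACK cmd", 50.0)        # signature catches it
assert hde_decision(b"GET / HTTP/1.1", 900.0)   # anomaly catches it
assert not hde_decision(b"GET / HTTP/1.1", 80.0)
```

The trade-off is visible even in this toy: OR fusion maximizes detection coverage at the cost of inheriting the false positives of both detectors, which is why the HDE also tunes each detector's sensitivity.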



Having covered three different approaches to detecting, preventing, and mitigating DDoS and DoS attacks, it is easy to be excited about all three.  Tracemax is a very bold concept, going straight after the attribution problem in cybersecurity, but it falls apart at a realistic implementation: the software would need to be adopted globally for the approach to work, so Tracemax is very far from becoming a reality.  The most feasible are the MCA anomaly-based detection system and the hybrid intrusion detection system, H-IDS.  The H-IDS works, in theory, by bringing the best of both worlds together.  Speed of detection is a critical part of the equation, and since both systems target it, the two would need to be compared head to head.  In the H-IDS research paper, the researchers tested against a standard anomaly-based detection system; with the MCA component added to a normal anomaly system, it would be interesting to see the results.  We can only conclude that the speed would be better in the MCA anomaly-based detection system, while accuracy is only slightly better in the H-IDS system. (Brox, 2002) notes, “Anomaly testing requires trained and skilled personnel, but then so does signature-based IDS. And, anomaly testing methods can be guaranteed to provide far more effective protection against hacker incidents.”  Ultimately, speed isn’t the only factor; the decision has to be based on a company’s line of business and size.  Both solutions would catch a DDoS and be able to block access, but how much maintenance is required due to false positives?  That is the deciding question, and it can only be answered on a company-by-company basis when adopting one of the methods covered.



Table 1. Tracemax compared to other trace programs.


Figure 1. Model of hybrid IDS system


Grace, C. J. C., Karthika, P., & Gomathi, S. (2016). A system for distributed denial-of-service attacks detection based on multivariate correlation analysis.

Hillmann, P., Tietze, F., & Rodosek, G. D. (2015). Strategies for Tracking Individual IP Packets Towards DDoS. PIK – Praxis Der Informationsverarbeitung Und Kommunikation, 38(1/2), 15-21. doi:10.1515/pik-2015-0010

Somani, G., Gaur, M. S., Sanghi, D., Conti, M., & Buyya, R. (2015). DDoS attacks in cloud computing: Issues, taxonomy, and future directions.

Cepheli, Ö., Büyükçorak, S., & Karabulut Kurt, G. (2016). Hybrid intrusion detection system for DDoS attacks. Journal of Electrical and Computer Engineering, 2016, 1-8. doi:10.1155/2016/1075648

Brox, A. (2002, May 1). Signature-based or anomaly-based intrusion detection: the practice and pitfalls. Retrieved from

Arbor Networks secures patents for DDoS detection. (2015). Computer Security Update, 16(7), 4-6.

Zetter, K. (2016, January 16). Hacker lexicon: What are DoS and DDoS attacks? Wired. Retrieved from

Manohar, R. P., & Baburaj, E. (2016). Detection of stealthy denial of service (S-DoS) attacks in wireless sensor networks. International Journal of Computer Science and Information Security, 14(3), 343-348.



Securing Databases


Database security is very important for any organization or company to consider: the database is where an entity’s most valuable data is stored, and personally identifiable information has been stolen from databases over and over in the last decade.  (Blackhat, n.d.) says, “By one estimate, 53 million people have had data about themselves exposed over the past 13 months.”  That was in 2006, after large data breaches at Bank of America, Time Warner, and Marriott International; today you can only imagine there are many more.  A few things to consider when securing any database or distributed system: separate the database from the web servers, encrypt stored files, and keep patches current.

Keeping the database servers separate from the web servers is a great help.  Software, when installed on a server, will usually include a database and install it on that same server.  If an attacker can compromise the administrator account of the web server, he then has access to the database files.  (Applicure Technologies, n.d.) suggests, “instead, a database should reside on a separate database server located behind a firewall, not in the DMZ with the web server.”  Agreed, this increases the complexity of the installation, but the security benefits are well worth it.

Another factor to consider is how the data will be stored.  Encryption is an option for all data but will decrease performance in certain areas.  Knowing the kind of data, say a car’s color, make, and model versus its VIN and license plate number, helps determine what needs to be encrypted and what does not; depending on the business, compliance regimes such as HIPAA, SOX, or PCI may make this decision for us.  Also consider encrypting website files: a web configuration file, for instance, may contain connection information for the databases the website uses, and many times this is in clear text. (Applicure Technologies, n.d.) says, “WhiteHat security estimates that 83 percent of all web sites are vulnerable to at least one form of attack.”  These types of attacks are very frequent.
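One common way to keep database credentials out of clear-text configuration files is to read them from the environment at startup. The sketch below shows that pattern; the variable names and the fail-loudly behavior are my choices for illustration, not a prescription from the articles cited here.

```python
# Hedged sketch: load DB credentials from environment variables instead of
# a clear-text config file sitting next to the web application.
import os

def load_db_credentials():
    """Fetch DB credentials from the environment; fail loudly if missing."""
    try:
        return {
            "host": os.environ["DB_HOST"],
            "user": os.environ["DB_USER"],
            "password": os.environ["DB_PASSWORD"],
        }
    except KeyError as missing:
        raise RuntimeError(f"database credential not set: {missing}") from None

# Simulate the deployment environment providing the secrets.
os.environ.update(DB_HOST="db.internal", DB_USER="app", DB_PASSWORD="s3cret")
creds = load_db_credentials()
assert creds["host"] == "db.internal" and "password" in creds
```

This does not replace encryption of stored data, but it removes one of the easiest wins for an attacker who has only managed to read files on the web server.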

Lastly, keep databases patched regularly.  Many databases have third-party plugins that create additional entry points.  At the time of their publication there were 8 DB2, 2 Informix, and more than 50 Oracle 0-day vulnerabilities, (Blackhat, n.d.).  So the general consensus is to keep the number of third-party vendors and databases to a minimum.

Overall, there is no exact method of database security.  It is a practice, and everyone’s implementation will be different based on the needs of each business and the regulatory requirements that business is subject to.



Figure 1. Shows the cost of different types of data on the blackmarket.


Figure 2. Shows the top companies with data breaches in 2005.



Applicure Technologies. (n.d.). Best practices for database security. Retrieved from

Blackhat. (n.d.). Hacking databases for owning your data. Retrieved from