Application Denial of Service Vulnerability

26 August 2017

A Denial-of-Service (DoS) attack is deployed by adversaries to render network resources or computer systems unavailable to authorized users, typically by flooding them with heavy volumes of unusual traffic. Viewed through the three pillars of security (confidentiality, integrity, and availability), a DoS attack primarily impacts availability; however, it can also leak sensitive information or resources, thereby impacting confidentiality as well.

Attackers use many techniques, methods, and tools to carry out DoS or DDoS attacks. Some of these methods are the ones we have grown accustomed to reading about on security blogs and news sites: bandwidth-consumption attacks that exhaust all available network bandwidth of the victim, or crafted packets that take advantage of existing vulnerabilities, such as MAC flooding on a switch.

What we do not often hear about in the news are application DoS attacks. These attacks exhaust resources through disproportionately large consumption of data that overwhelms available memory, disk space, or processor time. Examples include inserting many keys with the same hash code into a hash table, triggering worst-case performance, or consuming excessive disk space.
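To make the hash-table example concrete, below is a minimal, hypothetical Java sketch of the technique; all class and method names are invented for illustration. Every key reports the same hash code, so the map degenerates into a single bucket and each insert must scan all previous entries. (Modern JDKs blunt this particular attack by converting overlong buckets into balanced trees, but the broader principle of algorithmic-complexity DoS still applies.)

import java.util.HashMap;
import java.util.Map;

public class HashFloodSketch {
  // Every instance collides into the same HashMap bucket.
  static final class CollidingKey {
    private final int id;
    CollidingKey(int id) { this.id = id; }
    @Override public int hashCode() { return 42; } // constant hash code
    @Override public boolean equals(Object o) {
      return (o instanceof CollidingKey) && ((CollidingKey) o).id == id;
    }
  }

  public static void main(String[] args) {
    Map<CollidingKey, Integer> table = new HashMap<>();
    long start = System.nanoTime();
    for (int i = 0; i < 50_000; i++) {
      table.put(new CollidingKey(i), i); // each insert scans the whole bucket
    }
    System.out.printf("50,000 colliding inserts took %d ms%n",
        (System.nanoTime() - start) / 1_000_000);
  }
}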

FIO04-J. Release resources when they are no longer needed.

Below is an example from the SEI CERT Oracle Coding Standard for Java showing how a resource-exhaustion vulnerability found in Java code can be mitigated.

The Java garbage collector reclaims memory that is no longer referenced, making room for new allocations. However, the garbage collector cannot free non-memory resources such as open files and database connections, so failing to release those resources explicitly can enable resource-exhaustion attacks, with remnants of used resources left behind. To mitigate this, streams should be closed promptly after use.

Noncompliant Code Example (File Handler)

public int processFile(String fileName)
                       throws IOException, FileNotFoundException {
  FileInputStream stream = new FileInputStream(fileName);
  BufferedReader bufRead =
      new BufferedReader(new InputStreamReader(stream));
  String line;
  while ((line = bufRead.readLine()) != null) {
    sendLine(line);
  }
  // Noncompliant: the stream is never closed, and if readLine() throws,
  // the file handle leaks.
  return 1;
}

Compliant Solution Example

try {
  final FileInputStream stream = new FileInputStream(fileName);
  try {
    final BufferedReader bufRead =
        new BufferedReader(new InputStreamReader(stream));
 
    String line;
    while ((line = bufRead.readLine()) != null) {
      sendLine(line);
    }
  } finally {
    if (stream != null) {
      try {
        stream.close();
      } catch (IOException e) {
        // Forward to handler
      }
    }
  }
} catch (IOException e) {
  // Forward to handler
}

The compliant code releases all resources regardless of any exception that might occur; in this snippet, the FileInputStream is closed in the finally block. Separately, when an object output stream is reused across writes, take steps to reset the stream after each write: the reset clears the stream's internal object cache, mitigating memory and resource leaks during serialization.
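On Java 7 and later, the same guarantee can be written more compactly with a try-with-resources statement, which automatically closes each declared resource, in reverse order, even when an exception is thrown. Below is a minimal sketch of the same logic; sendLine is assumed from the example above.

public int processFile(String fileName) throws IOException {
  // Both resources are closed automatically on every exit path.
  try (FileInputStream stream = new FileInputStream(fileName);
       BufferedReader bufRead =
           new BufferedReader(new InputStreamReader(stream))) {
    String line;
    while ((line = bufRead.readLine()) != null) {
      sendLine(line);
    }
    return 1;
  }
}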

The Over Trusting Application User and the Need for Cyber Hygiene

27 July 2017

Growing up, many of us heard our parents say things like “never get in a car with a stranger” and “look twice before you cross the road”. This was an effort to teach us early in life that the world can be unforgiving to those who lack a sense of awareness. Today the world is at our fingertips, needing only the push of a button; it follows us into our bathrooms, kitchens, and our kids’ bedrooms. In a digital social society, we interact with applications each time we access the internet for online services: to conduct banking transactions, shop for the best deals, look for jobs, and stream music or videos. Applications are intertwined with our lives, but sadly a large number of us lack basic cyber hygiene; we have become increasingly trusting of the people making the applications.

Stack Overflow, also known as “the world’s programmer community”, with roughly 4.7 million users, conducted a 45-question survey asking its users about themselves, technology, jobs, and more. The answers of 56,000 people show that self-taught developers dominate technology, accounting for 67% of the developers who responded. That insight tells us there is a high likelihood that the applications we use today are designed, developed, tested, and deployed by developers who have limited application security awareness and trust users to protect themselves.

The simple truth is that this problem of application security stems from our pursuit of convenience. It takes a lot of work to constantly be on the lookout for new software updates, back up data, and maintain passwords for the numerous applications we use each day, among other cyber hygiene practices. On the other side of the coin, it is more convenient for application developers to be concerned only with functionality, with the user’s experience, meeting deadlines, and deploying applications as quickly as possible being the main priorities. Software security assurance is often hard, long, and boring, and can present a hurdle to those priorities.

Cyber Hygiene Tips: Protect Yourself as a Cyber Citizen

Limit Personally Identifiable Information on Social Media.

If you already have a social media account or are creating one, enter only the basic information required to get the account activated, and never provide excessive information that could put you at risk; this includes photographs of sensitive documents. If you have already entered information like date of birth, home address, location details, and mobile numbers, set it to hidden, or better yet, remove it from your profile.

Use $tr0ng3r passwords and change them at least once per year.

When choosing a password, make it long, strong, and unique to that account. Use uncommon word phrases instead of single words. It is entirely up to you to protect your account, so never reuse the same password across accounts.

Before clicking anything, stop, think and check if it is expected, valid and trusted.

Stop. Think. Ask yourself if the message was expected. Do you know the person who sent it, and is it really from them? Could it be a phishing email: a message that looks exactly like one you might receive from a familiar organization but is really a setup to get your information?

Keep computer software up-to-date using auto-update.

Security patches are released often and are essential to your protection. Check that you have the latest version of the software available; new versions normally carry the latest security fixes alongside new development.

Just as secure applications are designed, developed, tested, and deployed to never trust you, the user, you as a user should never trust the application or the developers who built it. Think about the information you are giving the application: is it more than what is required to carry out an action? What personal information are you giving away on social media? Before downloading software, streaming videos, or clicking a link in your email or messaging app, take steps to ensure it comes from a verified source.

Become Cyber Aware – Start Practicing Cyber Hygiene

Bridge the Gap of Understanding Between Human Endpoints and Information Security Awareness

01 May 2017

Over the last two decades or so, the Information Technology industry has transitioned away from an over-reliance on physical infrastructure and physical endpoints to embrace and cater to the habits of a dynamic, always on-demand customer. Emerging technologies such as cloud computing infrastructure and the Internet of Things (IoT) are helping organizations reduce costs and add flexibility to their operations. McAfee Labs (2016) predicts that trust in the cloud is “going to increase, leading to more sensitive data and processing in the cloud”. Consumer demand is going to drive investments in developing and integrating IoT into organizations’ product portfolios.

Adversaries are adapting to the new information technology landscape by leveraging emerging technologies, tools, and techniques to exploit new weak points in defense systems. Advanced persistent threats (APTs) are now more common, and mobile and wireless security is actively targeted as a weak point. DDoS attacks are now cloud-based, leveraging virtual servers to generate ultra-high-bandwidth attacks. State-sponsored espionage reached the international spotlight during the 2016 U.S. election, heightening awareness of the need to safeguard critical data from politically or financially motivated threats. Until new methods of authentication are introduced, password management remains a major challenge: putting in place and enforcing stronger user-controlled passwords.

The heavy reliance on emerging technologies has introduced various challenges in securing the confidentiality, integrity, and availability of critical infrastructure. Of the three pillars of security, confidentiality of data poses the greatest challenge, the one that keeps CISOs up at night. It is important to note that acknowledging confidentiality as a major challenge does not answer what the real security challenge is; it only addresses the effect of a cause. In this white paper we make the case that human endpoints are, and will remain, one of the biggest weaknesses across most technologies for the foreseeable future. We outline an approach to combat this weakness: exchanging cyber threat information within a sharing community so that organizations can leverage collective knowledge to increase endpoint situational awareness.

Human Endpoints Are Unaware

People are the biggest point of vulnerability in any organization, and the endpoint is where they interact with whatever an attacker is after: intellectual property, credentials, cyber ransom, and so on. People by nature are trusting. They enjoy connecting and staying up to date on the latest trends. They click on links in emails, tweets, or Facebook posts all too easily, and this includes security experts. People are also responsible for the policies and procedures in place at the enterprise, whether forced upon them by regulatory bodies or adopted voluntarily for security.

Millennials entering the workforce are reshaping how office environments are designed. The desire for open office spaces for creativity and collaboration, the increased number of people telecommuting, and Bring Your Own Device (BYOD) mobile devices have reduced control of information and greatly increased the attack surface. Everyday human errors can cause breaches that expose millions of people to potential harm. “Leaving devices unattended, sharing passwords or accidentally emailing or peer-to-peer sharing of information to the wrong people are entry points attackers aim exploit” (Kam, 2016). The Federal Times indicated that “at least 50 percent of breaches and leaks are directly attributed to user error or failure to provide proper cyber hygiene” (Boyd, 2014).

The biggest risk is a lack of awareness on the part of users. Even if the organization has good security processes and training, and even if people faithfully follow security procedures at the workplace, they are typically unaware that the decisions and actions they take in their private lives can place them and their employers at risk. For instance, if employees bring their own devices to work, their failure to apply an OS update with important security patches can place networks at risk. Another instance is employees reusing the same password on personal and work accounts, or posting comments on social media sites. These examples and many more stem from a lack of awareness.

Raise Endpoint Situational Awareness

We need to raise the awareness of human endpoints. This begins by bringing together the cyber threat knowledge organizations already have and exchanging it within a sharing community. The collective knowledge, experience, and capabilities of that sharing ensure the community has a more complete picture of the threats an organization may face. Leveraging this information and knowledge, “an organization can make informed decisions pertaining to defensive posture, threat detection techniques, and mitigation strategies” (Johnson, Badger, Waltermire, Snyder and Skorupka, 2016). By correlating and analyzing cyber threat information, an organization can tailor that information to department and organizational role, making human endpoints situationally aware of the threat landscape. “Until you make a human cyber security aware, no data is fully secure. The idea is to prevent an attack rather than reacting once it happens” – Akshat Jain of Cyware (Ranipeta, 2017).

Call to Action!

The world of information security does not lack for challenges: the never-ending updates and patches in response to incremental changes by adversaries, and the major software releases that introduce new features but also open unexpected vulnerabilities. It is difficult to keep up with the cyclical nature of information security. Our reliance on emerging technologies such as cloud computing and IoT places particular challenges on the confidentiality, integrity, and availability of critical infrastructure.

The increase in attack surface has raised particular concern over how effectively organizations can secure the confidentiality of data, both at rest and in transit. People are the biggest point of vulnerability in any organization and are the endpoints where attackers interact in hopes of stealing intellectual property, credentials, or cyber ransom. Adversaries are adapting to the new information technology landscape by leveraging emerging technologies, tools, and techniques to exploit new weak points.

We need to raise the awareness of human endpoints. People should be armed with timely, relevant cyber threat information so that when they make decisions and take actions in their private lives and working environments, they understand the implications both for themselves and for their employers. The private and public sectors should gravitate closer together to create communities that exchange cyber threat knowledge, experience, and capabilities. Organizations in both sectors should also take an active role in ensuring employees are adequately trained by allocating time during the year for security training. These steps will reduce the gap between human endpoints, non-technical and technical alike, and a working understanding of information security.

Citations

Boyd, Aaron. (2014). “The user knows nothing: Rethinking cybersecurity”. Federal Times. Retrieved from http://www.federaltimes.com/story/government/cybersecurity/2015/04/14/the-user-knows-nothing/25776507/

Johnson, Chris, Badger, Lee, Waltermire, David, Snyder, Julie and Skorupka, Clem. (2016). “Guide to Cyber Threat Information Sharing”. National Institute of Standards and Technology. Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-150.pdf

Kam, Richard. (2016). “The Biggest Threat to Data Security? Humans, Of Course”. IAPP. Retrieved from https://iapp.org/news/a/the-biggest-threat-to-data-security-humans-of-course/

McAfee Labs. (2016). “McAfee Labs explores top threats expected in the coming year”. McAfee Labs – Intel Security. Retrieved from https://www.mcafee.com/us/resources/reports/rp-threats-predictions-2017.pdf

Ranipeta, Shilpa S. (2017). “How human learning, and not just machine learning, can keep us cyber-secure”. The News Minute – Cyber Security. Retrieved from http://www.thenewsminute.com/article/how-human-learning-and-not-just-machine-learning-can-keep-us-cyber-secure-58439

Taylor, Brian. (2016). “Endpoint security: People are the biggest source of vulnerability”. TechRepublic. Retrieved from http://www.techrepublic.com/article/endpoint-security-people-are-the-biggest-source-of-vulnerability/

OWASP Testing Guide - 'Test Early, Test Often'

14 April 2017

The “test early, test often” strategy improves software security because testing for software vulnerabilities is carried out throughout the entire development life cycle. Testing does not wait until the software is finished; waiting until then is often ineffective and cost-prohibitive. With “test early, test often”, a bug detected early in the Software Development Life Cycle (SDLC) can be handled appropriately at that exact moment, cutting costs and increasing effectiveness.

As the OWASP Testing Guide outlines, an effective testing program should have components that test people, process, and technology. Implementing the “test early, test often” strategy greatly improves software security: people working on a software product are tested early and often to make sure they have adequate education and awareness for their respective roles, and for their understanding of processes, to ensure they can follow security policies and standards. Testing technology early and often, to make sure processes are effectively implemented, further improves software security (Muller and Meucci, 2016).

Sources:

Muller, Andrew and Meucci, Matteo. (2016). OWASP Testing Guide 4.0. Creative Commons (CC) Attribution Share-Alike. Retrieved from https://www.owasp.org/index.php/OWASP_Testing_Guide_v4_Table_of_Contents (On 20th 2017)

Why a business should consider migrating its enterprise IT systems into the cloud infrastructure

26 March 2017

You work for a startup that in a short amount of time grew into a medium-sized company by acquiring auctioned vehicles, repairing them, and renting them out to people at below-market rates. The company is called Auto Service LLC. Auto Service LLC operates in the automotive service industry, has carved out a niche in the middle eastern region of the country, and plans to expand into the southern region in the coming months. Since its customers are free to drive the vehicles anywhere in the country, it tracks each vehicle in the fleet by GPS and also gathers and stores vehicle diagnostics data, including how fast the vehicle is traveling, how much gas is in the vehicle, where the vehicle is located, the condition of the vehicle, and who is driving it at a given time.

Auto Service LLC manages all computing resources in house (i.e., networks, servers, storage, applications, and services). It has dedicated staff around the clock who monitor and enforce security controls on facilities, networks, applications, and storage access, among other responsibilities. For example, a physical security perimeter is in place to prevent unauthorized access, along with physical entry controls to ensure only authorized personnel have access to areas containing sensitive information. Auto Service LLC also conducts extensive background checks on its personnel before granting them access to secure areas.

We prepare a recommendation to board members to integrate cloud computing architecture into Auto Service LLC’s business operations. The objective is to outline a specific service model and deployment model(s) that fit the needs of the organization, discussing possible security concerns and risks and how they can be mitigated, in an effort to convince the board to get on board.

Migrating to Cloud Computing Infrastructure

Auto Service LLC plans to expand its operations into the southern states of the country, which will introduce challenges in securely managing critical infrastructure while maintaining the confidentiality, integrity, and availability of data and services. To maintain availability, cold, warm, and hot sites would have to be established, which in the long run can be very costly to the organization’s bottom line. In addition, regardless of whether resources are being used or not, computing infrastructure (networks, servers, storage, applications, and services) has to be operational 24/7, again impacting the organization’s margins.

Integrating cloud computing, in particular Infrastructure as a Service (IaaS), will turn the organization into a cloud customer. The benefits IaaS brings will be felt from day one. No capital investment is required when using a service provider because, as discussed before, servers, storage, and network hardware are located and maintained at the provider’s site. This leads to another benefit: Auto Service LLC will pay for only what is needed, providing the flexibility to scale as the company grows and scale back if the company downsizes. IaaS consumers are provisioned with the capabilities to access these computing resources, and are billed according to the amount or duration of the resources consumed, such as CPU hours used by virtual computers, volume and duration of data stored, network bandwidth consumed, and number of IP addresses used for certain intervals (Workgroup leaders, 2012). Lastly, business continuity is maintained given the high-availability nature of IaaS, consolidating disaster recovery infrastructure and further reducing costs while increasing manageability (StateTech Staff, 2014).

Cloud Computing Security Concerns

Migrating to cloud computing infrastructure brings with it a share of security concerns that have to be discussed. Using IaaS means security responsibilities are split between Auto Service LLC and the cloud provider. The cloud provider is responsible for supplying basic IT resources: the physical infrastructure housing machines, networks, and disks. Auto Service LLC, however, is responsible for providing and maintaining the operating system and the entire software stack needed to run its applications. Security concerns particularly arise when we discuss how the provider will maintain the confidentiality, integrity, and availability of data and services. We trust that security policy controls will be put in place to enforce best practices, but as we know, that effort is never guaranteed. For example, it can be unclear what type of access controls a provider implements to restrict unauthorized access, how rigorously personnel are background-checked to mitigate malicious insider attacks, or what steps are taken to ensure availability is not interrupted.

Security concerns do not apply only to the cloud provider, because security responsibilities are shared in an IaaS service model. Security means nothing if the cloud provider has hardened its posture while the customer takes a hands-off approach. Auto Service LLC has to ensure patch management practices on its operating systems and software stack are up to date and follow security best practice. Robust incident management plans should be clearly documented, with a dedicated team provided adequate tools and training. Each staff member should undergo annual security training tailored to their role in the organization.

How to Address Security Concerns

Steps can be taken to address concerns about a cloud provider’s ability to ensure the confidentiality, integrity, and availability of data and services. These steps vary in nature, but each requires a holistic approach from the customer’s point of view. Auto Service LLC should focus on network, physical environment, auditing, authorization, and authentication considerations when assessing a provider’s capabilities. Questions to ask are listed below:

  1. Is it clear whether responsibility for applications running on cloud infrastructure lies with the consumer or with the provider?
  2. Where the responsibility lies with the provider, does the SLA make the provider's responsibilities clear and require specific security provisions to be applied to each application and all data?
  3. Does the service provider have facilities in place to ensure continuity of service in the face of environmental threats or equipment failures?
  4. Can the cloud service provider demonstrate appropriate security controls applied to their physical infrastructure and facilities?
  5. Does the network provide the consumer with logging and notification?
  6. Is consumer network access separated from provider network access?
  7. Is there separation of network traffic in a shared multi-tenant provider environment?


Note: These questions are only a starting point; more should be asked about effective governance, risk and compliance, management of people, roles and identities, protection of data and information, and privacy controls, among others.

Cloud Vendor Best Suited for Auto Service LLC

The cloud Infrastructure-as-a-Service space is undergoing a shift in strategies among its market leaders and visionaries, who include Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, CenturyLink Cloud, vCloud Air from VMware, and IBM (SoftLayer). These providers may not be the best fit for Auto Service’s specific needs, and they may not serve some use cases at all, but they have a track record of successful delivery, significant market share, and many customers who can provide references (Olavsrud, 2015). For our recommendation, we compared Amazon Web Services and Microsoft Azure and believe AWS is best suited for Auto Service’s fleet management enterprise. Looking at their histories, AWS’s roots are primarily IaaS, given its storage footprint and the demands of supporting Simple Storage Service, Linux, Firefox and SimpleDB, while Azure’s initial focus was primarily PaaS, with the launch of SQL 2008, SharePoint and .NET integration. AWS is VM-first while Azure is services-first. Azure has a clear advantage for customers looking for PaaS, but our focus is on IaaS (Peterson and Goodenough, 2015).

Deployment model(s)

To choose the deployment model(s) that best suit Auto Service LLC, it is a good idea to first step back, review the different types of deployment models, and get a sense of their respective security postures.

A community cloud serves a group of cloud consumers with shared concerns, such as mission objectives, security, privacy, and compliance policy, rather than serving a single organization as a private cloud does. A community cloud may be managed by the member organizations or by a third party, and may be implemented on customer premises or outsourced to a hosting company (NIST Cloud Computing Reference Architecture, 2011). The attack surface grows with the number of tenants, which is a concern because host-hijacking opportunities increase. Special attention to access control and data management has to be clearly documented and readily available to customers.

The public cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider. Customers have no exclusive access to usage of infrastructure and computational resources.

The hybrid cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

Private cloud infrastructure gives a cloud customer’s organization exclusive access to and usage of infrastructure and computational resources. It can be managed by the cloud customer or a third party, and may be hosted on or off the organization’s premises. Of the four models, a private cloud typically gives the customer the most control over security. We recommend that Auto Service LLC’s business objectives will best be served by private cloud infrastructure.

Conclusion

Auto Service LLC has successfully grown into a medium-sized company and is now in talks to expand into the southern states of the country. Now more than ever, the sustainability and scalability of its enterprise IT systems need close evaluation. This report lays out current trends in cloud computing security and recommends a cloud computing architecture for adoption.

Infrastructure as a Service (IaaS) best addresses Auto Service’s current fleet management business and its desire to cut the costs of physical infrastructure housing servers, storage, and network hardware. IaaS allows Auto Service to pay for only what is needed, providing the flexibility to scale as the company grows and scale back if it downsizes. IaaS also enables Auto Service to focus on what it does best: delivering quality service to customers.

On the other hand, risk considerations have to be weighed when migrating to cloud computing infrastructure. Using IaaS essentially splits security responsibilities between Auto Service LLC and the cloud provider. Concerns particularly arise when we discuss how the provider will maintain the confidentiality, integrity, and availability of data and services. We trust that security policy controls will be put in place to enforce best practices, but as we know, that effort is never guaranteed. Questions that should be asked include, but are not limited to, the following: Does the service provider have facilities in place to ensure continuity of service in the face of environmental threats or equipment failures? Can the cloud service provider demonstrate appropriate security controls applied to their physical infrastructure and facilities? Does the network provide the consumer with logging and notification?


Given all the information presented in this recommendation, Amazon Web Services’ Infrastructure-as-a-Service with a private cloud deployment model will let Auto Service maintain its lean growth strategy without having to worry about how its enterprise IT systems will be secured or how profit margins will be increased.

Citations

Liu, Fang, Tong, Jin, Mao, Jian, Bohn, Robert, Messina, John, Badger, Lee and Leaf, Dawn. (2011). NIST Cloud Computing Reference Architecture: NIST Special Publication 500-292. Gaithersburg, MD: Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology.

Mell, Peter and Grance, Timothy. (2011). The NIST Definition of Cloud Computing: NIST Special Publication 800-145. Gaithersburg, MD: Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology.

Peterson, Jo and Goodenough, Michael. (2015). AWS or Azure? 7 Decision Points. Channel Partners. Retrieved from: http://www.channelpartnersonline.com/blogs/peertopeer/2015/12/aws-or-azure-7-decision-points.aspx

StateTech Staff. (2014). 5 Important Benefits of Infrastructure as a Service: The benefits of the cloud extend far beyond cost savings. StateTech. Retrieved from: http://www.statetechmagazine.com/article/2014/03/5-important-benefits-infrastructure-service

Olavsrud, Thor. (2015). Top Cloud Infrastructure-as-a-Service Vendors. Chief Information Officers Magazine. Retrieved from: http://www.cio.com/article/2947282/cloud-infrastructure/top-cloud-infrastructure-as-a-service-vendors.html#slide1

Workgroup leaders. (2012). Security for Cloud Computing: 10 Steps to Ensure Success. Cloud Standards Customer Council.

Incident Management and Patch Management

08 March 2017

When a secure software product is built by following best practices through the Software Development Life Cycle (SDLC) processes (requirements, design, development, test, and deployment), one might assume the product is robust enough to need minimal attention once in service. That reality does not exist for publicly, actively used software products, or even for privately used ones for that matter. Steps have to be taken early in the SDLC to address worst-case scenarios such as breaches of the product, vulnerabilities found in code, and so on.

Incident management and patch management plans developed early in the SDLC of a software product aim to establish controls ensuring damage to the product is mitigated or the product is restored as quickly as possible. Incident management outlines policy guidelines an incident team must follow, which can include, for example, how long a software product may remain offline (depending on the tolerance level of the organization), how incidents are archived, and how a team of incident responders is identified, trained, and provided the necessary tools. These guidelines are documented early in the SDLC, before the product goes live.

Patch management carefully outlines policy guidelines on how to approach software patches, keeping in mind the time it takes to produce patches, the cost of producing them, and the time and cost of testing. Policies clearly communicate to software engineers which types of patches to prioritize and which tools to use in addressing them.

Incident and patch management are necessary activities ensuring that, once the software product is deployed, controls are in place to identify, protect, detect, respond, and recover the product from a breach or vulnerability.

Sources

National Institute of Standards and Technology. (2014). Framework for Improving Critical Infrastructure Cybersecurity. Retrieved from https://www.nist.gov/sites/default/files/documents/cyberframework/cybersecurity-framework-021214.pdf

Incident Management Response

04 March 2017

Scenario of a breach

One day, large sums of money were withdrawn from several networked ATM systems by the same User ID.

Incident Management

Large sums of money were withdrawn from several networked ATM systems by the same user ID. Given that the core purpose of a bank is to manage and keep customers’ money safe and secure, we have to classify this incident as a major incident (high urgency) because of its impact on customers and on the integrity of the bank’s reputation in the community. There are many unknowns regarding the extent of the breach. We need to know how the attacker managed to defeat our current defenses, whether the attacker has access to the private and confidential information of a significant number of individuals, and how quickly we can restore the ATM system, among other questions.

Once the incident manager has been made aware of the breach, they summon 1st-level support to attempt immediate incident resolution. The objective is to use any means possible to stop the damage and restore the ATM system to its original state. In our case, that means patching the vulnerability that allowed the attacker to exploit the system and cause the breach. Since this breach is of high urgency, the time allocated to resolve it is 1 hour; if that time is exceeded, the incident has to be transferred to a team within 2nd-level support. If my role were hypothetically incident manager, I would immediately bring the issue to 2nd-level support, rather than summoning 1st-level support, to reduce the time it takes to resolve the security breach given its urgency.

Incident Response Processes

Both 1st-level and 2nd-level incident response require response teams to recover and gather as much information about the breach as possible, to be recorded and archived. A collection of incident investigations is archived for future reference to ensure swift action on recurring breaches or service interruptions. It is important to note that when an incident is logged, the duration and the way the incident was resolved are also recorded. Incident logging also helps the organization gain insight into the kinds of breaches and service interruptions it experiences over time (ITIL Incident Management, 2016).

As mentioned before, if I were hypothetically the incident manager I would have immediately summoned the 2nd-level support team. That team would quickly review archived incident reports to determine whether a previous incident or detected vulnerability exploit on the ATM system resembles the current breach. If the detected vulnerability was not previously archived, the networked ATM breach would be a zero-day attack. Zero-day attacks are difficult to trace to their root cause because they are new to the system. Specialist support groups or third-party experts might have to be involved to ensure no further money is withdrawn from the ATM system.

When an accurate analysis of the breach has been gathered, bank users and staff members would be notified of the ongoing or past incident, in an effort to make users vigilant, keep an eye out for suspicious activity, and anticipate any service interruptions. The security incident notification sent to ATM system users will include step-by-step instructions on actions they can take to protect themselves.

Instructions:

  1. Do not use the ATM system for 48 hours, until further notice
  2. Change your ATM PIN
  3. Call customer service about suspicious account transactions

Incident follow-up and additional processes

During the incident management process, detailed data about the breach was thoroughly documented for archiving: how the attacker defeated the ATM system’s defenses, how much damage was caused to the bank and to ATM users, how long it took to restore the ATM system, which response teams addressed the breach, how much money it cost the bank, and what tools were used to stop further withdrawals. Before the case is fully closed, a final review is conducted to ensure the incident is actually resolved and that the supporting detail is of sufficient quality.

Once the case has gone through quality control and been properly archived, ongoing monitoring of the system for prior incidents, including this networked ATM breach, has to be evaluated so that countermeasures continue to address likely weaknesses in the system.

Sources

ITIL Incident Management. (2016). Incident Management. Retrieved from http://wiki.en.it-processmaps.com/index.php/Incident_Management#Incident-Record

National Institute of Standards and Technology. (2014). Framework for Improving Critical Infrastructure Cybersecurity. Retrieved from https://www.nist.gov/sites/default/files/documents/cyberframework/cybersecurity-framework-021214.pdf

Sensitive Data Exposure: Black Box and White Box Test Cases

26 February 2017

Note: The ATM use case diagram can be viewed at the following link: ATM Example Use Case

The ATM is designed so that once a user has successfully inserted a valid bank card and entered a valid four-digit PIN, they are presented with four transaction options: withdrawal, deposit, transfer, and account inquiry. While the user navigates the ATM system, one threat that could endanger personally identifiable information (PII) is sensitive data exposure. This article focuses on the transaction feature of the ATM design, because this is where the vulnerability is most likely to occur.

Is the threat mitigated by the current design?

It is difficult to determine whether the current design of the ATM system mitigates sensitive data exposure, because it does not explicitly outline how it addresses the threat, or whether it even recognizes it. Our assumption will be that the current design does not recognize sensitive data exposure as a threat. Before we come to a conclusion, we will conduct a black box test to understand how the design reacts to various tests. A high-level design of the ATM system will be used as the basis for the tests. The high-level design recognized the threat sensitive data exposure poses to PII and took steps to mitigate the vulnerability, including limiting the amount of information displayed on the screen and using generic messages when a transaction is not successful.

User withdrawals

Each time a user wants to withdraw amount N, the bank system internally determines whether they can withdraw N by checking the user’s bank account records. The message output depends on whether the user can withdraw. If the user can withdraw N, they are prompted to confirm that N is to be withdrawn. If, on the other hand, the user has insufficient funds, they are notified that no withdrawal can be made at that time.
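As a reference point for the tests that follow, below is a minimal, hypothetical Java sketch of the withdrawal behavior just described; all names are invented for illustration. The property under test is that a failed request returns one generic message, never the balance or the reason for failure.

public class WithdrawalHandler {
  interface Account { double availableBalance(); }

  static final String GENERIC_DECLINE =
      "No withdrawal can be made at this time.";

  public String requestWithdrawal(Account account, double amount) {
    if (amount <= 0 || amount > account.availableBalance()) {
      // Deliberately generic: revealing the balance or the exact reason
      // for the failure would expose sensitive data.
      return GENERIC_DECLINE;
    }
    return String.format("Confirm withdrawal of $%.2f", amount);
  }
}

A black box test would drive requestWithdrawal with amounts above and below the known balance and assert that every failing case produces exactly the generic message.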

User inquiring

When a user requests an inquiry on their account, the bank internally checks the user’s account summary for the checking and savings accounts. The user is then given the option of printing the summary on a receipt.

User transferring

Each time a user wants to transfer amount N, the bank system internally determines whether they can transfer N by checking the user’s bank account records. The message output depends on whether the user can transfer. If the user can transfer N, they are prompted to confirm that N is to be transferred. If, on the other hand, the user has insufficient funds in the given account, they are notified that the transfer cannot be made at that time.

User depositing

Once the user indicates that they want to deposit amount N into an account, the ATM system prompts the user to specify which account to deposit into. The user cannot deposit more than $25,000 into either account. The bank internally updates its records once it has determined that the deposited amount is correct.

Test steps

  1. Total amount in checking account.
  2. Total amount in savings account.
  3. Transfer N amount from checking account to savings account.
  4. Transfer N amount from savings account to checking account.
  5. Deposit N amount in checking account.
  6. Deposit N amount in savings account.
  7. Withdraw N > $1,000 from checking account.
  8. Withdraw N < $10 from savings account.

Test tools used to perform testing

The tools I would employ to test for the sensitive data exposure vulnerability on the ATM system are Katalon Studio and HP Fortify on Demand. Katalon Studio can conduct automated tests as well as manual tests against the UI layer of an application.

HP Fortify on Demand provides security as a service and manages security risk. Fortify on Demand can also conduct automated tests with a full audit of results. Fortify differs from Katalon Studio in that it focuses more on security, as opposed to the overall functionality of an application (Fortify on Demand, 2017).

Tools for Enforcing Secure Code

19 February 2017

A simple Google search for open source static code analysis tools displays a variety of tools for secure code analysis. Analyzing the various tools in the results, you quickly realize two things: one, there are many open source code analysis tools, and two, they differ widely from one another. Some of the tools encountered during the survey were multi-language (e.g., VisualCodeGrepper, Zed Attack Proxy, PMD, and YASCA), while others were language-specific, focusing only on Java (e.g., OWASP LAPSE+, FindBugs), PHP (e.g., RIPS and DevBug), or C++ (e.g., Flawfinder and CPPCheck). Another quick distinction is their analysis capabilities: some tools perform both static and dynamic code analysis, while others perform only one or the other. Furthermore, language-specific tools differ further in the types of vulnerabilities they detect.

Recognizing the distinctions between open source secure code analysis tools is important and should not be overlooked. A series of discussions between programmers and the security team needs to take place when deciding on a particular tool or tools. Questions to ask include: Does the tool support the language(s) of the project? What types of vulnerabilities and code issues do we need to look for? What rate of false positives is associated with the tool? And other targeted questions.

Selected open source tool

The FindBugs code analysis tool will be used to demonstrate my understanding of the purpose of open source code analysis tools. FindBugs is an open source code analyzer that detects possible bugs in Java programs. Since the tool is written in Java, it runs as a stand-alone GUI application on many platforms. Finally, FindBugs sorts potential errors into four ranks: scariest, scary, troubling, and of concern (Sprunck, 2012).

Summary of tool strengths and weaknesses

Strengths

Personally, I prefer code analysis tools that focus on a single language, because this allows the tool designers to go beyond detecting low-hanging fruit and dig into higher-level coding vulnerabilities. FindBugs has a very low number of false positives, so the tool is generally reliable and finds valid bugs. The option to turn off scanning for certain types of bugs is a useful feature that increases the convenience of using FindBugs in different situations.

Weaknesses

FindBugs is not very customizable. For instance, FindBugs has its own defined coding style (naming conventions for methods, etc.). There should be a way to customize the coding style so the tool can analyze code against a custom style specified by the user; it will not always be the case that the user codes in the style FindBugs defines (Sandcastle, 2009).

Source code for analysis

// Excerpt for analysis: conn, stmt, dbURL, and tableName are assumed to be
// class fields defined elsewhere in the program.
private static void createConnection()
{
    try
    {
        Class.forName("org.apache.derby.jdbc.ClientDriver").newInstance();
        // Get a connection
        conn = DriverManager.getConnection(dbURL);
    }
    catch (ClassNotFoundException | InstantiationException | IllegalAccessException | SQLException except)
    {
        except.printStackTrace();
    }
}

private static void insertUserInfor(String userName, String userPass)
{
    try
    {
        createConnection();
        stmt = conn.createStatement();
        // User input is concatenated directly into the SQL string.
        stmt.execute("insert into " + tableName + " values ('" +
                userName + "', '" + userPass + "')");

        stmt.close();
    }
    catch (SQLException sqlExcept)
    {
        sqlExcept.printStackTrace();
    }
}

Results after applying FindBugs to the code

NetBeans makes it easy to apply FindBugs to Java code: navigate to the menu bar, click on Source, then on Inspect, and select FindBugs. On the Java code that was inspected, FindBugs quickly detected troubling bugs and displayed them on the results console. The tool grouped the bugs into two categories: security and bad practice. On the left-hand side of the results console, NetBeans has a feature that lets the user rank bugs; in my case, after clicking on it, the security category is placed at the top and the bad practice category below.

Under the security category, FindBugs detected that the Java code is vulnerable to SQL injection. The tool traced through the folder tree, locating the precise method where the vulnerability occurs in the code. FindBugs gives a recommendation to mitigate the bug: “Consider using a prepared statement instead. It is more efficient and less vulnerable to SQL injection attacks”.

Under the bad practice category, FindBugs detected one vulnerability at two locations in the code: improper closing of a database resource. FindBugs’ recommendation to mitigate the bug: “Failure to close database resources on all paths out of a method may result in poor performance, and could cause the application to have problems communicating with the database”.
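Below is a minimal sketch of how both findings could be addressed, reusing the conn and tableName fields assumed by the excerpt above (a table name cannot be bound as a parameter, so tableName must come from trusted code, never from user input). The user-supplied values are bound through a java.sql.PreparedStatement, as FindBugs recommends, and the try-with-resources block closes the statement on every path out of the method, which also resolves the bad practice finding.

private static void insertUserInfor(String userName, String userPass)
{
    String sql = "insert into " + tableName + " values (?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql))
    {
        ps.setString(1, userName); // bound as data, not executable SQL
        ps.setString(2, userPass);
        ps.executeUpdate();
    }
    catch (SQLException sqlExcept)
    {
        sqlExcept.printStackTrace();
    }
}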

Sources

CyberSecology. “The OWASP Zed Attack Proxy (ZAP) Scanner”. CyberSecology – Web Scanner Reviews. Retrieved from http://cybersecology.com/the-owasp-zed-attack-proxy-zap-scanner/ (On 17 February 2017).

InfoSec Institute. (2013). “Which Weapon Should I Choose for Web Penetration Testing? 3.0”. Retrieved from http://resources.infosecinstitute.com/which-weapon-should-i-choose-for-web-penetration-testing-3-0/#gref (On 17 February 2017).

Sprunck, Markus. (2012). “Findbugs – Static Code Analysis of Java”. Methods and Tools. Retrieved from http://www.methodsandtools.com/tools/findbugs.php (On 18 February 2017).

Sandcastle. (2009). “An Evaluation of FindBugs”. Analysis of Software Artifacts. Retrieved from http://www.cs.cmu.edu/~aldrich/courses/654/tools/Sandcastle-FindBugs-2009.pdf (On 18 February 2017).

Threat Modeling

12 February 2017

Building on the previous article, where we discussed misuse/abuse cases using an ATM machine as an example, this week we discuss the threat modeling process. Threat modeling ranks threats during software design, identifying which assets or components are most critical to the business and ranking them according to the damage a threat would cause. This ranking helps teams prioritize energy and resources on high-ranking assets during a breach in an effort to mitigate damage. As a result, security code review of components that threat modeling has ranked as high risk is prioritized.

The threat modeling process encompasses three high-level steps: decompose the application, determine and rank threats, and determine countermeasures and mitigation. We will delve into each step to better understand how threat modeling is implemented.

Decompose the Application

As the first step in the threat modeling process, attention is focused on gaining an understanding of the application and how it interacts with external entities. What does this mean? It means identifying the entry points an attacker could use to exploit the application, and identifying trust levels as they pertain to access levels, among other things.

Determine and rank threats

It is not hard to guess what this step is about. Threats are determined and ranked using a threat categorization methodology. OWASP outlines a threat categorization known as STRIDE, which we will use to rank threats to the ATM machine. STRIDE stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.

Determine countermeasures and mitigation

Once a risk ranking is assigned to the threats, it is possible to sort them from highest to lowest risk and prioritize the mitigation effort, responding to each threat by applying the identified countermeasures.

External Dependencies

We also outline external dependencies: items external to the application's code that may pose a threat to the application (OWASP, 2015). For our ATM machine, an external dependency would be the machine’s connection to the bank, both over the network and physically.

Entry Points

Entry points are essentially an application’s attack surface: the interfaces through which potential attackers can interact with the application or supply it with data. For our ATM machine, the entry points are the PIN pad, the card reader, the bank connection, and the operator.

Assets

Threat models are created during the design of an application to outline threat targets: the things attackers are interested in, the items or areas of interest. Assets are the reason threats exist, and they can be both physical and abstract. For our ATM machine, an abstract asset might be people’s trust in using ATM machines, which an attacker could degrade. Physical assets include account records and personal information, as well as the actual money that could be stolen.

Ranking of Threats

Threats are ranked according to potential risk factors, which are determined by the impact they pose to the business and its components, and sorted into high, medium, and low risk. DREAD is a Microsoft threat-risk ranking model that we will use to rank threat factors. The model's risk factorization assigns values to the factors that influence a threat. The questions the model aims to answer are listed below; a worked scoring example follows the list:
- For Damage: How big would the damage be if the attack succeeded?
- For Reproducibility: How easy is it to reproduce the attack?
- For Exploitability: How much time, effort, and expertise is needed to exploit the threat?
- For Affected Users: If a threat were exploited, what percentage of users would be affected?
- For Discoverability: How easy is it for an attacker to discover this threat?
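As a hypothetical illustration, suppose each DREAD category is scored from 1 (low) to 3 (high), one common convention, for the threat of brute-force PIN guessing at the ATM: Damage = 3, Reproducibility = 3, Exploitability = 2, Affected Users = 1, Discoverability = 3. The overall risk is the average of the five scores, (3 + 3 + 2 + 1 + 3) / 5 = 2.4, which falls in the high band of a 1 to 3 scale, so this threat would be prioritized for mitigation.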

Mitigation for threats

Once the ATM has verified that the inserted bank card is valid, the customer is prompted to enter a four-digit PIN. This user authentication is threatened by brute-force and dictionary attacks. To mitigate these threats, the ATM should lock the account after N failed login attempts and validate the minimum length and complexity of the PIN. It is also important that when a user enters a wrong PIN, a generic error message is shown, so an attacker is given no clues about which user accounts are stored at the bank. Collectively, these measures mitigate broken authentication and session management.
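Below is a minimal, hypothetical Java sketch of the lockout rule just described; the names and the value of N are illustrative, every failure returns the same generic message, and a real system would verify the PIN against a secure backend rather than receive it as a parameter.

import java.util.HashMap;
import java.util.Map;

public class PinVerifier {
  private static final int MAX_ATTEMPTS = 3; // "N" failed attempts
  private final Map<String, Integer> failedAttempts = new HashMap<>();

  public String verify(String cardId, String enteredPin, String correctPin) {
    int failures = failedAttempts.getOrDefault(cardId, 0);
    if (failures >= MAX_ATTEMPTS) {
      return "This card is locked. Please contact your bank.";
    }
    if (enteredPin.equals(correctPin)) {
      failedAttempts.remove(cardId); // reset the counter on success
      return "PIN accepted.";
    }
    failedAttempts.put(cardId, failures + 1);
    return "Unable to process your request."; // generic: no hint about why
  }
}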

To mitigate the threat of sensitive data exposure, it is critical that the ATM display the minimum amount of information for each transaction. For example, say a customer wants to withdraw amount N from a checking account: if N is more than the amount in the account, the ATM system should display a generic message, ensuring PII is protected. Further, for each transaction, whether withdrawal, deposit, transfer, or inquiry, the customer should be prompted with a message asking for confirmation before the action is carried out.

Sources

Bjork, Russell C. ATM Simulation. Retrieved from http://www.math-cs.gordon.edu/courses/cs211/ATMExample/UseCases.html

OWASP. (2015). Application Threat Modeling. Creative Commons 3.0 License. Retrieved from https://www.owasp.org/index.php/Application_Threat_Modeling#STRIDE

Secure Software Design

05 February 2017

Least privilege

Architecture and Design Considerations for Secure Software, from the Software Assurance Pocket Guide series, describes least privilege as the principle that each component, including components from the outside world and components embedded in the program, and every user of the system, operates with the smallest set of privileges necessary to accomplish the desired tasks and objectives. The objective of the least privilege principle is to reduce the number of actors (components and users) granted high levels of privilege, and the duration of time each actor holds those privileges.

Benefits

One benefit of employing the least privilege principle manifests in the way project stakeholders (programmers, system administrators, etc.) look at security measures. The military has a saying: “need-to-know basis”. Those few words encapsulate the framework through which stakeholders view the project and sit at the core of how they handle decisions. The unintentional, unwanted, or improper use of privileges is less likely to take place when the minimum amount of privilege is granted to users and components (Langford, 2003).

Another benefit, which comes as little surprise, is that it is fundamentally and practically harder to reduce actors’ access privileges when each actor starts with the maximum amount of privilege than when actors start with the least.

Example

We previously discussed requirements misuse/abuse cases using an ATM machine as an example. This week I will use the ATM machine again to explain how the least privilege principle is employed and why. An ATM is made to be simple: customers are allowed to deposit, withdraw, transfer, and inquire. Customers cannot communicate with their respective bank to edit their profiles, change their PIN, change their username, or edit account and routing numbers. Employing this principle limits the damage that can result from an accident or error at the ATM machine.

Don’t expose vulnerable or high-consequence components

When you host a large gathering at your home, chances are you will not leave a large wad of cash lying in the sitting room, or your passport and visa along with your social security number on the kitchen table. These are high-consequence items exposed to a large group of people. In software development, a program usually holds sensitive business data, personally identifiable information (PII), and sensitive program executables and functions. These high-consequence components should not be exposed to low-consequence components; they should be stored separately. The passport, visa, social security number, and wad of cash should be placed in a safe box stored in a low-traffic area, such as a locked bedroom.

Benefits and Examples

Not exposing vulnerable or high-consequence components means that you are actively and deliberately isolating trusted entities from untrusted entities. For example, program executables and configuration data would be stored in a separate location, under the assumption that the surrounding environment is not trustworthy. This deliberate isolation naturally reduces the attack surface, and here is why. Going back to our house analogy, the act of moving the passport and social security number, along with the large wad of cash, from the kitchen to a lockbox in a locked bedroom drastically reduces the attack surface to one point of entry. It is easier to keep a recorded log of activity in a bedroom than in an open-concept kitchen. As a result, the attacker has a harder time locating and gaining access to vulnerable or high-consequence components, because the attacker is left guessing where those components could be held, given his limited resources to operate.

Complete mediation

The complete mediation design principle is a mechanism by which a software system systematically checks the access an object is required to have each time that object attempts to access any type of resource. A reference monitor is a piece of software that checks every reference made by subjects to objects (Implementing Complete Mediation, by Schneider). To see why the reference monitor is critical for complete mediation, consider a simple example: if the access control rights of an object are decreased after the first time rights are granted, and the system does not check the next access by that object, security vulnerabilities can occur throughout the system.

Note: When we say object, it can mean any of the various components that make up the whole system (including sub-components), outside plugins or software, users, etc.
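
To illustrate, here is a minimal reference monitor sketch in Java (class and method names are my own, and it uses the more common subject/resource terminology). The point is that checkAccess() is consulted on every single access and never caches a past decision, so a revoked right takes effect on the very next request.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ReferenceMonitor {
    // Current rights: subject -> set of resources the subject may access.
    private final Map<String, Set<String>> rights = new HashMap<>();

    public synchronized void grant(String subject, String resource) {
        rights.computeIfAbsent(subject, s -> new HashSet<>()).add(resource);
    }

    public synchronized void revoke(String subject, String resource) {
        Set<String> r = rights.get(subject);
        if (r != null) {
            r.remove(resource);
        }
    }

    // Called on EVERY access attempt; no decision is remembered between calls,
    // so a revocation is honored on the next access.
    public synchronized boolean checkAccess(String subject, String resource) {
        Set<String> r = rights.get(subject);
        return r != null && r.contains(resource);
    }
}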

Benefits

Software Assurance (SwA) Pocket Guide Resources, in an article titled Architecture and Design Considerations for Secure Software, put it accurately, writing that complete mediation is the "primary underpinning of the protection system" (Software Assurance, 2008). The consistent evaluation of a subject's access to objects not only ensures that no permission violations occur, but also makes it easier to log successful and unsuccessful authentication attempts on resources. A benefit, then, is that during a security breach swift action can be taken to mitigate damage, since access to the various components is accurately logged.

Example

In the fall of 2016 I was a software developer intern for a startup creating batteries for electric vehicles. My role as a back-end developer exposed me to the Amazon Web Services (AWS) cloud ecosystem, and AWS employs complete mediation. In the AWS ecosystem, each user, AWS tool, and outside plugin is given a specific level of access, and the AWS administrator controls how much access each entity possesses. To give an example, for our Lambda function to filter data coming from our API Gateway and then store the appropriate data in DynamoDB, the Lambda function had to be granted the appropriate permissions on DynamoDB when the two were wired together. Likewise, for the API Gateway to send raw data to the Lambda function, the two had to be connected with the appropriate access. Lastly, the API Gateway had to establish the appropriate access level with the outside entity generating the data, which in our case was customer purchase orders coming from a third-party platform. Any change in access level along the chain causes the entire system to break.

Psychological acceptability

The psychological acceptability design principle is another way of saying that software system design should be intuitive to any user, particularly on the client side. In my view, to achieve an easy-to-understand, intuitive user interface (UI), the design should also follow established guidelines for supporting accessibility for people with disabilities. There are more than one billion people with various disabilities around the world, of whom approximately 285 million are visually impaired and 360 million hearing impaired (Rezaei A. Yashar, Heisenberg Gernot, and Heiden Wolfgang, 2014). With this approach, not only are impaired users able to apply protection mechanisms correctly, regular users are able to do so as well, increasing the security of the system.

Benefit

The overarching benefit of employing the psychological acceptability design principle is a drastic reduction of mistakes on the part of the user. The reduction of mistakes in turn translates to a more robust, secure system.

Examples

The user must become involved in security decisions at some point, either when the system is loading or when the system attempts a privileged operation. To use an everyday example, today it seems every company is using an interactive map for something. In order for an application to retrieve a user's geolocation, by law the application has to prompt the user for access. Another example: whenever a user installs an application from an outside source, they should be prompted to approve the resources the application will use and how it will use them.
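
As a simple illustration, below is a minimal Swing sketch (the dialog text and class name are illustrative) of involving the user in a security decision before a privileged operation such as reading location data.

import javax.swing.JOptionPane;

public class PermissionPrompt {

    // Ask for explicit consent in plain language before the privileged operation.
    public static boolean confirmGeolocationAccess() {
        int choice = JOptionPane.showConfirmDialog(
                null,
                "This application would like to use your location.\nAllow access?",
                "Permission Request",
                JOptionPane.YES_NO_OPTION);
        return choice == JOptionPane.YES_OPTION;
    }

    public static void main(String[] args) {
        if (confirmGeolocationAccess()) {
            System.out.println("Location access granted; enabling the map feature.");
        } else {
            System.out.println("Location access denied; the feature stays disabled.");
        }
    }
}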

Minimize the number of high-consequence components

The design principle of minimizing the number of high-consequence components exposed to security risk employs the least privilege principle, separation of privileges, duties, and roles, and separation of domains. The key concept of the design principle is to reduce the number of high-consequence components exposed to risk, for example by separating untrusted environments from trusted environments and reducing the number of entities interacting with the high-consequence components.

Benefits and Example

Implementing least privilege to minimize the number of high-consequence components ensures that the number of actors in the system granted high levels of privilege is reduced, strengthening overall security. Another benefit of minimizing the number of high-consequence components is that the separation of domains makes it easier to implement separation of privileges, duties, and roles (Software Assurance Pocket Guide Series).

Sources:

Gegick Michael and Barnum Sean. (2005-2007). Least Privilege. Cigital [PDF file]. Retrieved from https://www.us-cert.gov/bsi/articles/knowledge/principles/least-privilege

Langford Jeff. (2003). Implementing Least Privilege at Your Enterprise. SANS Institute InfoSec Reading Room [PDF file]. Retrieved from https://www.sans.org/reading-room/whitepapers/bestprac/implementing-privilege-enterprise-1188

Mahizharuvi and Alagarsamy. A Security Approach in System Development Life Cycle. Dept of MCA, Computer Center (Vol. 2) [PDF file]. Retrieved from http://www.ijcta.com/documents/volumes/vol2issue2/ijcta2011020204.pdf

Michael C.C., Gegick Michael, and Barnum Sean. (2005-2007). Complete Mediation. Cigital. Retrieved from https://www.us-cert.gov/bsi/articles/knowledge/principles/complete-mediation

Rezaei A. Yashar, Heisenberg Gernot, and Heiden Wolfgang. (2014). User Interface Design for Disabled People Under the Influence of Time, Efficiency and Costs. Institute of Visual Computing Bonn-Rhein-Sieg [PDF file]. Retrieved from https://www.researchgate.net/profile/Gernot_Heisenberg/publication/262601817_User_Interface_Design_for_Disabled_People_Under_the_Influence_of_Time_Efficiency_and_Costs/links/02e7e5384c1486f3ee000000.pdf

Software Assurance Pocket Guide Series. 2012. Architecture and Design Considerations for Secure Software. Building Security in Software Assurance (Vol. V) [PDF file].

Software Assurance. (2008). Enhancing the Development Life Cycle to Produce Secure Software. A Reference Guidebook on Software Assurance (Version 2.0) [PDF file]. Retrieved from http://www.cis.upenn.edu/~lee/10cis541/papers/DACS-358844.pdf

Schneider B. Fred. Implementing Complete Mediation. Retrieved from http://www.cs.cornell.edu/courses/cs513/2004SP/NL05.html

Personally Identifiable Information (PII)

04 February 2017

Credit Card Entry Form

Personally identifiable information, or PII, is practically any piece of information about someone maintained by an organization that can be used to distinguish or trace an individual's identity. This can include, but is not limited to, name, social security number, and date of birth, as well as any other information that is linked or linkable to an individual.

The PII examples listed below by no means exhaust the possibilities of what could be considered PII; they are only meant to provide a framework for what PII can encompass. Examples include financial transactions, medical history, criminal history, employment history, an individual's name, social security number, passport number, driver's license number, credit card number, and vehicle registration, among others.

Figure 1

News Update form

In Figure 1, the PII being gathered by the form is title, first name, middle name, last name, address, email, and phone number. The purpose of the form is to let people subscribe to a bi-weekly e-newsletter, yet it (optionally) asks for a mailing address, which is not necessary for an e-newsletter. If I worked for the vendor and was tasked with improving security, I would first assess the impact level of storing and maintaining address information. I would conclude that the impact level is moderate, since the expected loss of confidentiality, integrity, or availability could have a serious adverse effect on individuals. As a result, I would err on the side of caution and remove address from the form.

Figure 2

Win $10,000,000

In Figure 2, the PII being gathered by the form is name, address, and date of birth, collected in order to register to win $10,000,000. Asking for the address is a slight overreach, and here is why: secure software development best practices call for collecting only the minimum information required to accomplish a particular task, nothing more and nothing less. If I worked for the vendor and was tasked with improving security, one way I would mitigate this issue is by removing address from the form and instead soliciting address information, if it is needed, only after a participant has qualified to win the prize.

Sources:

Bjork C. Russell. ATM Simulation. Retrieved from http://www.math-cs.gordon.edu/courses/cs211/ATMExample/Interactions.html#Startup

Erika McCallister, Tim Grance and Karen Scarfone (April 2010). Guide to Protecting the Confidentiality of Personally Identifiable Information (PII). Recommendations of the National Institute of Standards and Technology, NIST Special Publication 800-122.

“Requirements Analysis for Secure Software”. Software Assurance Pocket Guide Series, Development, Volume IV Version21, May 18, 2012.

Requirements Misuse and Abuse Cases

29 January 2017

We attempt to explain briefly what misuse/abuse cases are and why applying the concept during the requirements stage of software development results in a more robust, secure product. Programmers generally create use case diagrams to demonstrate the functions, flows, and actions that the end user and the application will perform; this ensures the program functions as it should and meets all the desired requirements. Misuse or abuse cases are similar to use cases, except that with misuse/abuse cases the developer has to step into the shoes of an attacker. The programmer walks through the functions, flows, and actions of the program through the lens of an attacker: how might the application, or a user of the application, behave in a way that is not intended? Misuse cases provide opportunities to investigate and validate security requirements.

ATM abuse/misuse cases

ATM System

Note: The ATM System use case diagram is displayed above. Use case diagrams can also be viewed at the following links:

ATM Example Use Case

ATM Operator Startup Use Case

Start-up Use Case

Before bank customers can begin using the ATM, an ATM system operator first has to start up the system following this sequence:

1.  Switch on the operator panel
2.  Get the initial amount of cash in the ATM
3.  Set the initial cash in the cash dispenser
4.  Open the connection to the bank

Start-up Misuse/Abuse Case

The first step in the sequence, switching on the operator panel, does not appear to have contingencies in place that authenticate the operator. The likely result is broken authentication and session management, allowing a malicious user to compromise the entire system and placing customers' personally identifiable information (PII) in danger. To mitigate this threat, the operator panel has to prompt the user to enter a username and password for authentication. To mitigate brute-force authentication, valid-account guessing, and dictionary attacks, a generic error message should be shown when authentication fails, and the system should lock after N failed login attempts.
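
A minimal Java sketch of these two mitigations might look like the following (the class name and credential check are hypothetical; a real implementation would compare against a salted hash in a credential store). Note the single generic failure message and the lockout counter.

import java.util.HashMap;
import java.util.Map;

public class OperatorLogin {
    private static final int MAX_ATTEMPTS = 3;
    private final Map<String, Integer> failedAttempts = new HashMap<>();

    public String authenticate(String username, String password) {
        int attempts = failedAttempts.getOrDefault(username, 0);
        if (attempts >= MAX_ATTEMPTS) {
            return "Account locked. Contact the administrator.";
        }
        if (credentialsValid(username, password)) {
            failedAttempts.remove(username);
            return "Login successful.";
        }
        failedAttempts.put(username, attempts + 1);
        // Generic message: reveals nothing about whether the account exists.
        return "Invalid username or password.";
    }

    // Placeholder: assume this is wired to a real credential store in practice.
    private boolean credentialsValid(String username, String password) {
        return false;
    }
}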

Lastly, as things stand, once the operator sets the initial cash in the cash dispenser, a connection to the bank is established. A vulnerability that can occur here is missing function level access control: a request is not verified, enabling an attacker to forge requests in order to access functionality without proper authorization. To mitigate this vulnerability, the operator should be required to enter a PIN before a connection is established with the bank.

Session Use Case

After the start-up sequence succeeds, customers are free to begin sessions. A session should follow this sequence:

1.  A valid bank card is inserted into the ATM
2.  A session is created and the card is read. If the card is valid, the user is asked to enter a PIN; if not, the session is terminated.
3.  The user enters a valid PIN
4.  If the PIN is valid, the user can perform the following transactions:
        a. Withdrawal
        b. Deposit
        c. Transfer
        d. Inquiry
5.  The user cancels the session (this can be done at any stage of the session)
6.  The card is ejected

Session Misuse/Abuse Case

Once the ATM has detected that the inserted bank card is valid, the customer is prompted to enter a four-digit PIN. This user authentication is threatened by brute-force and dictionary attacks. To mitigate these threats, the ATM should lock the account after N failed login attempts and validate the minimum length and complexity of the PIN. It is important that a generic error message is shown when the user enters a wrong PIN, ensuring that an attacker is not given clues about which user accounts exist at the bank. Collectively, these measures mitigate broken authentication and session management.
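
A PIN format check along those lines might look like the following minimal sketch (the weak-PIN blacklist is illustrative, not exhaustive).

public class PinValidator {

    // Accept exactly four digits, rejecting a few trivially guessable PINs.
    public static boolean isAcceptable(String pin) {
        if (pin == null || !pin.matches("\\d{4}")) {
            return false; // wrong length or non-digit characters
        }
        switch (pin) {
            case "0000":
            case "1111":
            case "1234":
                return false; // commonly chosen, easily guessed
            default:
                return true;
        }
    }
}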

Transaction Use Case

After a successful PIN validation, the customer is presented with four transaction options: withdrawal, deposit, transfer, and inquiry.

Transaction Misuse/Abuse Case

In order to mitigate the threat of sensitive data exposure, it is critical that the ATM display the minimum amount of information for each transaction. For example, say a customer wants to withdraw an amount N from a checking account. If N is greater than the checking account balance, the ATM system should display a generic message, ensuring PII is protected. To take it further, for each transaction, whether withdrawal, deposit, transfer, or inquiry, the customer should be prompted to confirm the action. This ensures that missing function level access control is mitigated to some extent.

Sources:

OWASP Top Ten Project. Top 10 2013-Top 10. 2002-2013 OWASP Foundation. Retrieved from https://www.owasp.org/index.php/Top_10_2013-Top_10

Software Development Life Cycle (SDLC) Training

21 January 2017

This article demonstrates the importance of security training in preparation for the software development life cycle. I will discuss why security training matters in preparation for secure software development, providing five reasons for its importance prior to executing the SDL process.

Meet current and evolving business and compliance needs

Organizations face new threats each year that aim to cause damage, compromise critical assets, or degrade public trust through data dumps, among other harms. The most effective way to keep pace with evolving threats is to prioritize security training in preparation for secure software development. Security training ensures that as business requirements change, employees are in a position to conceptualize and employ secure software development practices and techniques, assuring that critical assets are adequately protected. Hence, through security training, a business-requirements approach to developing software is taken as opposed to an adversarial approach.

Keep pace with emerging security technologies

Above we mentioned that each year brings an evolution of new threats; this is largely due to the emergence of new technologies, such as Web 2.0 and Internet applications. For this reason, existing skill sets often fall behind, weakening an organization's ability to keep pace with security threats. Annual security training ensures that employees are trained on the security technologies that keep a product from shipping unexpectedly unprotected. Statistics from the United States Computer Emergency Readiness Team (US-CERT) show a rapid progression in total catalogued software vulnerabilities, hovering at about 7,000 to 8,000 per year from 2006 through 2008, up from about 1,100 in 2000.

Deepen stakeholder security specialized knowledge

Not only does annual security training ensure that employees keep pace with the emergence of new threats and security technologies, the education also facilitates a deeper understanding of software security, allowing for further mitigation of security vulnerabilities. A trained employee is able to react appropriately when situations arise that fall outside the training. Furthermore, employees who are excited by security can serve as mentors for others, helping with secure code development or the use of a specific code review tool.

Organizational emergence of security first culture

When an organization elects to provide annual security training to its employees, over time a security-first culture naturally emerges, permeating from executive directors all the way down to customer service representatives. Annual security training tailored to specific positions in the company facilitates this security-first culture. Security training does not begin and end with software development; it encompasses the whole organization.

Future return from security trained workforce

A security-trained workforce ensures that software vulnerabilities are mitigated over time. It is well documented that the cost savings from finding and fixing vulnerabilities very early in the development cycle are significant; we cannot overlook this fact. We should note, however, that the number of vulnerabilities does not correlate directly with material loss, but with the emergence of new technologies, the best way to ensure a reduction in vulnerabilities is a security-trained workforce.

Software Development Life Cycle (SDLC) Process Comparison

14 January 2017

This article demonstrates my current understanding of the various software development life cycles. I will select two traditional SDLC models, compare and contrast them, focusing on how they differ and how they are similar, and define one advantage and one disadvantage of each SDLC process.

The Software Development Life Cycle (SDLC) is a well-defined, structured sequence of stages in software engineering used to develop an intended software product. The SDLC provides a series of steps to be followed to design and develop a software product efficiently.

These steps include:

  1. Communication

    This is the step in which a user (customer) approaches a software developer or software company requesting a desired software product. The user describes the desired product in a document explaining how he or she wants it to function.

  2. Requirement Gathering

    From now onwards, the software developer or software company takes over. A brainstorming process takes place: each stakeholder of the product is interviewed to learn as much as possible about the requirements of the product, questionnaires are handed out, and so on. The objective in this phase is to gather as much information as possible.

  3. Feasibility Study

    After gathering the required information from end users, the software developer(s) or software company analyzes whether software can be built that addresses all the requirements. And if such software can be built: are algorithms available? Is the technology feasible? How many developers are needed? And so on.

  4. System Analysis

    At this stage, developers decide how they are going to execute the project. They come to understand the product's limitations and the system's potential problems, identifying and addressing the impact of the project on the organization and the people using the software.

  5. Software Design

    This step takes the gathered requirements and the system analysis and designs the software product.

  6. Coding

    The implementation of the software design starts, in the form of writing program code in a suitable programming language.

  7. Testing

    Software testing is done by the developers while coding, and thorough testing is conducted by testing experts at various levels, such as module testing, program testing, product testing, in-house testing, and testing the product at the user's end.

  8. Integration

    Software may need to be integrated with libraries, databases, and other programs. This stage of the SDLC involves integrating the software with outside entities.

Waterfall Model

The waterfall model is a sequential model. Software development is divided into separate phases, and each phase depends on the success of the previous phase. The output of one phase is the input of the next, so it is mandatory for each phase to be completed before moving on to the next. The model assumes that everything produced in the previous step is free of issues, because the model does not allow going back to undo or redo actions. In short, the development of one phase starts only when the previous phase is complete.

Iterative Model

The iterative model allows software developers to receive quick feedback at every stage of the development process. To give a simple illustration of how the model works: say we are developing a small feature of our product. Once we finish the feature we immediately test it to analyze how it functions; then, using the results gathered from the test, we go back to the design of the feature and make improvements. From there we continue coding and perhaps expanding the feature. The process repeats until a large-scale software product is produced. As you can see, this model leads the software development process in iterations.

Similarities between waterfall model and iterative model:

In all fairness, there are few similarities between the waterfall model and the iterative model. One similarity that comes to mind is the ease with which both models can be implemented. The waterfall model is systematic by nature, requiring that the output of one phase be the input of the next, with each phase completed before moving on; the iterative model is cyclical: design, code, test, and verify.

Differences between waterfall model and iterative model:

One of the most prominent differences between the waterfall and iterative models is the framework each imposes on the software development process. As outlined above, the waterfall model declares that each step must be completed before moving on to the next. Going further, it also assumes that everything produced in the previous step is free of issues, and so it does not allow you to undo or redo actions. This runs against the core structure of the iterative model, which is designed to provide constant feedback through continual testing, evaluating, designing, and verifying, repeating the whole process.

One advantage for waterfall:

The waterfall model is simple and easy to understand and use because the stages are clearly defined and completed one at a time. As a result, it is easy to predict the size, cost, and timeline of the project, as well as how the finished product will turn out at the end of development. This model is great for small projects.

One advantage for Iterative:

One key advantage of the iterative model is that it allows products to be built and improved step by step. As a result, defects can be detected in early stages thanks to quick, reliable feedback. This model is well suited to technology start-ups.

One disadvantage for waterfall model:

As we briefly touched on earlier in the post, the waterfall model assumes that everything produced in the previous step is free of issues because the model does not allow going back to undo or redo actions. The development of one phase starts only when the previous phase is complete, which is a major disadvantage: without adequate testing and feedback from end users, the likelihood of undetected defects in the software is high, and it becomes very costly and difficult to go back and make changes in large programs.

One disadvantage for iterative model:

The iterative model allows for quick feedback, which means projects are ever changing. For instance, the end of the project may not be known until the project is fully complete. This uncertainty can mean that project budget costs are unknown; on top of that, the attention required for risk analysis is higher compared to other models.

Software Vulnerability - SQL Injection

08 January 2017

In this article I create a simple form to demonstrate insecure interaction between a Java-based component
and the outside world, explaining why the form poses a vulnerability to the overall application.
Finally, I show how to properly secure the application.

A simple Java application is created with a login form that retrieves a username and password using
two JTextFields and stores them in the Derby database.

import java.sql.*;
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;


public class JavaAppInsecureSQL implements ActionListener {
    
    private static String dbURL = "jdbc:derby://localhost:1527/derbyDB;create=true;";
    private static String tableName = "appusers";
    // jdbc Connection
    private static Connection conn = null;
    private static Statement stmt = null;
    
    JPanel totalGUI = new JPanel();
    JPanel textPanel, panelForTextFields, completionPanel;
    JLabel titleLabel, usernameLabel, passwordLabel, userLabel, passLabel;
    JTextField usernameField, loginField;
    JButton loginButton;
    
    String usernameText;
    String passwordText;
    
    public JPanel createContentPane(){
        // create bottom panel as a base for pane content     
        totalGUI.setLayout(null);        
        titleLabel = new JLabel("Login Screen");
        titleLabel.setLocation(0,0);
        titleLabel.setSize(290, 30);
        titleLabel.setHorizontalAlignment(0);
        totalGUI.add(titleLabel);
        
        // Creation of a Panel to contain the JLabels
        textPanel = new JPanel();
        textPanel.setLayout(null);
        textPanel.setLocation(10, 35);
        textPanel.setSize(70, 80);
        totalGUI.add(textPanel);

         // Username Label
        usernameLabel = new JLabel("Username");
        usernameLabel.setLocation(0, 0);
        usernameLabel.setSize(70, 40);
        usernameLabel.setHorizontalAlignment(4);
        textPanel.add(usernameLabel);

        // Login Label
        passwordLabel = new JLabel("Password");
        passwordLabel.setLocation(0, 40);
        passwordLabel.setSize(70, 40);
        passwordLabel.setHorizontalAlignment(4);
        textPanel.add(passwordLabel);

        // TextFields Panel Container
        panelForTextFields = new JPanel();
        panelForTextFields.setLayout(null);
        panelForTextFields.setLocation(110, 40);
        panelForTextFields.setSize(100, 70);
        totalGUI.add(panelForTextFields);

        // Username Textfield
        usernameField = new JTextField(8);
        usernameField.setLocation(0, 0);
        usernameField.setSize(100, 30);
        panelForTextFields.add(usernameField);

        // Login Textfield
        loginField = new JTextField(8);
        loginField.setLocation(0, 40);
        loginField.setSize(100, 30);
        panelForTextFields.add(loginField);
        
        // Creation of a Panel to contain the completion JLabels
        completionPanel = new JPanel();
        completionPanel.setLayout(null);
        completionPanel.setLocation(240, 35);
        completionPanel.setSize(70, 80);
        totalGUI.add(completionPanel);

        // Username Label
        userLabel = new JLabel("Wrong");
        userLabel.setForeground(Color.red);
        userLabel.setLocation(0, 0);
        userLabel.setSize(70, 40);
        completionPanel.add(userLabel);

        // Login Label
        passLabel = new JLabel("Wrong");
        passLabel.setForeground(Color.red);
        passLabel.setLocation(0, 40);
        passLabel.setSize(70, 40);
        completionPanel.add(passLabel);

        // Button for Logging in
        loginButton = new JButton("Login");
        loginButton.setLocation(130, 120);
        loginButton.setSize(80, 30);
        loginButton.addActionListener(this);
        totalGUI.add(loginButton);

        totalGUI.setOpaque(true);    
        return totalGUI;  
    }
    
    public void actionPerformed(ActionEvent e) {

        if(e.getSource() == loginButton)
        {
            if(!(usernameField.getText().trim().isEmpty()))
            {
                usernameText = usernameField.getText().trim();
                userLabel.setForeground(Color.green);
                userLabel.setText("Correct!");
            }
            else
            {
                userLabel.setForeground(Color.red);
                userLabel.setText("Wrong!");
            }

            if(!(loginField.getText().trim().isEmpty()))
            {
                passwordText = loginField.getText().trim();
                passLabel.setForeground(Color.green);
                passLabel.setText("Correct!");
            }
            else
            {
                passLabel.setForeground(Color.red);
                passLabel.setText("Wrong!");
            }

            if((userLabel.getForeground() == Color.green) 
			&& (passLabel.getForeground() == Color.green))
            {
                
                try{
                    titleLabel.setText("Storing into database....");
                    // insert user information into database
                    insertUserInfor(usernameText, passwordText);
                    
                    titleLabel.setText("Information stored!");
                }
                catch(Exception ex){
                    ex.printStackTrace(); // getStackTrace() alone would discard the error
                }finally{
                    loginButton.setEnabled(false);
                    // shutdown database connections
                    shutdown();
                }
                
            }
        }
    }
    
    private static void createAndShowGUI() {

        JFrame.setDefaultLookAndFeelDecorated(true);
        JFrame frame = new JFrame(" Insecure SQL Java App ");

        JavaAppInsecureSQL demo = new JavaAppInsecureSQL();
        frame.setContentPane(demo.createContentPane());
        
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(310, 200);
        frame.setVisible(true);
    }
   
    public static void main(String[] args){
     
        //Event-dispatching thread:
        //creating and showing this application's GUI.
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                createAndShowGUI();
            }
        });
        
    }
    
    private static void createConnection()
    {
        try
        {
            Class.forName("org.apache.derby.jdbc.ClientDriver").newInstance();
            //Get a connection
            conn = DriverManager.getConnection(dbURL); 
            
        }
        catch (ClassNotFoundException | InstantiationException | IllegalAccessException | SQLException except)
        {
            except.printStackTrace();
        }
        
    }
    
    private static void insertUserInfor(String userName, String userPass)
    {
        try
        {
            createConnection();
            stmt = conn.createStatement();
            stmt.execute("insert into " + tableName + " values ('" +
                    userName + "', '" + userPass + "')");
            
            stmt.close();
        }
        catch (SQLException sqlExcept)
        {
            sqlExcept.printStackTrace();
        }
    }
    
   
    private static void shutdown()
    {
        try
        {
            if (stmt != null)
            {
                stmt.close();
            }
            if (conn != null)
            {
                DriverManager.getConnection(dbURL + ";shutdown=true");
                conn.close();
            }           
        }
        catch (SQLException sqlExcept)
        {
            sqlExcept.printStackTrace(); // Derby reports shutdown via an SQLException
        }

    }
}

Once the login button is pressed, the username and password are sent to be stored in the database.

Note: The simple login application does not take other software vulnerabilities into account.
We're only demonstrating the SQL injection vulnerability.

The application works great, but when we take a look under the hood we find that it is vulnerable
to SQL injection attacks: the SQL statements are not prepared, so user input is concatenated directly
into the query that is executed against the database.

Insecure Code


 private static void insertUserInfor(String userName, String userPass)
    {
        try
        {
            createConnection();
            stmt = conn.createStatement();
            stmt.execute("insert into " + tableName + " values ('" +
                    userName + "', '" + userPass + "')");
            
            stmt.close();
        }
        catch (SQLException sqlExcept)
        {
            sqlExcept.printStackTrace();
        }
    }

SQL injection attacks are among the most prevalent attacks against applications and can inflict severe
damage by exposing sensitive data. Injection attacks attempt to break into application databases by
injecting malicious input, often in the form of SQL statements. They are frequently injected through form
fields, as in the small application I created, but also through uploads, third-party APIs, configuration
files, input files, and so on.

Secured Code


The insecure code snippet above shows the application taking the username and password and placing them
directly into the SQL statement without validating or sanitizing the data. Essentially, the developer
trusts the user not to inject malicious code. The user should never be trusted. Below I demonstrate how
to properly prepare SQL statements in Java.

private static void insertUserInfor(String userName, String userPass)
    {
        try
        {
            // establish connection
            createConnection();
            // note the space before VALUES; without it the statement is malformed
            String qTxt = "INSERT INTO " + tableName + " VALUES (?, ?)";
            PreparedStatement prepStmt = conn.prepareStatement(qTxt);
            // bind user input to parameters instead of concatenating it
            prepStmt.setString(1, userName);
            prepStmt.setString(2, userPass);
            // execute the parameterized insert
            prepStmt.executeUpdate();

            prepStmt.close();
        }
        catch (SQLException sqlExcept)
        {
            sqlExcept.printStackTrace();
        }
    }

When SQL statements are prepared, information received from the user is bound to parameters in the
prepared statement rather than concatenated directly into the SQL query, eliminating the dynamic query.

In this demonstration, I have yet to mention another measure for mitigating SQL injection attacks,
which is to validate and sanitize data retrieved from the user. As we've mentioned, the application
cannot trust data from the outside world. Information must be validated for content, length, format,
and other factors before use.

Database Comparison: Oracle 12c vs. Amazon DynamoDB

17 December 2016

You have successfully applied for and landed one of your biggest contracts, one that expects you to design, manage, and scale a database for a medium-sized manufacturing and distribution company. Business requirements state that the database must be hosted in the cloud. After conducting research, you have narrowed the choice to two of the most popular databases: Amazon DynamoDB and Oracle 12c.

Dynamo is Amazon's managed NoSQL service, known for its strong consistency and predictable performance. Its design differs considerably from Oracle 12c, a relational database: instead of the relational model, Dynamo uses alternative models for data management, such as key-value pairs or document storage. Comparing health, storage, and network connectivity between Oracle and Dynamo was a challenge, since Dynamo is a managed service, meaning that Amazon Web Services handles the provisioning and running of the infrastructure for users.

On the other hand, Oracle 12c is a version of the Oracle Database, an object-relational database management system. Oracle 12c is a cloud-enabled database; the 'c' in 12c stands for 'cloud'. Its features include a multitenant option that helps users consolidate databases into private or public clouds, with the added capacity to share the same hardware, platform, and network, making it ideal for the distribution side of the company.

Regardless of which database is implemented for the business, whether Dynamo or Oracle 12c, a database system must have scalable and robust mechanisms for its basic functions, which include consistently storing, modifying, and retrieving data while efficiently handling failure detection, failure recovery, overload handling, and system monitoring, among others. Discussing every detail of every mechanism to compare the two databases would eat up a lot of paper and, frankly, miss the mark. Instead, the discussion will focus on three core system features of Oracle 12c and Dynamo and compare the differences in the conclusion.

Physical Storage Structures

Oracle

Unlike NoSQL ecosystems, relational databases such as Oracle 12c have to allocate space for tablespaces, which are physically stored in data files. Each tablespace consists of one or more data files, which conform to the operating system of the host where the database is running. The allocated disk space is formatted but contains no user data; as data grows, the space is used to allocate extents for segments. Another distinction is that in Oracle 12c every data file is either online or offline, and the administrator has the ability to determine when data files are available.

Another note is that Oracle 12c leans on control files, which serve multiple important roles in managing data files. They record the locations of the data files, the online redo log files, and the other files needed to open the database, and structural changes to the database are monitored and tracked in them. This metadata must be accessible even when the database is not open, and it is the control file's job to provide the checkpoint from which instance recovery begins when it is required.

To protect against data loss, Oracle 12c maintains online redo log files so that data not yet written to the data files can be recovered after an instance failure. It can do this because server processes write every transaction synchronously to the redo log buffer. The online redo log always contains the undo data for permanent objects, and unlike Dynamo, Oracle 12c lets the administrator configure the database to store all undo data.

Dynamo

Dynamo fundamentally differs from Oracle's storage system in the aim of its requirements. Dynamo was created with the mindset that applications using it must always be "writeable"; to achieve this, its data store mechanisms accept updates regardless of server failures or concurrent writes. This is a common requirement for many NoSQL ecosystems, including of course Amazon's own applications, which brings us to another distinction: Dynamo is built for infrastructure within a single administrative domain where all nodes are assumed to be trusted.

Two of the requirements that make Dynamo unique are its ability to scale incrementally and its use of vector clocks. To scale incrementally, its storage structures dynamically partition data over a set of nodes in the system. In particular, Dynamo's partitioning relies on consistent hashing to distribute the load across multiple storage hosts, which allows it to scale extremely fast in a short amount of time. Dynamo uses vector clocks to capture causality between different versions of the same object; a vector clock is essentially a list of (node, counter) pairs, and one clock is associated with every version of every object.
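
To show the flavor of this technique, here is a minimal consistent-hashing sketch in Java (the class name and hash choice are my own, and production Dynamo also places multiple virtual nodes per host, which this sketch omits). Nodes and keys hash onto the same ring, and each key belongs to the first node at or after its hash.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashRing {
    // Ring position -> node name, kept sorted so we can walk clockwise.
    private final SortedMap<Long, String> ring = new TreeMap<>();

    public void addNode(String node) {
        ring.put(hash(node), node);
    }

    public void removeNode(String node) {
        ring.remove(hash(node));
    }

    // The key is owned by the first node at or after its hash,
    // wrapping around to the start of the ring if necessary.
    public String nodeFor(String key) {
        if (ring.isEmpty()) {
            throw new IllegalStateException("no nodes in the ring");
        }
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    // Pack the first 8 bytes of an MD5 digest into a long ring position.
    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (d[i] & 0xFF);
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // MD5 is always present in the JDK
        }
    }
}

Adding a node then only reassigns the keys that fall between the new node and its predecessor on the ring, which is what lets the system scale incrementally.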

Logical Storage Structures

Oracle

Data in Oracle is stored in data blocks, extents, segments, and tablespaces, and these storage units nest inside one another. For example, a segment is a set of extents allocated for a specific database object, such as a table, and each segment belongs to one tablespace.

As we will find later, Dynamo's partition settings increase dynamically depending on throughput. Oracle has a similar mechanism for keeping track of and allocating additional storage in a tablespace, or reclaiming storage when an object no longer requires it. To achieve this, Oracle sets aside bitmaps in the tablespace, and the Automatic Segment Space Management (ASSM) method uses these bitmaps to manage space in the tablespace.

Each Oracle user is assigned a default permanent tablespace. Tablespaces control disk space allocation for data, can be taken online and offline, and support backup and recovery as well as the export and import of application data. A temporary tablespace is also deployed while a permanent tablespace is operational; its data lasts only for the duration of a session, and it serves to improve the efficiency of space management operations.

Dynamo

Dynamo is a fully managed document database service running in the AWS cloud that provides very fast and predictable performance. The database is optimized to retrieve and store semi-structured data as documents, formatted in JSON or XML.

Data in Dynamo is stored in partitions, which are essentially allocated storage for a table, backed by solid-state drives and automatically replicated across multiple Availability Zones in an AWS Region. When a table is created, the database automatically allocates enough partitions for the table to handle the user's provisioned throughput requirements. And since Dynamo is designed to scale very fast, increasing a table's provisioned throughput settings beyond what currently exists can allocate additional partitions at a moment's notice.

Moving forward with our discussion of logical storage structures, it is important to understand how data is handled under Dynamo's key-value interface. A table has either a partition key only, or a partition key and a sort key, and Dynamo handles the two slightly differently to optimize efficiency. In a partition-key-only table, the key must be referenced when writing and reading items. A table with a partition key and sort key works similarly, except that items are grouped and ordered by the sort key value.
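
The following purely illustrative in-memory sketch (my own names, not the DynamoDB API) mimics the second table shape: items live under a partition key, and within a partition they are kept in sort-key order. A real Dynamo table additionally hashes the partition key to pick the physical partition; the sketch skips that to keep the shape visible.

import java.util.Collections;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

public class PartitionedTable {
    // Partition key -> (sort key -> item), with sort keys kept ordered.
    private final Map<String, TreeMap<String, String>> partitions =
            new ConcurrentHashMap<>();

    public void put(String partitionKey, String sortKey, String item) {
        partitions.computeIfAbsent(partitionKey, k -> new TreeMap<>())
                  .put(sortKey, item);
    }

    // Reads always reference the partition key...
    public String get(String partitionKey, String sortKey) {
        TreeMap<String, String> partition = partitions.get(partitionKey);
        return partition == null ? null : partition.get(sortKey);
    }

    // ...and a partition-plus-sort-key table can also return a whole
    // partition with its items in sort-key order.
    public Iterable<String> queryPartition(String partitionKey) {
        TreeMap<String, String> partition = partitions.get(partitionKey);
        return partition == null ? Collections.<String>emptyList()
                                 : partition.values();
    }
}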

Data processing

Oracle

Oracle, being a relational database management system, adheres to Structured Query Language, which provides its interface. All operations on data in Oracle are performed using SQL statements; for instance, SQL statements are used to create tables and to query and modify data in tables. A procedural extension named PL/SQL, embedded in the Oracle database, allows developers to use all Oracle Database SQL statements, functions, and data types. What is unique about this procedural extension is its ability to provide control over the flow of a SQL program, variables, and the deployment of error-handling procedures.
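
Every interaction follows this SQL pattern. As a minimal JDBC sketch (the connection URL, credentials, and table are assumptions for illustration, and the Oracle JDBC driver is assumed to be on the classpath), the same interface is reachable from Java with a parameterized query:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OracleQueryExample {
    public static void main(String[] args) throws SQLException {
        // Hypothetical service, schema, and credentials.
        String url = "jdbc:oracle:thin:@//localhost:1521/pdborcl";
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT part_no, description FROM parts WHERE warehouse_id = ?")) {
            ps.setInt(1, 42); // bind the parameter rather than concatenating it
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("part_no") + " "
                            + rs.getString("description"));
                }
            }
        }
    }
}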

Dynamo

As noted earlier, Dynamo relies on a provisioned throughput model to process data. A user specifies expected reads and writes, declaring the number of input/output operations a table is expected to achieve, and can also declare consistency characteristics for each read request within an application. The number of operations needed can be determined from item size, expected read and write request rates, the presence of local or global secondary indexes, and so on.

Oracle 12c has a domain-specific language for querying data, whereas Dynamo provides access through a simple application programming interface to create, read, update, and delete data. Dynamo also has batch operations for reading and writing multiple items across multiple tables. Another available feature is atomic item and attribute operations, which allow updating, adding, or deleting an item only if a certain value is or is not present.

Dynamo's impressive capability of taking in large amounts of data at a given notice means that it should also be able to read millions of items efficiently. To achieve this, Dynamo and many other NoSQL databases support scan capabilities to address large-scale analytical processing; Dynamo's Query API, filters, and parallel scanning can be applied to narrow down the result set.

Metrics Comparison

Data model

Oracle 12c

Oracle’s relational structure organizes data into tables, which consist of rows and columns. The schema defines the tables, columns, indexes, relationships between tables, and other database elements.

Amazon DynamoDB

A partition key is used to retrieve values, column sets, or semi-structured JSON, XML or other documents containing related item attributes.


Performance

Oracle 12c

Performance is generally dependent on the disk subsystem. Optimization of queries, indexes, and table structure is required to achieve peak performance.

Amazon DynamoDB

The performance is generally a function of the underlying hardware cluster size, network latency, and the calling application.


Scale

Oracle 12c

Oracle's structure makes it easiest to scale up with faster hardware.

Amazon DynamoDB

Dynamo was designed with the purpose of scaling out using distributed clusters of low-cost hardware to increase throughput without increasing latency.


APIs

Oracle 12c

Structured Query Language (SQL) is used to store and retrieve data. Queries are parsed and executed by the relational database management system (RDBMS).

Amazon DynamoDB

It is easy to retrieve and store data structures with object-based APIs. Partition keys let applications look up key-value pairs, column sets, or semi-structured documents.


Data Storage

Oracle 12c

Automatic Segment Space Management (ASSM) is used in conjunction with bitmaps to manage space in tablespaces.

Amazon DynamoDB

Data is stored in partitions, automatically replicated across Availability Zones.


Conclusion

Amazon Dynamo and Oracle 12c are both excellent databases, capable of scaling extremely fast in a short amount of time. Deciding which database to implement for a business ultimately depends on the type of data you will be storing and the volume of data, among other factors; each database has its strengths and weaknesses. While conducting research, I found that a user willing to be hands-on might consider working with Oracle 12c. With Oracle, you have to provision your own infrastructure, for example allocating the disk space used to create room for segments, and you control whether data files are online or offline, determining when they are available. Dynamo removes you from having to think about how to set up the infrastructure because the database's ecosystem is handled by Amazon Web Services.

Each database handles instance failure differently as well. I found that Oracle automatically maintains processes that write every transaction synchronously to the redo log buffer so that data not yet written to data files can be retrieved. Dynamo was designed with the expectation that failures are going to occur, so the database constantly anticipates these anomalies; as a result, its data store mechanisms accept updates regardless of server failure. So in a sense the two databases are similar in that both have the capability to handle instance failure.

Another similarity to point out is that both databases are capable of scaling extremely fast in a short amount of time, though each accomplishes this in its own unique way. We know that data is stored in partitions in Dynamo, so when a large amount of data arrives, partition settings increase dynamically. For Oracle, Automatic Segment Space Management (ASSM) is used in conjunction with bitmaps to manage space in tablespaces. In a sense, ASSM functions similarly to partition settings in that storage is able to fluctuate automatically to accommodate incoming flows of data.

In conclusion, determining which database to use boils down to your business requirements and your expectations of how data will be stored, modified, and retrieved. I would be remiss if I did not end the discussion by mentioning one obvious distinction: Oracle is a relational database management system that adheres to Structured Query Language, while Dynamo, as we have touched on quite a bit, is a NoSQL database relying on a provisioned throughput model to process data via documents such as JSON and XML.

The optimistic view of 2016

04 January 2016

At this time each year I take a moment to reflect on the year that was. The year 2015 began with me knowing that I was embarking on the next phase of my life, given that I had just graduated from college in December 2014. I had somewhat of an idea of how the year would turn out; for example, I had secured an internship with MenEngage, so I knew I would be working in D.C., though I didn't quite know how it would pan out, and I wasn't totally jumping for joy about the opportunity, simply glad that I wouldn't be home searching for jobs. I also knew that I would be visiting Africa for a period of time, and that it would be my first time visiting the continent in roughly 13 years. My visit to Africa was what I looked forward to most at the start of the year, because I somehow knew it would change the course of my life (my thinking), and indeed my life was forever changed. This brought with it a lot of turbulence, both mentally and professionally. The trip allowed me to rethink how I thought about careers and to look deep inside myself to determine what I truly wanted to do with my life, understanding what path would help me reach my burning desires. So, as you can imagine, there was a lot of uncertainty and confusion during this time. I beat myself up inside for not foreseeing and actually valuing my education before getting to this point, and I vowed never to take life and opportunities for granted.

The New Year has approached in quick fashion, bringing with it an abundance of optimism. There is no doubt that 2016 is going to be special. Special in that not only will it be my second year in the real world, but this year I will consume more information than I consumed last year and the year before. For starters, I will continue with coursework at UMUC; in fact, I will be taking 7 courses in Software Development and Security in total. On top of that, during the summer I will aim to secure an internship to supplement my education, gain real work experience, and make connections in the software security field.

Secondly, in 2016 I will continue assisting Mr. Augustine Guma with his start-up, Gumax International. This year Gumax International is expected to open a brand new restaurant in Woodbridge called Gumax Spicy Pies, which is projected to generate a lot of revenue for the company. Before 2015 ended, customers were already placing orders for the products, so all these positive signs imply that the restaurant will be welcomingly busy. Another thing to look forward to is that Gumax International was picked as one of the vendors at the Super Bowl in San Francisco this coming February; I hope Guma picks me to attend such a massive event. Lastly, I will experience firsthand what it is like to work as a staff accountant during tax season. During this period my accountancy skills will truly be enhanced, expanding my brain further.

I cannot write a first blog post without mentioning the people in my life who make my life happen and make it enjoyable. There is one thing I have learned in this world: no matter how successful someone is, no one goes through life alone. Each year, as life happens, people enter and leave your life; some enhance it, making you a better person emotionally, professionally, or economically, and some enter your life only to pull you back. On August 16th, 2015, the girl of my dreams entered my life. She not only makes my heart sing with joy each time I see her, but she also stimulates my brain and my thinking each time we talk. I look forward to sharing my journey with her this year. My family also plays a massive role; they are always supportive, always patient with me, and always looking out for my best interest even when I cannot see it at the time. I love all of them and cannot wait to make them proud someday.


Older posts are available in the archive.