Friday 12 July 2019

Six Essentials for Your Cloud Security Program

In traditional on-premises systems, organizations are responsible for securing everything - from the physical premises to the hardware, operating system, network, and applications.

In cloud deployments, it doesn’t work this way. In public cloud - both infrastructure as a service and platform as a service - security responsibility is shared between the CSP and the customer (you). The provider owns the security of the physical layer and infrastructure aspects of the cloud, as well as the aspects of the compute, storage, database, network, and application services it offers. You, the customer, own the security configuration of your operating systems, network traffic, and firewall settings - plus all security on your own systems that are used to connect to the cloud.



With a broad understanding of the shared responsibility model in mind, let’s review six cloud security essentials that must always be addressed.

Classify apps and data


Think about which applications and data you have that are critical to running your business. Start your security efforts here. Which apps and data would cause executive leadership, stockholders, or customers to abandon ship if breached? What data, if leaked, could cripple your ability to operate or compete effectively? What data would cause regulators to take notice and perhaps levy fines or sanctions? Highly coveted business data and government-regulated data should be considered critical and protected accordingly.

Keep a close eye on application security


Attackers frequently target vulnerabilities in your web applications. To ensure your applications are free from software vulnerabilities, you need to actively search for flaws that create security risks. If the applications are open source or off-the-shelf, make sure to patch regularly and patch critical security flaws immediately. When building your own applications, make sure your developers are trained in secure coding practices and continuously test the apps for potential flaws. A good place to look for guidance on how to start an application security program is the Open Web Application Security Project (OWASP).
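
As a starting point, even a small automated check can surface common web application hardening gaps. Below is a minimal sketch in Python (the target URLs are placeholders for your own applications) that flags responses missing widely recommended security headers; it illustrates the idea of continuously testing your apps and is no substitute for a full OWASP-style program.

```python
# Minimal sketch: flag app endpoints missing common security response headers.
# The URLs below are placeholders for your own applications.
import requests

EXPECTED_HEADERS = [
    "Strict-Transport-Security",   # enforce HTTPS
    "X-Content-Type-Options",      # block MIME sniffing
    "Content-Security-Policy",     # restrict script/content sources
    "X-Frame-Options",             # mitigate clickjacking
]

def check_security_headers(url):
    """Return the list of expected headers missing from a URL's response."""
    response = requests.get(url, timeout=10)
    return [h for h in EXPECTED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    for url in ["https://app.example.com", "https://api.example.com"]:
        missing = check_security_headers(url)
        if missing:
            print(f"{url} is missing: {', '.join(missing)}")
        else:
            print(f"{url} has all expected headers")
```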

Get user identities and access under control


Put processes in place to manage your user identities. This means knowing who your users are, what job roles they have, and which applications and resources they should be able to access. It’s vital to limit access to only those people who have a legitimate need for those resources. When people’s roles change, change their access. If somebody leaves the organization, for whatever reason, revoke their access. This is one of the most important things you can do to maintain a good security posture, yet it’s an area that is so frequently overlooked.
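
To make this concrete, here is a minimal sketch of an automated access review, assuming an AWS environment with boto3 credentials configured: it flags IAM users whose access keys haven’t been used in 90 days so someone can decide whether that access should be revoked. The 90-day window is an assumption; adjust it to your own policy.

```python
# Sketch: flag AWS IAM users whose access keys haven't been used in 90 days,
# as a starting point for an access review. Assumes AWS credentials are
# configured for boto3; adapt to your own identity provider as needed.
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            last_used = iam.get_access_key_last_used(
                AccessKeyId=key["AccessKeyId"]
            )["AccessKeyLastUsed"].get("LastUsedDate")
            if last_used is None or last_used < cutoff:
                print(f"Review access for {user['UserName']}: "
                      f"key {key['AccessKeyId']} last used {last_used}")
```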

Establish and manage policy and configuration


It’s essential to establish policies for security checks, settings, and configuration levels for all systems, workloads, and apps. As with vulnerability scans, it’s important first to find systems that are outdated, and then to verify that systems are configured and running in compliance with policy.
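
As an illustration of checking configuration against policy, the sketch below tests one example rule - "all EBS volumes must be encrypted" - through the AWS API using boto3. The rule itself is just a stand-in for whatever policies your organization actually defines.

```python
# Sketch: check one example configuration policy - "all EBS volumes must be
# encrypted" - via the AWS API. Substitute your organization's own policies.
import boto3

ec2 = boto3.client("ec2")
unencrypted = []

for page in ec2.get_paginator("describe_volumes").paginate():
    for volume in page["Volumes"]:
        if not volume["Encrypted"]:
            unencrypted.append(volume["VolumeId"])

if unencrypted:
    print("Out of compliance (unencrypted volumes):", ", ".join(unencrypted))
else:
    print("All volumes comply with the encryption policy")
```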

If it can be automated, automate it


If there is a security task that can be automated through scripts or cost-effectively offloaded to a security services provider, it should be done. This e-book offers some useful tips. If you’re a smaller organization, scale the recommendations down to your size, but the precepts remain the same.

Be prepared to respond


Of course, staying on constant lookout for security shortcomings in your organization is essential, but many organizations, regrettably, don’t bother to think about what comes next: remediation. Once you start finding security vulnerabilities, what will the organization do to remediate them? When you find violations of policy compliance, how will you quickly close the gap? Make sure to think these through and plan ahead.
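
Planning remediation ahead of time can be as simple as having small, pre-approved scripts ready to run. The sketch below shows one such step, assuming an AWS environment and a placeholder bucket name: applying an S3 public access block to a bucket you’ve found exposed.

```python
# Sketch: one pre-planned remediation step - apply an S3 public access block
# to a bucket found to be exposed. The bucket name is a placeholder.
import boto3

def block_public_access(bucket_name):
    """Turn on all four S3 public access block settings for a bucket."""
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    print(f"Public access blocked on {bucket_name}")

if __name__ == "__main__":
    block_public_access("example-exposed-bucket")
```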

Wednesday 10 July 2019

The Greatest Risk Is Not Doing a Risk Assessment

I had a fascinating discussion with Dutch members of Parliament about cybersecurity. The politicians wanted to know my thoughts on 5G security and what I thought about a cybersecurity tender released by an association of 380 government municipalities.

The tender aimed to procure security products such as firewalls, endpoint protection systems, and CASB (cloud access security broker) products, possibly from three different security vendors.

I told them that this is the wrong way to approach a cybersecurity tender. Protection against cyber threats isn’t just about buying siloed point products that provide discrete solutions to single problems. Nor does it come down to simply replacing some products with a slightly cheaper version.

Effective cybersecurity needs a holistic strategy that starts with developing a risk assessment.



The first task of a risk assessment is to identify the crown jewels of the business - the key assets and data that must be best protected. This might be customers’ intellectual property, credit card details, or personal data. It might be private medical information or sensitive industrial data.

The next step is to assess the risks of cyberattacks that threaten those important assets. A practical way to develop a risk assessment is to gather ten to fifteen employees from departments across your organization into a room and brainstorm the cybersecurity risks to the business. At the same time, the employees should consider how likely these risks are to materialize.

When I was chief information security officer (CISO) at a web hosting company, we produced a useful risk assessment through a series of brainstorming sessions in which we assigned a value to each risk. The likelihood of a risk was scored from 1 to 5, one being unlikely and five being the most likely. We then evaluated the impact of the risk occurring, again from 1 to 5. The risk value was calculated simply by multiplying the two figures together.
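
To make the scoring concrete, here is a small worked example of that likelihood-times-impact calculation in Python; the entries are illustrative and the 20-point level mirrors the one discussed below.

```python
# Worked example of the risk scoring described above:
# risk value = likelihood (1-5) x impact (1-5). Entries are illustrative.
risks = [
    {"risk": "Departing employee keeps a working password", "likelihood": 5, "impact": 5},
    {"risk": "Power loss in the data center limits data availability", "likelihood": 3, "impact": 4},
    {"risk": "Misconfigured system leaves data unprotected", "likelihood": 4, "impact": 5},
]

for r in risks:
    r["value"] = r["likelihood"] * r["impact"]

# Rank highest risk first so the board can decide where to commit resources.
for r in sorted(risks, key=lambda r: r["value"], reverse=True):
    flag = "  <- above the action threshold" if r["value"] > 20 else ""
    print(f"{r['value']:>2}  {r['risk']}{flag}")
```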

Over the course of several workshops, we identified a total of 225 cybersecurity risks. Some of them had a risk value of over 20 - they were likely to happen and could badly affect the organization. There were also less urgent risks.

The threats we identified included things like an employee leaving the organization and taking their password with them so that they could access the network whenever they wanted. Or the possibility of a loss of power in a data center that restricted the availability of data. Another risk might be a misconfiguration of a system resulting in data being left unprotected.

Once those risk values have been calculated, it is up to the board of directors to decide what resources they are prepared to commit to countering these threats. That may mean taking measures against the top 15 threats, with less attention paid to less dangerous ones.

The beauty of creating risk values is that it enables the board to take the decisions instead of the CISO. Managing risk is, after all, one of the board’s core responsibilities.

We judged the risk of an employee departing with login details to be extremely high, so we put a measure in place to ensure that any departing employee had to visit the IT department first to have their password cancelled. They couldn’t be signed off by HR without producing a document from IT showing they had done this. Although this introduces paperwork into the system, it helps reduce the threat of hacking. This is the kind of trade-off that every company’s board of directors must make.

Another risk-reducing solution might be enforcing two-factor authentication for access to sensitive data. This has a cost and may slow things down. Again, it is the job of the board of directors to weigh the risks and decide whether the solutions are warranted.

Regrettably, in today’s fast-moving world, there are still too few organizations that do a decent risk assessment for their cybersecurity. Though, to be fair, the idea is gradually catching on.

The way cybersecurity has evolved is by taking piecemeal steps to tackle specific problems as they arose. Over the last ten years, this has ballooned so much that the average organization now has 34 point security products in place, each one creating its own little silo. Consequently, CISOs seek individual replacements for their firewall or antivirus software. But this just threatens to further complicate their cybersecurity framework.

Only a well worked out risk assessment allows all concerned - from the CISO and IT staff to the board of directors - to get a clear vision of what’s at stake when it comes to protecting their organization from a world of evolving threats.

Hopefully, the municipalities of the Netherlands - and every other organization - will realize that the greatest risk they face is failing to perform a risk assessment.

Monday 8 July 2019

Set It and Forget It? Not for Cloud Security

The public cloud market is outpacing almost every other segment of the IT industry. According to a report from research firm Forrester, the public cloud market will double from its current size to reach $236 billion by the year 2020. But that doesn’t mean there aren’t big problems when it comes to cloud adoption - especially regarding security and regulatory compliance concerns.

According to the 2018 Cloud Security Report, while adoption of public cloud computing continues to surge, security concerns are showing no signs of abating, as 91% of organizations today are concerned about cloud security. These security concerns are led by protecting against data loss and leakage (67 percent), threats to data privacy (61 percent), and breaches of confidentiality (53 percent) - all up compared to the previous year.

There’s also the other extreme: those who view the public cloud as inherently secure - like some kind of Ronco rotisserie oven, where the security mindset and approach is “set it and forget it.”

Well, neither of these views is accurate. Cloud security is neither a contradiction in terms nor a security cure-all. That said, there are distinct differences and challenges, which follow:

The abstracted nature of cloud computing


This abstraction and lack of visibility is a key challenge, especially for those who are new to cloud security and don’t always understand the responsibility breakdown - in other words, where their security responsibility ends and where that of the cloud platform/provider begins (or vice versa). Moving to the cloud requires a shift in mindset. Leave the data center concepts behind and accept the loss of natural visibility. (Remember, though, there are tools like RedLock available to provide the needed level of visibility to secure your business’ multi-cloud adoption.)



Compliance in cloud vs. on-premises


There’s a big difference between what policy and regulatory compliance looks like in public cloud systems versus what it looks like in cloud software services and the data center. The cloud is dynamic, which makes traditional change control and configuration management efforts deployed on premises very difficult. Add the fact that no compliance standards like PCI, HIPAA, GDPR and others were written for cloud environments. This means that someone must manually do the hard work of translating abstract requirements into specific technical controls for every cloud service. Considering the sheer number of features that CSPs add every year, the time and resources required to keep this current grow exponentially.

The Center for Internet Security helps by mapping security controls and compliance requirements to whichever services are running in the cloud. Still, it’s crucial that organizations implement tools or processes to provide details and context around what’s compliant and what’s not when it comes to regulatory compliance and security compliance controls.

Managing data according to its classification


There are many who contend that critical data should never be put in the cloud. Regardless of one’s feelings on the subject, critical data is probably going to end up in the cloud (if it is not already there). In most of the surveys I see, about half of respondents are putting data that is critical or sensitive to their enterprise in cloud systems. In fact, many enterprises are using cloud providers to hold financial and health-related data. There are serious questions about how to manage this data in the cloud, as well as how to manage SaaS and other cloud providers who handle sensitive data.

The fact is, it has become fiscally attractive for organizations to use the cloud to store bulk unstructured data for backup, machine learning, data lakes, and so on. But, most of the time, it’s impossible for enterprises to know which kinds of data are stored in these environments, making data classification very important. It’s one thing to expose a data set containing nonpublic information, say a marketing website’s content hosted in an S3 bucket, for instance.  An organization can recover relatively unscathed. It’s quite another to expose a bucket containing names and account numbers for your customers. The negative backlash could be too much to overcome.
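
One practical way to keep classification from slipping is to make it machine-checkable. The sketch below, assuming an AWS environment and an organization-defined "data-classification" tag convention (the tag key is an assumption, not an AWS requirement), lists S3 buckets that carry no classification at all - exactly the buckets whose contents nobody can vouch for.

```python
# Sketch: enforce a simple data classification convention - every S3 bucket
# must carry a "data-classification" tag so you know what kind of data it
# holds. The tag key is an assumed convention, not an AWS requirement.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        tags = s3.get_bucket_tagging(Bucket=name)["TagSet"]
        labels = {t["Key"]: t["Value"] for t in tags}
    except ClientError:
        labels = {}   # bucket has no tags at all
    classification = labels.get("data-classification")
    if classification is None:
        print(f"{name}: no data classification - investigate what it stores")
    else:
        print(f"{name}: classified as {classification}")
```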

The continuous nature of the cloud


The cloud is always on. And in contrast to the controlled, scheduled, top-down regimented updates of the past, cloud updates are born from continuously delivered software pipelines in organizations where there is a significant push for agility and continuous improvement.  This requires DevOps teams to build tools and services that support faster deployment, as well as to more quickly gather system data and feedback so they can rapidly iterate and improve.

This drive toward continuous computing and continuous software enhancements should play well for security. When it’s approached properly, enterprises can gather continuous data about the state of their cloud security posture and the kinds of security controls and compliance rules in place; plus, identity and encryption policies can be viewed in real time to track how the whole of the security strategy is working in the cloud. And for many of the challenges listed above, continuous real-time monitoring is an absolute necessity.  If you’d like, you can give continuous monitoring a try in your own cloud environment.
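
As a rough illustration of what continuous monitoring means in practice, the sketch below polls AWS security group rules on an interval and reports drift from the last observed state. A production setup would typically be event-driven rather than a polling loop; the interval and scope here are illustrative.

```python
# Sketch of continuous monitoring: poll security group rules on an interval
# and report any drift from the last observed state. Interval and scope are
# illustrative; a real deployment would favor event-driven services.
import time
import boto3

def snapshot_security_groups(ec2):
    """Return {group_id: ingress_rule_list} for every security group."""
    groups = ec2.describe_security_groups()["SecurityGroups"]
    return {g["GroupId"]: g["IpPermissions"] for g in groups}

def monitor(interval_seconds=300):
    ec2 = boto3.client("ec2")
    previous = snapshot_security_groups(ec2)
    while True:
        time.sleep(interval_seconds)
        current = snapshot_security_groups(ec2)
        for group_id, rules in current.items():
            if rules != previous.get(group_id):
                print(f"Change detected in {group_id} - re-evaluate compliance")
        previous = current

if __name__ == "__main__":
    monitor()
```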

Saturday 6 July 2019

How Are You Tackling Cloud Compliance?

In the race to the cloud, I’ve observed a disturbing trend. Daily, I talk to organizations that have moved production workloads to cloud IaaS providers but haven’t yet addressed how they will manage, measure and report on regulatory compliance controls. Among all the concerns over whether public clouds are safe, some organizations missed a critical question:

Can we demonstrate compliance without overworking our teams along the way?


It isn’t surprising that it has taken an impending PCI or SOC 2 audit for SecOps and risk and compliance teams to have a reckoning about how they will measure the compliance of their cloud infrastructure. Never have so many people in an organization been able to make changes to the infrastructure that could potentially go unchecked. To further complicate things, traditional tools that help with compliance in the data center can’t be used in the API-centric world of the cloud. Without tools designed for the cloud, teams have to navigate tedious, manual processes to produce evidence of technical compliance controls across dynamic and fast-changing cloud infrastructure. Sure, you can prove that at some point you passed the controls, but what was the situation 24 hours before, or days after? Point-in-time compliance just doesn’t work anymore.



With stories of cyber risk, cybercrime, hackers and breaches topping our news feeds every day, organizations need to be able to demonstrate an ongoing practice of managing security. Just as DevOps teams have made “continuous delivery” and “continuous innovation” part of everyday IT language, “continuous security” and “continuous compliance” need to become just as frequent discussion topics.

The good news is that, unlike managing compliance in traditional data centers, modern infrastructure gives us a way to address security and compliance programmatically and automatically. The APIs we have available enable a new era of security automation. Using the APIs, you can access metadata about your infrastructure and continuously monitor and measure whether the changes that occur are introducing new risks into your environment. The introduction of technology specifically designed to help streamline and automate the process of security assessment and remediation for the cloud has advanced how organizations manage their security posture and compliance processes.
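
Here is a minimal sketch of that idea, assuming an AWS environment: one common audit control - "CloudTrail logging is enabled" - expressed as code you can run as often as you like, rather than evidence gathered once a year. The same pattern extends to other controls.

```python
# Sketch: continuously verifiable evidence for one common audit control -
# "CloudTrail logging is enabled." Extend the same pattern to other controls.
import boto3

cloudtrail = boto3.client("cloudtrail")
trails = cloudtrail.describe_trails()["trailList"]

if not trails:
    print("FAIL: no CloudTrail trails configured")
for trail in trails:
    status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
    state = "PASS" if status["IsLogging"] else "FAIL"
    print(f"{state}: trail {trail['Name']} logging={status['IsLogging']}")
```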

Using Automation to Manage Compliance


For DevOps teams, using automation to manage security means they can also manage compliance throughout the entire development lifecycle, instead of accumulating a backlog of compliance debt that requires remediation before delivery. The cloud has also allowed DevOps to codify both security and compliance, which reduces risk by ensuring best practices are followed and that changes to infrastructure and the cloud environment adhere to the organization’s security policies.

Automation of compliance also enables teams to streamline the process of documenting and certifying the accounts, services and workloads in the cloud when the auditors come knocking. This automation can help you create an abstraction layer that shields your operations and development teams from disruption and distraction, which can otherwise have a significant negative effect on your timelines and bottom line. With the right cloud security tools in place, you can provide auditors read-only access to compliance reports on demand, eliminating the need for team members to be in the middle of those requests.

So, while your senior management may ask whether a cloud provider is FISMA-, HIPAA- or PCI-compliant, you need to raise another issue: how will your organization demonstrate compliance while running in one or more public clouds? You need to be sure you can get executive support to add new tools to your arsenal that will help your team manage, assess and report on security and compliance without stopping innovation or creating burdensome work for your development and operations teams.

Thursday 4 July 2019

Four Cloud Security Concerns (and How to Address Them)

The cloud can be overwhelming. In contrast to the structured and disciplined rigor of old-school, waterfall, data-center-centric application development, there’s code being deployed in a nearly continuous fashion. Traditional servers are history. Penetration tests are so outdated by the time they’re done that CISOs and their teams are left wondering whether they really gained anything from the exercise.

I consistently speak with enterprises that are either starting or accelerating their move from traditional on-premises infrastructure to the cloud. They anticipate benefits, including increased agility, lower cost, flexibility, and ease of use. But along with this transition come new security concerns and a little bit of fear to top it off. They’ve heard the stories from their colleagues. Many of the security practices and tools formerly relied upon have become trivialized, like traditional AV endpoint offerings and network scanning, while API-centric security is quickly gaining traction. Today’s cloud security practices are a big shift from how we’ve been managing security for the previous three decades.

However, almost every organization recognizes the need to change and modernize its security policies to continue to achieve corporate goals while taking advantage of everything the cloud can offer. Security, as we know it, can be either the ultimate accelerator or the biggest blocker of cloud adoption and technical innovation.

Many security and development professionals are struggling to find the right cloud security approach to fit their modern IT practices. They worry most about the lack of control and visibility that comes with public cloud. They also don’t want to risk their organization falling behind competitors because they’ve slowed or blocked the adoption of cloud or other closely related emerging technologies such as Docker and Kubernetes.



When it comes to cloud security today, there are many issues that organizations are trying to work through. Here are a few I hear the most, and how I would recommend addressing them:

1) Viewing the cloud as just another product


You cannot assess your cloud security today and assume your assessment will hold true tomorrow. Honestly, it probably won’t hold true an hour from now. The cloud is living, breathing, and rapidly changing. Security in this constantly changing environment must be continuous, or it won’t work. Traditional security approaches were not built to fit the rapidly changing, elastic infrastructure of the cloud. As attacks become increasingly automated, you need to adopt new security techniques and tools to operate effectively in this new ecosystem. Terraform and Ansible are both great options for automating your security stack. Here are a few options to consider.

2) Understanding that traditional scanning just won’t do


Traditional data center security depends on being deployed within an application or operating system, or on traditional network-based IP scanning techniques. In the cloud, this approach doesn’t work. Users run application stacks on abstracted services and PaaS layers or leverage API-driven services that render conventional security approaches ineffective. Cloud environments are so fundamentally different from their static, on-premises counterparts that they require an entirely new way of administering security practices. This means adopting new cloud security technologies that provide deep visibility by leveraging a combination of cloud provider APIs and integrations with other third-party tools. Learn how to get visibility and context for your cloud deployments.
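
As a small illustration of API-driven visibility, the sketch below (assuming an AWS environment and boto3 credentials) enumerates EC2 instances across every region straight from the provider’s API, with no network scanning involved.

```python
# Sketch: API-driven visibility instead of network scanning - enumerate
# EC2 instances in every region directly from the provider's API.
import boto3

regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                print(f"{region}  {instance['InstanceId']}  "
                      f"{instance['State']['Name']}  "
                      f"{instance.get('PrivateIpAddress', '-')}")
```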

3) Differentiating real security issues from “noise”


Teams working in the cloud benefit from speed and acceleration, but it’s important to understand how the approach to security must be vastly different. A major challenge is discerning real vulnerabilities from infrastructure “noise.” All this change and noise make manual inspection of the infrastructure too slow to be effective. The API-centric cloud world requires a different way for security teams to protect their environments, but not all cloud and IT teams fully understand these security nuances. Security automation is one way to overcome the knowledge and skills shortfall that exists in many development and IT shops.  Learn how to better automate and enable your SOC.
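
Triage logic doesn’t have to be elaborate to cut noise. The sketch below is a deliberately simple illustration - the findings and the scoring rule are invented - showing how severity and internet exposure can be combined to decide what gets escalated and what gets suppressed.

```python
# Sketch: automated triage to separate real issues from noise. The findings
# and the scoring rule are invented for illustration.
findings = [
    {"id": "F-1", "severity": "low",      "resource": "dev-vm-12",  "internet_exposed": False},
    {"id": "F-2", "severity": "critical", "resource": "prod-db-01", "internet_exposed": True},
    {"id": "F-3", "severity": "medium",   "resource": "prod-api",   "internet_exposed": True},
]

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def needs_attention(finding):
    """Escalate anything high/critical, or medium if the resource is internet facing."""
    rank = SEVERITY_RANK[finding["severity"]]
    return rank >= 3 or (rank == 2 and finding["internet_exposed"])

for finding in sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]], reverse=True):
    if needs_attention(finding):
        print(f"Escalate {finding['id']} on {finding['resource']} ({finding['severity']})")
    else:
        print(f"Suppress {finding['id']} as noise")
```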

4) Keeping up with compliance in API-driven cloud security


The emergence of API-driven cloud services has changed the way security must be architected, implemented, and managed. Although the API is a brand-new threat surface that we must defend, it also provides the ability to automate detection and remediation. As compliance benchmarks, such as the CIS AWS Foundations Benchmark, are released, we have the means to assess our security posture against industry-defined guidelines. These make sure we’re taking the right steps to keep our customers, employees, infrastructure, and intellectual property secure. Cloud migrations are happening rapidly, and compliance with quickly evolving security requirements is an ever-growing challenge that must be addressed through automation in order to claim success.
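
As a taste of benchmark-style automation, the sketch below (assuming an AWS environment) implements one check in the spirit of the CIS AWS Foundations Benchmark: verifying that MFA is enabled on the root account.

```python
# Sketch: one check in the spirit of the CIS AWS Foundations Benchmark -
# verify that MFA is enabled on the root account.
import boto3

summary = boto3.client("iam").get_account_summary()["SummaryMap"]

if summary.get("AccountMFAEnabled", 0) == 1:
    print("PASS: root account MFA is enabled")
else:
    print("FAIL: enable MFA on the root account")
```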

Tuesday 2 July 2019

6th Annual Cybersecurity Canon Hall of Fame Awards

Last week, Palo Alto Networks hosted the sixth annual Cybersecurity Canon Hall of Fame Awards at the Watergate Hotel in Washington, D.C. The Cybersecurity Canon Project is a network defender community effort to identify the must-read books in the cybersecurity space. It is set up like the Rock and Roll Hall of Fame in that a committee of network defenders read the books and decide which ones should be candidates, and which ones should be inducted into the Hall of Fame.

At this year’s awards ceremony, the committee inducted two authors into the Lifetime Achievement Hall of Fame: Neal Stephenson and Bruce Schneier. Each author had so many books on the candidate list that the selection committee could not pick just one to induct into the Hall of Fame. The unanimous committee consensus was to recognize these two authors as lifetime achievers to the Cybersecurity Canon Project and international treasures to the network defender community.



The committee also inducted four new books into the Hall of Fame:



– “Security Engineering: A Guide to Building Dependable Distributed Systems,” by Ross Anderson


– “Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software,” by Michael Sikorski and Andrew Honig


– “The Cathedral & the Bazaar,” by Eric S. Raymond


– “American Spies: Modern Surveillance, Why You Should Care, and What to Do About It,” by Jennifer Stisa Granick


Attendees of the gala were local security luminaries and cybersecurity students from the tri-state area. The awards ceremony was a formal affair where I wore a tux and handed out Academy Award-style trophies to the inducted authors. Afterward, the authors signed their books and gave them to the students.

Sunday 30 June 2019

Introducing Prisma, a New Approach to Cloud Security

Today we introduced Prisma, a new cloud security suite. We believe Prisma will transform the cloud journey for our customers by securing access, protecting data, and securing applications.

From the start, our approach to cloud security has been aimed at delivering the best security while embracing the unique requirements of the cloud. We provide customers with complete visibility as well as recommended configurations across their entire cloud environment to ensure a strong security posture from the beginning and consistently prevent attacks.



The Prisma suite gives customers what they need to control access, protect data and secure applications. It has four key components:

  • Prisma Access secures access to the cloud for branch offices and mobile users around the world with a scalable, cloud-native architecture, blending enterprise-grade security with a globally scalable network. It will soon run on Google Cloud Platform (GCP™), extending the service to more than 100 locations for an even faster and more localized experience.
  • Prisma Public Cloud provides continuous visibility, security and compliance monitoring across public multi-cloud deployments. Powered by machine learning, it correlates data and assesses risk across the cloud environment. Starting today, customers can further reduce their attack surface early in the development cycle via a “shift left” approach to security. With the ability to identify vulnerabilities and fix improper configurations in customers’ infrastructure-as-code templates, developers can reduce risk without sacrificing agility.
  • Prisma SaaS is a multi-mode cloud access security broker (CASB) service that safely enables SaaS application adoption. New integrations bring an improved administration experience across IT-sanctioned and IT-unsanctioned SaaS applications with unified visibility and management.
  • VM-Series is the virtualized form factor of the Palo Alto Networks Next-Generation Firewall that can be deployed in private and public cloud computing environments, including Amazon Web Services (AWS®), GCP, Microsoft Azure®, Oracle Cloud®, Alibaba Cloud®, and VMware NSX®.

Friday 28 June 2019

A Holistic Cloud Security Strategy: The Big Cloud 5

Whether it’s the rapid pace of cloud provider innovation, the fluid shared responsibility model or the constantly evolving compliance mandates, cloud security seems challenging for many organizations.

But do you know what puts many organizations in harm’s way? (Hint: it isn’t a lack of security tools.) It’s not having a clear security strategy for public cloud. Based on our work with countless clients, we developed the Big Cloud 5. While not intended to be exhaustive, when resourced appropriately, it will help your team form a holistic cloud security strategy.

1. Gain awareness and deep cloud visibility.


The first step in making cloud security and compliance simpler is to understand how your developers and business teams are using the cloud today. This is where you make shadow IT your friend. Rather than being the bane of your existence, shadow IT can provide the critical insight needed to move beyond conjecture to data-driven decision-making. Where’s the best place to look for this information? Firewall and proxy logs. While cloud usage via shadow IT is the first level of needed detail, it’s essential to go much deeper. Following the 80/20 rule allows your team to understand which cloud platform to focus on first. However, security teams must understand not just which cloud platforms are in use but also what’s running in them. This is where cloud provider APIs come to the rescue.

APIs are among the key technologies that make the cloud different from most on-premises environments. This is about gaining and maintaining situational awareness of what’s happening in your cloud environments. Think about understanding not just which cloud apps your organization is using, but also leveraging cloud provider APIs to constantly track changes down to the metadata layer. This isn’t a one-time event but something that needs to be constantly reviewed and monitored. Awareness becomes intrinsically harder unless your team uses cloud provider APIs. Think about it: developers are coding to the cloud providers’ APIs every day, but most security teams don’t leverage them. This means there’s a significant gap in terms of visibility and control. Make sure a central tenet of your cloud security program is harnessing the cloud provider APIs.

2. Set guardrails to automatically prevent the most serious cloud misconfigurations.


Ask yourself: what are the configurations (misconfigurations or antipatterns) that should never appear in our environment? Think of these as your dirty dozen. An example would be a database receiving direct traffic from the internet. Despite this being a “worst practice,” Unit 42 threat research has shown it happening in 28% of cloud environments. A great place to start building your list is Unit 42’s Cloud Security Trends report. Develop your initial list and expand it as your cloud security program matures over time. Two important caveats: first, whenever protections are automated, it’s strongly encouraged to start with small experiments to make sure there aren’t unintended consequences (e.g., a self-inflicted denial of service). The second is to work closely with your development teams. Don’t attempt to put automated protections in place without gaining buy-in from your development teams. Work with development teams from day one, start small and ramp up quickly.
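
A guardrail for the database example above might start with something like the sketch below, which assumes an AWS environment and flags RDS instances marked publicly accessible. It is only a first-pass signal - a complete check would also inspect security groups and routing - and a real guardrail would alert or remediate rather than just print.

```python
# Sketch: detect one "dirty dozen" antipattern - a database reachable from
# the internet - by flagging RDS instances marked publicly accessible.
import boto3

rds = boto3.client("rds")
for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        if db.get("PubliclyAccessible"):
            print(f"Guardrail violation: {db['DBInstanceIdentifier']} is publicly accessible")
```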

3. Standards are the precursor to automation.


It’s very hard to automate what you haven’t standardized on. Don’t start from scratch. The Center for Internet Security, or CIS, has benchmarks for all major cloud platforms. Many teams talk about automation without having a security standard in place. A good goal is to target automating 80% of those standards over time. As your program settles on standards, the automation part will become more straightforward. Don’t expect to move from no automation to full automation in three months unless you’re a startup. This process often takes enterprise organizations at least nine months before they hit their stride. One thing to note: automating your standards is hard to achieve if you don’t have security engineers who know how to code.

4. Train and hire security engineers who code.


Unlike most traditional data centers, public cloud environments are driven by APIs. Effective risk management in the cloud requires that security teams leverage APIs. APIs are difficult to use without engineers on your security team who know how to code and automate security processes. Standards are great, but without automation continuously enforcing them via policy, they become one-time checks.

Depending on the size of your organization, begin with an assessment of the skills that already exist today. Do you have team members who know how to code in languages like Python or Ruby? If so, invest heavily in these team members and align their goals to your automation maturity timeline. Don’t already have someone on the team? Then you have a couple of options. Look for people who want to learn, and survey your developers for those who have shown an interest in security. Both can be trained - security for the developers and coding for the security engineers - provided goals around training are aligned and resourced properly.

If your organization isn’t strong in coding, this may be a great task for a short-term consultant who has done this in many organizations before. If you choose to follow this path, be sure to include knowledge transfer as a key deliverable in the statement of work. You shouldn’t have scripts your teams don’t know how to modify or use. Once you have this process underway, you’ll be ready to fully embed security in your development pipeline.

5. Embed security in the development pipeline.


This is about mapping the who, what, where and when of how your organization pushes code into the cloud. Once this is done, your goal should be to find the least disruptive insertion points for security processes and tools. Getting early buy-in from development teams is critical. Your North Star for this final step is to minimize human interaction over time. This becomes more straightforward as your organization moves to infrastructure as code (IaC). Consider that as you limit the number of human hands touching what goes into your cloud environment, misconfigurations are naturally minimized.

Tuesday 21 May 2019

Palo Alto Networks Integrates RedLock and VM-Series With Amazon Web Services Security Hub

Palo Alto Networks helps organizations confidently move their applications and data to AWS with inline, API-based and host-based protection technologies that work together to minimize risk of data loss and business disruption. Building on native AWS security capabilities, these protection technologies integrate into the cloud application development lifecycle, making cloud security frictionless for development, security and compliance teams.



AWS Security Hub is designed to provide users with a comprehensive view of their high-priority security alerts and compliance status by aggregating, organizing and prioritizing alerts, or findings, from multiple AWS services, such as Amazon GuardDuty™, Amazon Inspector, and Amazon Macie™ as well as from other APN security offerings. The findings are then visually summarized on integrated dashboards with actionable graphs and tables. Our joint customers can use these collaborative efforts to help verify that their applications and data are secure.

  • RedLock integration: RedLock by Palo Alto Networks further protects AWS deployments with cloud security analytics, advanced threat detection and compliance monitoring. RedLock continuously collects and correlates log data and configuration information from AWS Config, AWS CloudTrail®, Amazon Virtual Private Cloud (Amazon VPC®) flow logs, AWS Inspector and Amazon GuardDuty to uncover and send security and compliance alerts to the AWS Security Hub console. The RedLock integration with AWS Security Hub provides additional context and centralized visibility into cloud security risks, allowing customers to gain actionable insights, identify cloud threats, reduce risk and remediate incidents, without impeding DevOps.
  • VM-Series integration: The VM-Series next-generation firewall complements AWS security groups by first reducing the attack surface through application control policies, and then preventing threats and data exfiltration within allowed traffic. The VM-Series integration with AWS Security Hub uses an AWS Lambda function to collect threat intelligence and send it to the firewall as an automatic security policy update that blocks malicious activity. As the IP address information changes, the security policy is updated without administrative intervention.
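
For illustration only - this is not the actual Palo Alto Networks integration - a Lambda handler of roughly the following shape could receive Security Hub findings from an EventBridge rule, pull out source IP addresses, and hand them to a placeholder function that updates firewall policy. The event layout assumed here follows the standard "Security Hub Findings - Imported" EventBridge format.

```python
# Illustrative only - not the actual Palo Alto Networks integration. A Lambda
# handler of this general shape could receive Security Hub findings (via an
# EventBridge rule), extract attacker IP addresses, and pass them to a
# function that updates firewall policy.

def update_firewall_block_list(ip_addresses):
    """Placeholder: push the collected IPs to the firewall as a policy update."""
    print(f"Would block: {sorted(ip_addresses)}")

def lambda_handler(event, context):
    """Extract source IPs from Security Hub findings delivered by EventBridge."""
    malicious_ips = set()
    for finding in event.get("detail", {}).get("findings", []):
        source_ip = finding.get("Network", {}).get("SourceIpV4")
        if source_ip:
            malicious_ips.add(source_ip)
    if malicious_ips:
        update_firewall_block_list(malicious_ips)
    return {"blocked": len(malicious_ips)}
```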

"The Palo Alto Networks product integrations help customers verify that their users, applications, and data are secure through a single pane of glass. The RedLock integration allows customers to monitor advanced threats due to common cloud misconfigurations, stolen credentials, and malicious user and network activities, while the VM-Series integration automates policies to block malicious activity," said Varun Badhwar, senior vice president of products and engineering for public cloud security at Palo Alto Networks. "With more businesses moving to the cloud, it's critical that the alert data they receive provides them with actionable insights to successfully combat cyberattacks."

About Palo Alto Networks


We are the global cybersecurity leader, known for always challenging the security status quo. Our mission is to protect our way of life in the digital age by preventing successful cyberattacks. This has given us the privilege of safely enabling tens of thousands of organizations and their customers. Our pioneering Security Operating Platform emboldens their digital transformation with continuous innovation that seizes the latest breakthroughs in security, automation, and analytics. By delivering a true platform and empowering a growing ecosystem of change-makers like us, we provide highly effective and innovative cybersecurity across clouds, networks, and mobile devices.

Friday 12 April 2019

8 Google Cloud Security Best Practices


Google has been making some great inroads with their cloud expansion. As with AWS and Azure, developers can adopt Google Cloud Platform (GCP) easily, seeking features for use in their application stacks. Also, with the wide adoption of containers and Kubernetes, Google’s leadership in developing container technologies has earned them a reputation as a great cloud option to run these types of workloads. Finally, some organizations are choosing GCP to augment their multi-cloud strategy.

As stated in my previous AWS and Azure blog posts, no two clouds are alike. So, we must be mindful of what the basic security settings are for GCP. While there are significant differences in the details of how to secure GCP compared to other cloud platforms, one tenet remains the same: security is a shared responsibility. You can’t assume Google will secure the cloud for you. Educating yourself is key. I recommend the following resources for in-depth information on security-centric and other cloud-focused best practices to help you get the most out of Google Cloud:

  • Google Security Whitepaper
  • Best Practices for Enterprise Organizations
  • A Security Practitioners Guide to Best Practice GCP Security (Cloud Next ’18)

With that, let’s dive into the fundamentals. The following are eight challenges and best practices to help you mitigate risk in Google Cloud.

1. Visibility


Like other clouds, GCP resources can be ephemeral, which makes it difficult to keep track of assets. According to our research, the average lifespan of a cloud resource is two hours and seven minutes. And many companies have environments that involve multiple cloud accounts and regions. This leads to decentralized visibility, and since you can’t secure what you can’t see, this makes it difficult to detect risks.

Best Practice: Use a cloud security offering that provides visibility into the volume and types of resources (virtual machines, load balancers, virtual firewalls, users, etc.) across multiple projects and regions in a single pane of glass. Having visibility and an understanding of your environment enables you to implement more granular policies and reduce risk. While GCP’s native Cloud Security Command Center works well, monitoring at scale or across clouds requires third-party visibility from platforms such as RedLock by Palo Alto Networks.

2. Resource hierarchy


One of the basic principles in GCP is the resource hierarchy. While other clouds have hierarchical resource systems, GCP’s is very flexible, allowing admins to create nodes in different ways and apply permissions accordingly. This can create sprawl very quickly and confusion when it comes to determining at which level in the hierarchy a permission was applied. To demonstrate, GCP allows the creation of Folders, Teams, Projects and Resources under an Organization.

Best Practice: Create a hierarchy that closely matches your organization’s corporate structure. Or, if you currently don’t have a well-defined corporate structure, create one that makes sense and take into account future growth and expansion.

3. Privilege and scope


GCP IAM allows you to control access by defining who has what access to which resource. The IAM resources in play are Users, Roles and Resources. Understanding how to apply policies to these resources is going to be important to implement least-privilege access in your GCP environment.

Best Practice: Instead of applying permissions directly to users, add users to well-defined Groups and assign Roles to those Groups, thereby granting permission to the appropriate resources only. Make sure to use custom roles, as built-in roles could change in scope.

4. Identity management


Lost or stolen credentials are a leading cause of cloud security incidents. It is not uncommon to find access credentials to public cloud environments exposed on the internet. Organizations need a way to detect these account compromises.

Best Practice: Strong password policies and multi-factor authentication (MFA) should always be enforced. GCP supports MFA for both Cloud Identity and corporate entities. Additionally, you can integrate Cloud Identity support with SSO for your corporate identities so that you inherit corporate MFA policies.

5. Access


It goes without saying that humans aren’t the only users of GCP resources. Development tools and applications will need to make API calls to access GCP resources.

Best Practice: Create descriptive Service Accounts, such that you know the purpose of those accounts. Also, be sure to protect service account keys with Cloud KMS and store them encrypted in Cloud Storage or some other storage repository that doesn’t have public access. Finally, ensure that you are rotating your keys on a regular basis, such as 90 days or less.
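
A sketch of the key-rotation check might look like the following, assuming the google-api-python-client library, application default credentials, and a placeholder project ID; it flags user-managed service account keys older than 90 days.

```python
# Sketch: flag user-managed GCP service account keys older than 90 days.
# Assumes google-api-python-client is installed and application default
# credentials are configured; "my-project" is a placeholder project ID.
from datetime import datetime, timedelta, timezone
from googleapiclient import discovery

PROJECT = "projects/my-project"
MAX_AGE = timedelta(days=90)

iam = discovery.build("iam", "v1")
accounts = iam.projects().serviceAccounts().list(name=PROJECT).execute()

for account in accounts.get("accounts", []):
    keys = iam.projects().serviceAccounts().keys().list(name=account["name"]).execute()
    for key in keys.get("keys", []):
        if key.get("keyType") != "USER_MANAGED":
            continue   # Google-managed keys rotate automatically
        created = datetime.fromisoformat(key["validAfterTime"].replace("Z", "+00:00"))
        if datetime.now(timezone.utc) - created > MAX_AGE:
            print(f"Rotate key {key['name']} (created {created.date()})")
```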

6. Managing firewalls and unrestricted traffic


VPC firewalls are stateful virtual firewalls that manage network traffic to VPC networks, VMs, and other compute resources in those networks. Unfortunately, admins often assign IP ranges to firewalls, both inbound and outbound, which are broader than necessary. Adding to the concern, research from Unit 42’s cloud threat intelligence team found that 85% of resources associated with security groups don’t restrict outbound traffic at all. Further, an increasing number of organizations are not following network security best practices, and as such had misconfigurations or risky configurations. Industry best practices mandate that outbound access should be restricted to prevent accidental data loss or data exfiltration in the event of a breach.

Best Practice: Limit the IP ranges that you assign to each firewall to only the networks that need access to those resources. GCP’s advanced VPC features allow you to get very granular with traffic by assigning targets by tag and Service Accounts. This allows you to express traffic flows logically in a way that you can identify later, such as allowing a front-end service to communicate to VMs in a back-end service’s Service Account.
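
A simple audit of that practice could look like the sketch below, which assumes the google-api-python-client library, default credentials, and a placeholder project ID, and flags VPC firewall rules that allow ingress from 0.0.0.0/0.

```python
# Sketch: flag GCP VPC firewall rules that allow ingress from anywhere
# (0.0.0.0/0). Assumes google-api-python-client and default credentials;
# "my-project" is a placeholder.
from googleapiclient import discovery

PROJECT = "my-project"

compute = discovery.build("compute", "v1")
request = compute.firewalls().list(project=PROJECT)

while request is not None:
    response = request.execute()
    for rule in response.get("items", []):
        if rule.get("direction") == "INGRESS" and "0.0.0.0/0" in rule.get("sourceRanges", []):
            print(f"Overly broad rule {rule['name']}: allows {rule.get('allowed', [])} from 0.0.0.0/0")
    request = compute.firewalls().list_next(previous_request=request, previous_response=response)
```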

7. Setup and review of activity logs


Organizations need oversight into user activities to reveal account compromises, insider threats and other risks. Virtualization – the backbone of cloud networks – and the ability to use the infrastructure of a very large and experienced third-party vendor affords agility as privileged users can make changes to the environment as needed. The downside is the potential for insufficient security oversight. To avoid this risk, user activities must be tracked to identify account compromises and insider threats as well as to assure that a malicious outsider hasn’t hijacked an account. Fortunately, businesses can effectively monitor users when the right technologies are deployed. GCP records API and other admin activity in Stackdriver Admin Activity Logs as well as captures other data access activity in Data Access Logs.

Best Practice: Monitoring Admin Activity Logs is key to understanding what’s going on with your GCP resources. Admin Activity Logs are stored for 400 days, Data Access Logs for 30 days; so make sure to export logs if you’d like to keep them around longer for regulatory or legal purposes. RedLock ingests alerts based on activity log issues.

8. Managing VM image lifecycles


It is your responsibility to ensure the latest security patches have been applied to hosts within your environment. The latest research from Unit 42 provides insight into a related problem: traditional network vulnerability scanners are most effective for on-premises networks but miss crucial vulnerabilities when they’re used to test cloud networks. In GCP, however, patching running VMs may not be the ideal approach.

Best Practice: Use the power of automation to manage your VM image lifecycles. Create a custom image that’s either been patched or blessed from a security or compliance perspective, and then deny access to non-custom (trusted) images using a Resource Manager Constraint. Additionally, you can remove obsolete, older images to ensure that you are using the latest and greatest VM image.