Thursday, August 1, 2019

AWS S3 Access Control Options


Choosing the right access control mechanism to control and audit access to your S3 buckets and objects is tricky, because the right method depends on how you intend to use the buckets and on the way you work within your organization.

I went through several blogs, forums and Amazon's own documentation to make the topic understandable and easy to remember. Writing it down clarified many points for me; I hope it helps you as well.

There are mainly three ways of regulating access to buckets and objects in S3:
  • Bucket Policies
  • Bucket ACLs
  • IAM Policies

Bucket Policies: A “Bucket Policy” is an access control mechanism specific to S3; bucket policies cannot be used with any other AWS service. They are applied at the bucket level, which means the same policy has to be attached manually to every bucket that needs the same controls.

It allows AWS admins to allow or deny specific actions (put, delete, read, etc.) for specific users or groups (principals).
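
To make this concrete, here is a minimal sketch of what such a policy can look like and how it is attached with the AWS CLI; the bucket name, account ID and user below are placeholders, not values from a real setup.

    # bucket-policy.json : placeholder bucket, account and user names
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowPartnerRead",
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::111122223333:user/partner-user" },
          "Action": [ "s3:GetObject", "s3:ListBucket" ],
          "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*"
          ]
        }
      ]
    }

    $ aws s3api put-bucket-policy --bucket example-bucket --policy file://bucket-policy.json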

Typical Use Cases
  • When granting cross-account access to S3 resources in a simple way, without using IAM.

You can use ACLs to grant cross-account permissions to other accounts, but ACLs support only a finite set of permissions (List, Read, Write), which do not cover all Amazon S3 permissions; for example, you cannot grant permissions on bucket sub-resources with an ACL. Both bucket and user policies support granting permissions for all Amazon S3 operations, but IAM policies manage permissions only for users in your own account. For cross-account permissions to other AWS accounts or to users in another account, you must use a bucket policy.
  • When you need larger policies: bucket policies can be up to 20 KB, while IAM policies are limited to 2 KB for users, 5 KB for groups and 10 KB for roles.
  • When you prefer keeping the access controls within S3.
IAM Policies: An IAM policy is the de facto way of regulating access to all resources in AWS, so IAM policies are more general than bucket policies.

An interesting difference between S3 bucket policies and IAM policies is that a bucket policy JSON document contains a “Principal” field specifying the user or group the statement applies to. IAM policies have no Principal field, because they only take effect once they are attached to a user, group or role.
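
To make the contrast concrete, here is a minimal sketch of an equivalent IAM policy attached to a user with the AWS CLI; note the absence of a Principal field. The user, policy and bucket names are placeholders.

    # iam-policy.json : no Principal field; the policy applies to whatever
    # user, group or role it is attached to (placeholder names below)
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [ "s3:GetObject", "s3:ListBucket" ],
          "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*"
          ]
        }
      ]
    }

    $ aws iam put-user-policy --user-name analyst --policy-name S3ReadExampleBucket \
          --policy-document file://iam-policy.json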

Typical Use Cases
  • Creating centrally managed, user-based access policies and controlling everything from IAM.
  • Managing a large number of buckets.

Bucket ACLs: ACLs are the legacy way of controlling access to buckets and objects in S3. They are more granular than bucket policies because an ACL can be applied per object rather than per bucket.

Bucket ACLs use an Amazon S3-specific XML schema, unlike bucket policies and IAM policies, which are JSON documents.

Bucket ACLs support only a small set of permissions (essentially list, read and write); the fine-grained, action-level permissions available in bucket policies or IAM policies are not possible with ACLs.

There are limits to managing permissions using ACLs. For example:
  • You can grant permissions only to other AWS accounts; you cannot grant permissions to users in your account.
  •  You cannot grant conditional permissions, nor can you explicitly deny permissions.

ACLs are suitable for specific scenarios. For example, if a bucket owner allows other AWS accounts to upload objects, permissions on those objects can only be managed through object ACLs, and only by the AWS account that owns the object, as sketched below.
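
As a minimal sketch with the AWS CLI (bucket and object names are placeholders): the uploading account can hand control of its object to the bucket owner with a canned ACL, and a bucket owner can grant the S3 Log Delivery group write access for server access logs. Keep in mind that put-bucket-acl replaces the existing ACL, so in a real run the current grants (including the owner's) should be re-specified as well.

    # run by the account that uploaded the object
    $ aws s3api put-object-acl --bucket example-bucket --key uploads/report.csv \
          --acl bucket-owner-full-control

    # run by the owner of the bucket that will receive S3 server access logs
    $ aws s3api put-bucket-acl --bucket example-log-bucket \
          --grant-write 'URI=http://acs.amazonaws.com/groups/s3/LogDelivery' \
          --grant-read-acp 'URI=http://acs.amazonaws.com/groups/s3/LogDelivery'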

Typical Use Cases
  • Cross-account access.
  • Object-level permission requirements within a bucket.
  • The only recommended use case for the bucket ACL is to grant write permission to the Amazon S3 Log Delivery group to write access log objects to your bucket.

IAM policies are user-based (they are attached to a user, group or role), while bucket policies and ACLs are resource-based (they are attached to the bucket or object itself).



If you’re still unsure of which to use, consider which audit question is most important to you:
  • If you’re more interested in “What can this user do in AWS?” then IAM policies are probably the way to go. You can easily answer this by looking up an IAM user and then examining their IAM policies to see what rights they have. 
  • If you’re more interested in “Who can access this S3 bucket?” then S3 bucket policies will likely suit you better. You can easily answer this by looking up a bucket and examining the bucket policy.
Avoid using Bucket ACLs except for the specific cases mentioned above.


Tuesday, July 16, 2019

Account compromise incident response in AWS


In case of account compromise, the suggested actions to take are:
  • Change the root password and delete root access keys, if you have not done so already.
  • Add MFA to the root account, if you have not done so already.
  • Change all user account passwords (I have strong doubts about this one, but the documentation says so; for certification exam purposes, consider it true).
  • Delete or rotate potentially compromised access keys (a CLI sketch follows this list).
  • Delete unrecognized or unauthorized instances and IAM users, using AWS Config and CloudTrail to identify them.
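
As an illustration, here is a minimal AWS CLI sketch of the access key rotation step; the user name is a placeholder and the key ID is the standard AWS documentation example value.

    $ aws iam list-access-keys --user-name suspected-user
    $ aws iam update-access-key --user-name suspected-user \
          --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive    # disable the old key first
    $ aws iam create-access-key --user-name suspected-user          # issue a replacement key
    $ aws iam delete-access-key --user-name suspected-user \
          --access-key-id AKIAIOSFODNN7EXAMPLE                      # remove the old key once nothing uses it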

Performing Security Assessments in AWS


I tried to summarize very briefly which AWS services can be assessed for security and under which conditions. Security assessments cover application and infrastructure penetration tests, DDoS tests and other network stress tests.
Penetration testing is allowed only for the eight AWS services listed below:
  • EC2 instances, NAT Gateways, ELBs
  • RDS
  • CloudFront
  • Aurora
  • API Gateways
  • Lambda and Lambda@Edge
  • Lightsail
  • Elastic Beanstalk
Prior to the pentest, pen-test-nda@amazon.com should be contacted for private previews and NDAs.
The following activities are prohibited:
  • DNS zone walking via Route 53 hosted zones
  • DoS, Simulated DoS and DDoS
  • Port Flooding
  • Protocol Flooding
  • Request Flooding
Scans are suggested to be limited to 1 Gbps or 10,000 requests per second.
The instance types below are recommended to be excluded from security assessments:
  • t3.nano
  • t2.nano
  • t1.micro
  • m1.small
The IP addresses to be used during the security assessment should be sent to aws-security-simulated-event@amazon.com.
The following events are considered simulated events:
  • Security simulations or security game days
  • Support simulations or support game days
  • War game simulations
  • White cards
  • Red team and blue team testing
  • Disaster recovery simulations.
  • Other simulated events
AWS must be informed about these events through aws-security-simulated-event@amazon.com and a detailed examination takes place before approval.
For network stress testing such as DDoS tests, customers are supported via pre-approved vendors.
For more information, you can consult the https://aws.amazon.com/security/penetration-testing page.

Thursday, March 15, 2018

Reassessment Of SIEM Solutions On The Market


It has been so long since I last wrote about SIEM solutions on this blog! How long exactly? Well, since that last post I have changed two companies and four roles, and relocated to another country; that type of long.

SIEM and Log Management topics are still very dear to me, although my focus has widened from IT and Information Security to include Information Risk Management as well.

I am going to summarize the SIEM landscape over the last three years, with the main actors involved.

HP ArcSight became Micro Focus ArcSight with HP's spin-off of its software business, a decision that appears to have been taken long before. The ambiguity in how the merger and divestiture were handled between HP and Micro Focus pushed customers to look for alternatives. The acquisition of the HP Software division, including the ArcSight product family, by Micro Focus was announced on 7 September 2016 and was officially completed on 1 September 2017, but people close to the subject know that product development had severely slowed down, if not completely stopped, long before the announcement.

HP ArcSight was already struggling with long-term storage of data on the platform itself, besides lacking advanced features offered by competitors. Running even simple queries on the old-fashioned Logger took ages, even from the command line with scripts.

The problems have since been addressed with the ArcSight Data Platform (I will provide a dedicated post on that later), and some advanced features such as User and Entity Behavior Analytics (UEBA) are now offered to customers, but I think the damage is done. ArcSight has lost an important part of its customer base, apart from the big accounts that have invested too much in the solution to leave it.

ArcSight's arch-rival at the time was IBM QRadar. QRadar was somewhat less customizable than ArcSight but a strong competitor thanks to its integrations, the most important being network packet flow analysis (QFlow). Beyond that, the platform was, and still is, capable of indexing all log fields, compared with ArcSight's limited indexing capability, which can be considered a huge advantage.

Moreover, architecture-wise, QRadar supported scaling out (increasing performance and capacity by adding new devices), allowing much better online log retention without sending logs to external storage, whereas until recently ArcSight's Logger only supported scaling up (increasing performance and capacity by adding system resources).

The relative simplicity of IBM QRadar also helped the solution's overall stability; on ArcSight's side, achieving the same required a separate management appliance (ArcSight Management Center) and sometimes third-party appliances for connector health management.

Another advantage of QRadar is its integrated additional capabilities, such as the User Behavior Analytics module, which is not as capable as a full-blown UEBA solution from, say, Exabeam, but still does the essentials for enriching the bare log data. While on the subject of enrichment, the ability to consume vulnerability management data should not be forgotten either.

IBM QRadar seems, in my humble opinion, the best option for large environments and for on-premises use.

McAfee ESM used to be the third major actor behind ArcSight and QRadar. McAfee had the advantage of being a simpler solution to configure and license, within a vendor-controlled package. Not much has changed since, other than a big-data approach to McAfee ESM's architecture and an HTML5 front end. McAfee's SIEM was, and is, one of the least appealing SIEM solutions for me, as it never got the attention it deserved within the organization, always lagging behind McAfee's flagship products.

It can be recommended to small and medium-sized organizations with strong relationships with McAfee.

Thursday, March 8, 2018

GDPR Awareness Training and Assessment Questions


With the European Union's General Data Protection Regulation (GDPR) taking effect on 25 May 2018, organizations are speeding up their preparations to become compliant.

If adapting an organization's systems and privacy practices to the GDPR requirements is an arduous task, keeping them compliant is another. It requires the attention of all employees, from IT and HR all the way to the Facility Management teams, as personal data of both customers and employees is processed on a daily basis.

To keep employees engaged, they must be given training on the GDPR. To complete the learning process, their understanding of the subject should be measured with an assessment, so that employees who still have doubts or hesitations can be identified and informed clearly.

Gartner expects that by 2020 at least one company will be fined on the scale of millions of euros for non-compliance with the GDPR.

I aim to give you some ideas with this GDPR awareness presentation, which can be used as a starting point. The questions at the end of the presentation can be used within internal GDPR e-learning modules.

As a minimum, subjects like who is who in the GDPR (data subject, data controller, data processor), what personal data and sensitive data are, data subjects' rights, and the consequences of non-compliance must be clearly understood by everybody.

You can get in touch with me for the PowerPoint version and more.




Sunday, August 2, 2015

SIEM Deployment - Configuring Filtering on SmartConnectors

One of the big obstacles security analysts face when deploying SIEM solutions is that, once you ask system owners to send their "security related" logs to your log collectors, they misunderstand you and send everything.

There are also times that system owners really cannot eliminate junk logs (security wise of course).

Because more logs mean more money spent (resources, licensing, storage, etc.), we have to eliminate the junk at some point. In this article, I will detail filtering out logs on SmartConnectors, which are the best place to filter because they are closest to the log source.

The first step of filtering should be deciding which logs you are going to filter. It never hurts to say it again: when collecting logs, define your use cases beforehand and know which logs you need to fire your rules. Everything else is garbage, which you can eliminate at some level (at the source or on the SmartConnector).

In this example, I have chosen to filter out Microsoft Windows logs, and the criterion I use is deviceEventClassId. You can filter your logs on any criterion you want.

To start, run the runagentsetup script under your <Connector_Home>\current\bin directory. For the sake of simplicity, I used a Windows-based SmartConnector for this demo.


In the next menu we choose the "Modify Connector" option.


Then "Add, modify, or remove destinations" option should be chosen.


Step 4 needs to be well understood. Filtering, just like aggregation and other SmartConnector-level modifications, is configured per destination, which means the filtering settings you make are only valid for the Logger or ESM destination you choose at this step. If you want to filter for a second destination, you have to start over. Fortunately, this does not apply to failover destinations.


At step 5, "Modify destination settings" option is chosen.


The next menu is where we actually choose the operation we want to configure.


In the final configuration screen, we enter the parameters by which we are going to filter the incoming logs. For this example, we are filtering out logs in which the deviceEventClassId field contains "Microsoft-Windows-Security-Auditing:4674" or "Microsoft-Windows-Security-Auditing:5447".


If you want to learn more about Microsoft Windows audit events, I'd suggest visiting this website and reading this blog article.

Once this step is done, we click next and reach the final configuration screen.


Do not forget to restart your SmartConnector service in order to apply the filtering settings.


SIEM Deployment - Installing ArcSight Systems on Amazon EC2

SIEM systems can easily be considered "big data" systems, as the resources they use and the amount of information they process are at big-data scale. Running SIEM components requires serious computing resources: not only RAM, CPU and disk space but also a high IOPS rate.

For those reasons, if you do not have the chance to run a test environment at work or at home (my home server with an i7 CPU (8 threads), 16 GB of RAM and a 256 GB SSD honestly did not satisfy me), you should look for another solution. After considering buying a better home server, I found that cloud solutions such as Amazon's EC2 offer far better TCO and ROI than owning one.

In this article, I'll guide you through setting up your own Red Hat Enterprise Linux 6.5 server on which you can install ArcSight ESM, ArcSight Logger or ArcSight Management Center.

For those who are new to Amazon's Elastic Compute Cloud (EC2) service, it is basically an Infrastructure as a Service (IaaS) offering where you can build a server with the resources you like and pay as you go. Compute is not charged while your instance is stopped. What's even better is that Amazon offers entry-level machines for free if you create an account and share your credit card information. As long as you use free-tier servers, not even a dime is charged.

These free-tier servers come pre-installed with the OS you want, and they are the best option to use as SmartConnector servers or as log sources. They are ready to use in 10 minutes or less.

First, we log into the EC2 console via http://aws.amazon.com/ec2 (registration required).



From the next screen, we choose the EC2 option under Compute.


Once successfully logged into the EC2 Management Console, we should launch an instance, either using the "Launch Instance" link on the home page or the Instances option in the left-side menu.


On the next screen, we start configuring our server. The first step is to choose the OS (RHEL 6.5) from the Community AMIs proposed by Amazon.


In Step 2, we should choose an Instance Type among many pre-set instances. This is a very important step, as how much we are going to pay and the performance we are going to get largely depend on the instance we choose. For ArcSight Logger, an m4.large instance is sufficient for test purposes; for ESM, starting with an m4.2xlarge or larger is a wise choice.


In Step 3, almost all other details not mentioned in Step 2 are chosen. At this step, I highly recommend choosing a /28 subnet for your environment and enabling the "Auto-assign Public IP" option. You may choose dedicated hardware, if you have some bucks to spend; that would certainly provide better performance (I never used dedicated hardware on EC2). Finally, you may create more than one instance if you plan to install more than one component (ESM, Logger, ArcMC), which will definitely save you some time.


In Step 4, we set up the storage options. For both ESM and Logger, we need at least 50 GB for the /opt/arcsight directory and at least 5 GB of free space for /tmp. Because of that, we add two new volumes with slightly more space than the bare minimum, which we will configure later.


After Step 4, we jump to Step 6 and set the access rules for our system. Allowing inbound TCP 9000, 8443 and 443 (in addition to the initial TCP 22 for SSH) in a single security group lets us apply the same group to both ESM and Logger. You should of course limit access to your own public IP, if you have a static IP.
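
If you prefer the command line, here is a minimal sketch of the same rules; the security group ID and source IP are placeholders.

    $ for port in 22 443 8443 9000; do
          aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
              --protocol tcp --port $port --cidr 203.0.113.10/32
      done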


After Step 6, our configuration is ready to launch. Accessing Linux instances on EC2 may be new to some, and I will cover it in a short article.
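
In short, you connect with the key pair selected at launch and the instance's public IP; a minimal sketch (the key file and IP address are placeholders):

    $ chmod 400 my-keypair.pem
    $ ssh -i my-keypair.pem ec2-user@54.93.111.222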

Once we log in to our newly created server with the ec2-user account, we format and mount the partitions following the instructions below (after running sudo su, of course).

1. First we check our partitions with lsblk command and see some output like below:
   
    $ lsblk
    NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0  10G  0 disk
    └─xvda1 202:1    0   6G  0 part /
    xvdb    202:16   0  52G  0 disk
    xvdc    202:17   0   8G  0 disk

2. Then we format the xvdb and xvdc drives with an ext4 filesystem as below:

   $ mkfs -t ext4 /dev/xvdb
   $ mkfs -t ext4 /dev/xvdc

3. In the third step, we mount the partitions to the directories we need, creating the /opt/arcsight mount point first:

   $ mkdir -p /opt/arcsight
   $ mount /dev/xvdb /opt/arcsight
   $ mount /dev/xvdc /tmp

4. After this step, we are basically ready to follow the same steps we do at work or at home to install ArcSight systems. However, if we skip the following step, the configuration will not persist and the additional partitions may not be mounted after the next reboot. For that reason, we apply the following commands:

   $ cp /etc/fstab  /etc/fstab.orig
   $ vi /etc/fstab

Insert the lines below at the bottom of /etc/fstab file and save the file.

/dev/xvdb   /opt/arcsight   ext4   defaults,nofail   0   2
/dev/xvdc   /tmp            ext4   defaults,nofail   0   2

You can follow the AWS documentation for more details about this configuration.

Once you have finished all these steps, you have a persistent environment for your ArcSight systems.


Saturday, July 25, 2015

SIEM Deployment – Configuring Peering Between ArcSight Loggers


When deploying your SIEM solution infrastructure with HP ArcSight products, you may consider installing more than one Logger system for several reasons.

Without going into too much detail, let's name the two major ones: first, reaching the capacity limits of a single system (RAM, CPU or the 15,000 EPS level indicated in HP ArcSight documents); and second, providing redundancy by installing an ArcSight Logger appliance in each datacenter so that sending logs does not consume too much bandwidth.

Whatever the reason for using several ArcSight Loggers, the problem of searching across several separate log stores appears.

The solution to this problem is establishing peering between your Logger appliances. Once peering is established, the search you run is executed on all peer Loggers and the results are shown on the Logger where you initiated the search.

Below you can find the details on peer configuration between two Loggers.

For peering, two or more Loggers should first authenticate each other. Two authentication methods exist:

  • Authentication with Logger user credentials
  • Authentication with a Peer Authorization ID and Code

In this article, we will follow the second method to prevent any problems that may be caused by the user credentials in the first method.

Let's assume we will initiate the peering on Logger1. To do so, we should first log in to Logger2 and generate the Authorization ID and Code there.





Once the first step is done, the generated values must be entered on Logger1. After the configuration is saved successfully, peering is done and logs can be queried through either of the Loggers.

UPDATE 29/07/2015: There is something odd about the peering configuration for Loggers. The "Add Peer Logger" option must be configured on both Loggers; it is not enough to see a single peer Logger line under the Peer Loggers menu. The Authorization ID and Code generated on Logger2 for Logger1 must be entered on Logger1, and vice versa. At the end of a successful configuration, you should see two identical lines under the Peer Loggers menu on each Logger in the peering relationship.




Wednesday, July 22, 2015

SIEM Planning - Reference Architecture for Midsize Deployments

After going through several websites and documents, I sadly discovered, like many of you had before, that HP has not yet published any reference architecture or certified design documents for different needs.

I decided to write a series of blog articles proposing reference architectures for SIEM deployments, primarily for HP ArcSight; but since solution components are more or less similar across vendors, I believe they will be applicable to all SIEM environments.

Gartner defines a small deployment as one with around 300 log sources and 1,500 EPS. A midsize deployment is considered to have up to 1,000 log sources and 7,000 EPS. Finally, a large deployment generally covers more than 1,000 log sources with approximately 15,000 EPS. There can of course be deployments with well over 15,000 EPS, but architecture-wise they fall into a separate "very large" category.

In this article, I will give the details of a midsize deployment, covering components both for a primary datacenter and a disaster recovery center, working in an active-passive setup.

The reference architecture for a midsize deployment covers a scenario where the company needs both a long-term log storage solution (ArcSight Logger) and security event management and SOC capabilities (ArcSight ESM).

The diagram below shows how the different components of the architecture are set up.

  • In this setup, software SmartConnectors are used to collect the logs. Up to eight software connectors can be configured on one server, and 1 GB of memory should be allocated for each connector instance on top of what the server needs for its own operation.
  • If appliances are not used, take care to use purpose-built hardware servers whose resources are not shared: like other big-data solutions, these systems are greedy in terms of resources (CPU, memory, IOPS) and do not perform well in virtual environments.
  • Sources send logs to one SmartConnector only. SmartConnector-level redundancy is possible only for syslog connectors, and only when the connectors are placed behind a load balancer; this also provides load sharing and scalability and is a best practice. DB and file connectors do not have such an option, as they pull the logs from the sources.
  • When a DB or file connector is down, no logs are lost: they continue to be written to local resources at the source until the connector comes back.
  • For log storage and searching, the SmartConnectors in each datacenter send their logs to the Logger appliance hosted in the same location, providing important bandwidth savings. Each Logger appliance backs up the other via the failover destination option configured on the SmartConnectors. Thanks to the peering configuration between the Loggers, logs can be queried through any Logger appliance without having to connect to each device.
  • The DC ESM is the primary ESM for both datacenters; the DRC ESM is only used in a DR scenario.
  • Logs and Alerts are archived daily both on ESM and Logger.
  • In a DR scenario, there is no RPO. ESM and Logger configurations are planned to be synchronized manually, and ESM and Logger are expected to be operational immediately.
  • Configuration backups for SmartConnectors and Loggers are collected using ArcSight Management Center (ArcMC).
  • SmartConnector statistics and status can easily be followed using ArcMC as well, and SmartConnector updates are also recommended to be done through the ArcMC GUI.
  • SmartConnector-level configuration options (aggregation, filtering, batching, etc.) are easier to configure using ArcMC.
  • Finally, it is strongly recommended to use a test ESM system to validate all filters, rules, active lists and other configuration objects before applying them to production, as a misconfiguration in these settings may crash your ESM and make you lose very valuable data.

Tuesday, July 21, 2015

SIEM Deployment - ArcSight Logger 6.0 P2 is out

Logger 6.0 P2 is now available for download from the HP Software support download page. Note that it is referred to as Logger 6.02 on the download site.

Logger 6.0 P2 includes:
  • Important security updates (honestly, I could not find what those updates are in the release notes, even though I went through the document multiple times)
  • A fix to peer search (LOG-13574).
  • Modifications to the SOAP APIs:
      • SOAP API login events are now recorded in the audit logs.
      • The SOAP login API now uses the authentication method configured in Logger, which can be an external authentication method such as RADIUS. Clients using the SOAP login API must now pass the login credentials for the authentication method configured in Logger (e.g. RADIUS credentials) instead of the credentials of a local Logger user.

The full release notes can be found on the HP Protect website. You may also find the Logger support matrix useful.

Some important notes:
  • The data migration tool was updated.
  • Migration from older Logger versions should be done to Logger 6.0 P1 followed by an upgrade to Logger 6.0 P2.
To summarize the migration path, for most systems it is 5.0 GA (L5139) > 5.0 Patch 2 (L5355) > 5.1 GA (L5887) > 5.2 Patch 1 (L6307) > 5.3 GA (L6684) > 5.3 SP1 (L6838) > 5.5 (L7049) > 5.5 Patch 1 (L7067) > 6.0 (L7285) > 6.0 Patch 1 (L7307) > 6.0 Patch 2 (L7334).

The Logger trial version is not updated and remains at Logger 6.0 P1.

About OS version supported there are also no changes and RHEL 6.5 and 6.2 as well as CentOS 5.5 and 6.5 are supported.