Overview

Cloudaware is a comprehensive SaaS-based, modular IT Management platform. While all of Cloudaware's capabilities apply to non-cloud use cases, the platform is specifically designed to address the needs of customers who rely extensively on cloud computing infrastructure from Amazon Web Services, Microsoft Azure and Google Cloud Platform.

Cloudaware modules are:

  • CMDB

  • Cloud Security

  • Governance and Compliance

  • Cost Management

  • IT Service Management

Getting Started

Starting Trial

To create a CloudAware account, please follow this link and select the option you would like to sign in with.

image

If you choose to sign up with email, please use your corporate email address and set up a password. Click on “Create Account”.

image

Fill in all the fields required and click on “Update”.

image

Select the modules you are interested in.

image

To finish your registration please add your cloud account (AWS Account, Azure or Google Cloud).

image

AWS Start Guide

Adding AWS Account Using Access And Secret Keys

  1. Go to the Admin console, click the button near Amazon Accounts and click +Add Amazon account.

    img
    img
  2. Check the box Using Access and Secret Keys as the integration type and select one of the IAM policies below depending on the functionality you are going to use.

    img
  3. Sign in to your AWS console, go to 'Security, Identity & Compliance' and choose IAM.

    img
  4. Select 'Users' and click Add User.

    img
  5. Fill in the Account name, check the box 'Programmatic access' and click 'Next: Permissions'

    img
  6. 'Set permissions' and 'Add user to group' are optional

  7. 'Add tags' is optional

  8. Review your choice and click Create user

  9. You will receive the Access key and the Secret key for this user. As an option, you may download the credentials.

    img
  10. Go back to the list of users and choose the one you have recently created. Select the tab 'Permissions' and click Add Inline policy. On the next page select the tab 'JSON' and add the code from Cloudaware console (Amazon account details).

    img

    Open the previously downloaded file in any text editor, copy the code and paste it in the Policy Document. Click Review Policy.

    img

    Fill in the name and click Create policy.

    img
  11. Fill in the Account name, Access key and Secret key and click Check. Your AWS Account will be added automatically.

    img
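Before pasting the keys into the Cloudaware console, you can sanity-check them locally. A minimal sketch (the helper names are ours, not part of Cloudaware; boto3 is only needed if you actually call `verify_keys`):

```python
def looks_like_iam_access_key(access_key):
    """Quick local sanity check: IAM user access key IDs are 20
    uppercase alphanumeric characters and normally start with 'AKIA'."""
    return (len(access_key) == 20
            and access_key.isalnum()
            and access_key == access_key.upper()
            and access_key.startswith("AKIA"))

def verify_keys(access_key, secret_key):
    """Confirm the key pair works by calling STS GetCallerIdentity,
    which requires no extra IAM permissions. Needs boto3 at call time."""
    import boto3  # lazy import: the rest of the sketch is stdlib-only
    sts = boto3.client("sts",
                       aws_access_key_id=access_key,
                       aws_secret_access_key=secret_key)
    return sts.get_caller_identity()["Arn"]
```

If the GetCallerIdentity call succeeds, the Check button in step 11 should succeed as well.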

Adding AWS Account Using IAM Role (More Secure)

  1. Go to the Admin console, select 'Configured AWS accounts', click +Add Amazon account.

    img
    img
  2. Select Using IAM Role as an integration type and download the CloudFormation template.

    img
  3. Sign in to your AWS console, go to 'Management & Governance' and choose 'CloudFormation'.

    img
  4. Click Create Stack → With new resources (standard)

    img
  5. Select Upload a template file and choose the downloaded template, click Next.

    img
  6. Fill in Stack name and External ID*.

    *You can generate it in the Cloudaware console by clicking img
    img

    In the section 'Policies' enable preferred features, click Next.

    img
  7. Optional: set up tags and permissions on the Options page.

  8. On the Review page check the details, check the box I acknowledge that AWS CloudFormation might create IAM resources with custom names. Click Create Stack.

    img
  9. Wait until the stack is created.

  10. Open the tab 'Outputs' for the created stack. Copy the IAM Role ARN value.

    img
  11. Go back to the Admin console. Fill in the Account Name, select Trusted Account, paste the Role ARN and insert the External ID. Click Check → Add.

    img
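The cross-account access behind this integration boils down to an STS AssumeRole call using the Role ARN from the stack Outputs plus the External ID. A rough sketch of how you could reproduce that check yourself (helper names are ours; boto3 and valid AWS credentials are only needed if you call `check_role`):

```python
def assume_role_params(role_arn, external_id=None):
    """Parameters for the cross-account AssumeRole call: the Role ARN
    from the stack Outputs plus the External ID, if one was set."""
    params = {"RoleArn": role_arn, "RoleSessionName": "cloudaware-check"}
    if external_id:
        params["ExternalId"] = external_id
    return params

def check_role(role_arn, external_id=None):
    """Attempt the same kind of AssumeRole call the integration performs.
    Needs boto3 and valid AWS credentials at call time."""
    import boto3  # lazy import keeps the sketch stdlib-importable
    sts = boto3.client("sts")
    resp = sts.assume_role(**assume_role_params(role_arn, external_id))
    return resp["AssumedRoleUser"]["Arn"]
```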

AWS Organizations

About AWS Organizations

AWS Organizations offers policy-based management for multiple AWS accounts. With Organizations, you can create groups of accounts, automate account creation, and apply and manage policies for those groups. Organizations enables you to centrally manage policies across multiple accounts, without requiring custom scripts and manual processes.

Using AWS Organizations, you can create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts. You can also use Organizations to help automate the creation of new accounts through APIs. Organizations helps simplify the billing for multiple accounts by enabling you to set up a single payment method for all the accounts in your organization through consolidated billing. AWS Organizations is available to all AWS customers at no additional charge.

More information can be found here.

img

Benefits Of Using AWS Organizations In Cloudaware

  1. No need to manually add every AWS account

  2. Automate onboarding of your AWS Accounts into CloudAware

  3. Ability to see which AWS Organization Accounts exist but are not in CloudAware CMDB as AWS Accounts.

Requirements

  1. CloudAware AWS Organization Master account has been added to CloudAware CMDB.

  2. CloudAware has the following IAM permission on AWS Organization Master Account:

organizations:DescribeOrganization
organizations:ListRoots
organizations:ListOrganizationalUnitsForParent
organizations:ListAccountsForParent
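As a sketch, the permissions above can be packaged into a standard IAM policy document and attached to the role or user Cloudaware uses on the Master Account (the helper below is illustrative, not a Cloudaware artifact):

```python
import json

ORGANIZATIONS_ACTIONS = [
    "organizations:DescribeOrganization",
    "organizations:ListRoots",
    "organizations:ListOrganizationalUnitsForParent",
    "organizations:ListAccountsForParent",
]

def organizations_policy():
    """Render the read-only Organizations permissions as an IAM
    policy document in the standard 2012-10-17 format."""
    return json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {"Effect": "Allow",
                 "Action": ORGANIZATIONS_ACTIONS,
                 "Resource": "*"}
            ],
        },
        indent=2,
    )
```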

Overview Checklist

  1. Ensure AWS Organization Master Account has a green status indicator in the Admin Panel.

  2. Deploy CloudAware CloudFormation template to all AWS Organization Accounts

  3. Request auto-adding of all AWS Organization Accounts to CloudAware CMDB

  4. AWS Organization Accounts are now visible as AWS Account objects.

STEP 1 - Cloudaware Access To AWS Organizations Master Account

  1. Log in to your CloudAware account and navigate to AWS Organizations.

    img
  2. You should see at least one AWS Organization and N number of AWS Organizational Accounts.

    img

    If you do not see any AWS Organizations, there are two possible reasons:

    1. Insufficient permissions on AWS Organization Master Account.

    2. AWS Organization Master Account has not been added to CloudAware.

    Double check Requirements and Overview Checklist sections above.

STEP 2. CloudAware Access To AWS Organizations Sub-Accounts

  1. Download CloudAware Cloudformation Template with IAM policy from the CloudAware Admin Panel or use your custom template with policy.

  2. Deploy Cloudformation template on every AWS Organizations Sub-Account.

    When granting CloudAware access to AWS Organization Sub-Accounts, the IAM External ID must be either the same value or blank for all AWS Organizational Sub-Accounts. See the screenshot below.
    img

If you need instructions on how to download template and execute CloudFormation Stack, click here.

Adding multiple AWS accounts with CloudFormation StackSets

A stack set can be used to deploy CloudAware CloudFormation template to multiple AWS accounts at once. Since stack sets perform stack operations across multiple accounts, you should have the necessary permissions defined in your AWS accounts before you create your first stack set.

Self-Managed Permissions
  1. Log in to your AWS Console and locate the root account where the stack set is to be created.

  2. In the root account, create an IAM role AWSCloudFormationStackSetAdministrationRole using this template: https://s3.amazonaws.com/cloudformation-stackset-sample-templates-us-east-1/AWSCloudFormationStackSetAdministrationRole.yml

  3. In each (!) target account where individual stacks are to be created, create a service role named AWSCloudFormationStackSetExecutionRole that trusts the root account using this template: https://s3.amazonaws.com/cloudformation-stackset-sample-templates-us-east-1/AWSCloudFormationStackSetExecutionRole.yml

    When creating the trust relationship between each target account and a customized administration role, you can control which users and groups can perform stack set operations in which target accounts. You can also define:

    • Which resources users and groups can include in their stack sets.

    • Which stack set operations specific users and groups can perform. Read more

  4. Ensure that the root account has been added to CloudAware. Any new AWS account where the stack set is deployed will show up in CloudAware automatically.
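With the administration and execution roles in place, the stack set rollout itself can be scripted. A hedged boto3 sketch (stack set name, account IDs and regions are placeholders; boto3 is only needed if you call `deploy`):

```python
def stack_instance_request(stack_set_name, accounts, regions):
    """Parameters for one CreateStackInstances call: the stack set that
    holds the Cloudaware template, the target account IDs and regions."""
    return {"StackSetName": stack_set_name,
            "Accounts": accounts,
            "Regions": regions}

def deploy(stack_set_name, template_body, accounts, regions):
    """Create the stack set and roll it out to every target account.
    Needs boto3 and the StackSets admin/execution roles described above."""
    import boto3  # lazy import: only needed for an actual deployment
    cfn = boto3.client("cloudformation")
    cfn.create_stack_set(
        StackSetName=stack_set_name,
        TemplateBody=template_body,
        # the Cloudaware template creates named IAM resources
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    return cfn.create_stack_instances(
        **stack_instance_request(stack_set_name, accounts, regions))
```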

Service-Managed Permissions

AWS Organizations provides centralized governance over the creation and management of your AWS accounts. Before creating a stack set in your AWS Organizations Master Account:

  1. Sign in to the AWS Console as an administrator of the master account. Select AWS Organizations under Management & Governance.

  2. Enable all features in AWS Organizations: go to Settings tab → select Begin process to enable all features.

    img
    img
    This action is irreversible! Read more
  3. Enable trusted access with AWS Organizations:

    1. Open AWS Console as administrator of your AWS Organizations Master Account.

      The IAM service-linked role created in the organization master account has the suffix CloudFormationStackSetsOrgAdmin. You can modify or delete this role only if trusted access with AWS Organizations is disabled.

      The IAM service-linked role created in each target account has the suffix CloudFormationStackSetsOrgMember. You can modify or delete this role only if trusted access with AWS Organizations is disabled, or if the account is removed from the target organization or organizational unit (OU).

    2. Select CloudFormation under Management & Governance.

    3. Select StackSets. Click Enable trusted access.

      img

Once this is done, StackSets creates the necessary IAM roles in the AWS Organizations master account and in the target accounts to which stack instances will be deployed. If the roles are not created, check the Requirements section.

STEP 3. Notify CloudAware Support

  1. Provide External ID or indicate whether it was left blank via support@cloudaware.com.

  2. Once the request has been resolved, all AWS Organization Sub-Accounts will show up in the Admin panel.

STEP 4. Identify AWS Organization Accounts That Didn’t Get Onboarded Successfully

  1. Navigate to CloudAware CMDB → AWS Organizations → AWS Organizational Accounts.

  2. Click Browse Objects.

    img

Paste the following query and click Search:

`Deleted From AWS` equals null -> `AWS Organization Account Name` ASC, `Account`.`Account Name` as "Actual Account", `Account ID`, `Email`, `Joined Method`, `Joined Timestamp`, `Parent Root ARN`, `Status`

Any AWS Organizational Account where Actual Account is blank will not be added automatically since CloudAware is unable to assume the IAM role in it.

Enabling AWS Detailed Billing

Enabling AWS Billing Reports In Amazon Console

  1. Navigate to My Billing Dashboard under your username in your Amazon console.

    img
  2. Select Preferences → Billing Preferences on the left to view Billing Reports options.

  3. In the section 'Detailed Billing Reports' check the box "Turn on the legacy Detailed Billing Reports feature to receive ongoing reports of your AWS charges". Select Receive Billing Reports, Monthly report, Detailed billing report, Cost allocation report, and Detailed billing report with resources and tags.

    img
  4. Select an S3 bucket to store your billing files in: Configure → Verify → Valid Bucket.

  5. Ensure you include tags in your billing reports. Click Manage report tags to review the list of all the tags you have created and activate the ones relevant to your AWS usage.

    img
    img
  6. Go back to Billing preferences. Click Save preferences.

Setting Up AWS Billing Integration In Cloudaware

  1. Add your Amazon Payer account to Cloudaware. See more here.

    img
  2. Cloudaware automatically discovers the bucket your billing data is stored in. Allow some time for the bucket to be listed in CMDB Navigator → S3 before proceeding.

  3. Go to Admin → N Configured AWS accounts → select the Payer account → click the three dots and select "Download CloudFormation Template" to download the template with the billing bucket info pre-filled by Cloudaware.

    img
    img
  4. Update the template in your AWS console to provide Cloudaware with read-access to your billing files.

  5. The billing data will be collected by Cloudaware within several hours.

Azure Start Guide

Microsoft Azure Setup

Create Application
  1. Select Azure Active Directory in the Azure Portal.

    screenshot
  2. Select App registrations → +New registration.

    screenshot
  3. Insert the following information for your Azure Application:

    Name: e.g. cloudaware-api-access-test

    Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant)

    Redirect URI: Web - https://cloudaware.com

    screenshot

    Click Register.

Configure Permissions
  1. Select the application that you have just created

    screenshot
  2. Select 'API permissions'. Click +Add a permission.

  3. Select the tab 'Microsoft APIs'. Select 'Azure Service Management'.

    screenshot
  4. Select 'Delegated permissions' and check the box 'user_impersonation. Access Azure Service Management as organization users (preview)'. Click Add permissions.

    screenshot
  5. Select Microsoft Graph.

    1. Select APPLICATION / Read directory data

    2. Select DELEGATED / Read directory data, Sign in and read user profile (as shown in the screenshot below)

      Microsoft Graph will be added by default with 'DELEGATED / Sign in and read user profile'.
  6. Click +Add a permission to choose one more API: Azure Active Directory Graph.

    1. Select APPLICATION / Read directory data

    2. Select DELEGATED / Read directory data, Sign in and read user profile (as shown in the screenshot below)

      screenshot
      You should have three APIs added in total: Windows Azure Service Management API, Microsoft Graph and Azure Active Directory Graph.
  7. Having added the APIs, click Grant admin consent for Default Directory to grant the permissions.

    screenshot
    Microsoft takes up to 30 minutes to populate the permissions added in previous steps.
Configure Keys
  1. Select 'Certificates & secrets' → +New client secret

  2. Enter the description: ca-api-key

  3. Set the EXPIRES to: Never

  4. Click: Add

    screenshot
    Don’t forget to save the secret as it is required for the Cloudaware integration. Once you leave this page, the secret cannot be retrieved.
  5. Save the secret value in a secure location.

    screenshot
Getting The Active Directory ID (Tenant ID)

Open Azure Active Directory, click Properties and find Directory ID. Copy its value as it is required for the CloudAware integration.

screenshot
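With the Directory (tenant) ID, the Application (client) ID and the client secret collected above, you can verify the credentials yourself via the OAuth2 client-credentials flow against the Azure AD v2.0 token endpoint. A stdlib sketch that only builds the request (sending it requires network access and is left as a comment):

```python
from urllib.parse import urlencode

def token_request(tenant_id, client_id, client_secret):
    """URL and form body for the client-credentials flow against the
    Azure AD v2.0 token endpoint."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # token scoped to Azure Resource Manager, matching the Reader role
        # assigned in the next section
        "scope": "https://management.azure.com/.default",
    })
    return url, body

# Sending the request (requires network):
#   import urllib.request, json
#   url, body = token_request(tenant, app_id, secret)
#   resp = urllib.request.urlopen(url, body.encode())
#   token = json.loads(resp.read())["access_token"]
```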
Assigning Role to Subscriptions
  1. From the Azure Portal Dashboard use the Azure Search window to navigate to Subscriptions:

    screenshot
  2. Select your subscription:

    screenshot
  3. Select Access control (IAM):

    screenshot
  4. Click + Add → Add role assignment

    screenshot
  5. In 'Add role assignment':

    1. Select 'Role': Reader

    2. Select 'Assign Access to': Azure AD user, group, or service principal

    3. Type the application name: cloudaware-api-access (in the example below: cloudaware-api-access-test)

      screenshot
      These steps are required for each subscription that will be integrated into Cloudaware.
Grant Access To Key Vaults

Cloudaware has to be granted access to Key Vault to be able to check the expiration dates of keys and secrets. Set up the access policy for the Cloudaware application on the Key Vault level.

CloudAware retrieves metadata only ('Azure Key Vault Key' and 'Azure Key Vault Secret' objects). No keys or secrets are accessible to CloudAware.

Select All services → Key Vaults.

  1. Select 'Access Policies':

    screenshot
  2. Click +Add new

  3. 'Select principal': cloudaware-api-access

  4. Select 'Key permissions': List

  5. Select 'Secret permissions': List

    screenshot
  6. Repeat steps 1-5 for each Key Vault.

Cloudaware Setup

Adding Azure Active Directory to Cloudaware
  1. Log in to your CloudAware account. Select Admin → Azure Active Directories & Subscriptions. Click +Add.

    screenshot
  2. Fill in the values and click Save:

    1. Name

    2. Active Directory ID (Tenant ID)

    3. Application ID (Client ID)

    4. Select the Azure environment (Azure, Azure China, Azure Government, Azure Germany)

    5. Client Secret

      screenshot
      * - check the box 'Automatically Discover Subscriptions' to allow CloudAware to automatically discover and add all the subscriptions it has been granted access to in Azure Active Directory. Leave it unchecked if you would like to add your Azure subscriptions manually.
  3. The green light in 'Status' means that Azure Active Directory has been added successfully. If there is a red light, please contact support@cloudaware.com.

    screenshot
Adding Azure Subscription to Cloudaware

If you have not checked the box 'Automatically Discover Subscriptions' as described in the previous section, follow these steps to add subscriptions manually:

  1. Log in to your Cloudaware account and go to Admin → Azure Active Directories & Subscriptions. Click +Add.

  2. Click Azure Active Directory link above the integration details and select the tab 'Subscriptions'. Click +Add Azure Subscription.

  3. Fill in the values and click Save:

    1. Name of the subscription

    2. Subscription ID

    3. Select the appropriate Active Directory from the list.

      screenshot
  4. Review all subscriptions under the tab 'Subscriptions'. The green light in 'Status' means that the Azure Subscription has been added successfully. If there is a red light, please contact support@cloudaware.com.

    screenshot
  5. If the checkbox ‘Automatically Discover Subscriptions’ is checked, the tab ‘Untracked Subscriptions’ shows all Azure subscriptions Cloudaware has discovered in your Active Directory but cannot collect due to insufficient access caused by an incorrect role assignment (see step 5 in Assigning Role to Subscriptions - Reader or higher is required).

Enabling Azure Billing

When you enroll your Azure account to Enterprise Agreement (EA), you get an Enrollment Number allowing you to use Azure Billing. To add Azure Billing to your Cloudaware account, follow these steps:

  1. Log in to your Azure EA account at https://ea.azure.com.

    Access to the EA portal is given to one person within your organization during enrollment.
  2. Select Reports → Download Usage → API Access Key. Copy the Primary Key and save it for later use.

    screenshot
    The key is valid for 6 months. When it expires, you will need to generate it again.
  3. Select Manage → Enrollment. Copy the Enrollment number and save it for later use.

    screenshot
  4. Sign in to your Cloudaware account. Select Admin, select Azure Billing and click +Add.

    screenshot
  5. Fill in your Enrollment number and the API key. Click Save.

    screenshot
  6. The green light in 'Status' means that Azure Billing has been added successfully. If there is a red light, please contact support@cloudaware.com.

    screenshot

Google Cloud Platform Start Guide

Setting Up Service Account

  1. In the Google console go to IAM & admin.

    image
  2. Go to Service accounts. Click Create Service Account.

    image
  3. Enter the name for the service account, e.g. "cloudaware-service-account". Click Create.

    image
  4. Specify the Project role as 'Viewer'. Click Continue.

    image
  5. Click +Create key. Select 'JSON' → Create.

    image
  6. A .json file will be automatically downloaded by the browser.

    image
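Before uploading the downloaded .json key elsewhere, you may want to confirm it really is a service-account key. A small stdlib sketch (the field list reflects the standard service-account key format; the helper name is ours):

```python
import json

REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_key_file(raw):
    """Light validation of the downloaded .json key: confirm it is a
    service-account key and return the service account e-mail."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing or data.get("type") != "service_account":
        raise ValueError(f"not a service-account key file; missing: {missing}")
    return data["client_email"]
```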

Creating Custom Role (optional)

A custom role is necessary if you are going to use backups and labels.

  1. Go to IAM & admin, select "Roles" and click +Create Role.

    image

    Add the name and the description of the custom role. Set 'Role launch stage' as General Availability and click + Add Permissions.

    image
  2. Select the following permissions:

    For backups:

    • compute.disks.get

    • compute.disks.createSnapshot

    • compute.disks.list

    • compute.disks.setLabels

    • compute.snapshots.create

    • compute.snapshots.delete

    • compute.snapshots.get

    • compute.snapshots.list

    • compute.snapshots.setLabels

    • compute.zones.get

    • compute.zones.list

    For labels:

    • bigquery.datasets.update

    • bigquery.tables.update

    • cloudsql.instances.update

    • compute.addresses.setLabels

    • compute.disks.setLabels

    • compute.forwardingRules.setLabels

    • compute.globalAddresses.setLabels

    • compute.globalForwardingRules.setLabels

    • compute.images.setLabels

    • compute.instances.setLabels

    • compute.snapshots.setLabels

    • compute.targetVpnGateways.setLabels

    • compute.vpnTunnels.setLabels

    • dataproc.clusters.update

    • dataproc.jobs.update

    • cloudkms.cryptoKeys.update

    • storage.buckets.update

  3. Assign the custom role to the service account you have just created (IAM & admin → IAM → select the service account).

    image

Enabling Google APIs on Google Project

  1. Go to APIs & Services.

    image
  2. Click +ENABLE APIS AND SERVICES.

    image
  3. Using the search bar, locate and enable:

    • Compute Engine API

    • Identity and Access Management (IAM) API

    • Cloud Resource Manager API

      image
      image

Google Organizations

If you use Google Organizations, assign the following roles to the service account created earlier so that Cloudaware can consume your Organization data:

  • Organization Role Viewer

  • Folder Viewer

  • Organization Viewer

  • Organization Policy Viewer

    image

Click Save.

Assign the 'Project Viewer' role on the organization level for CloudAware to automatically add and collect Google Projects within a Google Organization:
image

Adding Service Account to Cloudaware

  1. Log in to your Cloudaware account and select Admin.

    image
  2. Select ''Google Service Accounts & Projects'' and click +Add.

    image
  3. Click +Add Google Service Account. Fill in the Service Account Name and click Load credentials from file to upload credentials from the file you have downloaded before (see step 6 in Setting Up Service Account).

    image
  4. Check the tab 'Service Accounts'. The green light in 'Status' means that your Google Service Account has been added successfully. The blue light means that the integration is ok but Cloudaware doesn’t have access to your Google Resource Manager. If there is a red light, please contact support@cloudaware.com.

    image
  5. Go to the tab 'Projects'. Assign the service account you added to a project or any object higher in the hierarchy to define the level on which your Google Resource Manager objects will be collected by Cloudaware. Check the guide Managing Google Projects & Service Accounts for more details.

Managing Google Projects and Service Accounts in Cloudaware

Service Accounts

The tab 'Service Accounts' shows the list of all Google service accounts added. You can edit any service account’s details if necessary by clicking the triple dots on the right.

image

Projects

The tab 'Projects' is designed to show the list of Google Resource Manager Projects discovered and provide the information on their performance (Lifecycle State is received directly from GCE; Status marks the status of the integration on the Cloudaware side).

image

The tab has two view options: Table (default) and Tree. The Table view displays the list of all Google projects discovered.

Switch to the Tree view to see a hierarchical structure of your Google Resource Manager objects (the organization, folders and projects) available under the service account1 added to Cloudaware.

The column 'Service Account Assignment' shows the type of a service account assignment to an object. Initially all objects have the state ‘none’ and are not being collected by Cloudaware.

image

Select the objects that you would like to be collected. You can assign a service account manually to each individual project or enable auto-collection of Google projects on the organization or folder level.

Assign a service account1 to a parent object (a folder or an organization) in the tree structure in order to enable auto-collection for all child objects listed under this node.

image
1 Cloudaware must be granted access at organization level or at folder level to be able to display the tree structure of your GCP environment.

Once the Google project is collected by Cloudaware, the state changes.

image

All states available in the column 'Service Account Assignment':

  • none - no service account was assigned

  • auto - a service account was assigned automatically from a parent one (only for the projects collected automatically)

  • manual - a service account was assigned manually (for folders or the projects with a manually assigned service account)

  • inherited - a service account is being inherited from a parent one, though the process is not yet complete because child objects are still being collected or a technical error2 occurred

2 The error message is received directly from Google. Fix the error in your Google console and refresh the page. Once the project is collected by Cloudaware, the state changes from 'inherited' to ‘auto’.

Using the Assign button you can also re-assign or unbind the service account:

‘Unbind and disable projects’ auto-creation’ (for organizations)

‘Unbind and inherit from parent’ (for folders)

‘Unbind and stop collecting’ (for projects)3

image
3 By clicking ‘Unbind and stop collecting’ you send a removal request for a project. The project will be marked with the 'Delete Requested' label.

If any service account is assigned to the object higher in the hierarchy, the removed Google Project will be collected by Cloudaware and displayed in the tree again. To prevent collection of a removed project, you should blacklist it first using the tab 'Projects Blacklist' and then request a removal.

Projects Blacklist

The tab 'Projects Blacklist' allows adding filters to exclude certain projects from being collected in Cloudaware.

image

Click Add Exception and insert regular expressions setting up the filter logic.

image
image

Click Save.

Regexes refer to Project IDs, not names.
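The filter logic is ordinary regular-expression matching against Project IDs; a quick sketch for testing your patterns locally before saving them (the example patterns are ours, not Cloudaware defaults):

```python
import re

def blacklisted(project_id, patterns):
    """Return True if the project ID matches any blacklist regex."""
    return any(re.search(p, project_id) for p in patterns)

# Example: exclude sandbox projects and anything ending in "-test"
example_filters = [r"^sandbox-", r"-test$"]
```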

Enabling GCP Billing

File Export

When enabling the GCP billing integration, select a type of file export for your Google service account added to Cloudaware in order to avoid duplication of the billing data. Note that BigQuery export is preferable.

  1. Log in to your Google console. Go to Billing on the left.

  2. Go to Billing export and select File export to set up a bucket for storing the billing reports. Select .csv format.

    image
  3. Go to Storage → Browser and click the bucket you selected before. Billing reports will be available in the 'Objects' section within 24 hours after reporting was enabled.

    image
  4. Go to the tab 'Permissions'. Ensure the service account assigned to the appropriate project has the role ‘Storage Object Viewer’ for Cloudaware to have getObject access for reading billing reports.

    image
  5. Log in to your Cloudaware account. Go to Admin.

    image
  6. Select Google Project Billing and click +Add.

    image
  7. Select Storage and insert the required details. Click Save.

    image
  8. The green light in 'Status' means that the Google billing integration has been added successfully. If there is a red light, please contact support@cloudaware.com.

    image

BigQuery Export (preferable)

Since the BigQuery export type is continuously updated by Google, we recommend it for working with billing data in Cloudaware.

  1. Log in to your Google console. Go to Billing on the left.

  2. Go to Billing export and select BigQuery Export to have BigQuery datasets enabled.

    image
  3. Go to Big Data → BigQuery. In the section 'Resources' select the GCP project the billing datasets are consolidated under.

    image
  4. Select the dataset 'Billing' and pick the table in question. Click Share Dataset.

    image
  5. Type the name of the service account that manages the GCP project for billing. Click Add → BigQuery → assign the role 'BigQuery Data Viewer' → Done.

    image
  6. Log in to your Cloudaware account. Go to Admin.

    image
  7. Select Google Project Billing and click +Add.

    image
  8. Select BigQuery and insert the required details. Click Save.

    image
  9. The green light in 'Status' means that the Google billing integration has been added successfully. If there is a red light, please contact support@cloudaware.com.

    image
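Once the 'BigQuery Data Viewer' role is granted, the shared billing table can also be queried directly. A sketch that only builds the SQL against the standard billing-export schema (running it requires the google-cloud-bigquery client; the table name is a placeholder you take from the 'Billing' dataset shared above):

```python
def cost_by_service_sql(table):
    """SQL for total cost per service against a standard GCP billing
    export table (fully qualified name: project.dataset.table)."""
    return f"""
        SELECT service.description AS service, ROUND(SUM(cost), 2) AS total
        FROM `{table}`
        GROUP BY service
        ORDER BY total DESC
    """

# Running it (requires google-cloud-bigquery and the role granted above):
#   from google.cloud import bigquery
#   query = cost_by_service_sql("my-proj.billing.gcp_billing_export_v1_XXXX")
#   for row in bigquery.Client().query(query).result():
#       print(row.service, row.total)
```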

Adding Cloud Accounts via CloudAware API

CloudAware uses the OAuth standard to identify users who send requests to the CloudAware API. A user should get a token to be able to add or see accounts (depending on their permissions).

Get OAuth Token

  1. Log into your Cloudaware account → Admin.

    img
  2. OAuth tab → Create Token.

    img
  3. Copy and save the token using Copy To Clipboard button on the right.

    img
  4. Click Got It, Close Window to proceed.

    img
  5. Check the list of tokens provided. NOTE: You can have only 4 active tokens at a time! The column 'Use Count' shows the number of times the token has been used.

  6. If a token has been compromised, it must be revoked. Click the triple dots → Revoke → Yes, Revoke

    img
  7. Click +Create New Token to get a new token.

    img

ATTENTION: In case you have IP restrictions enabled in your environment, check your OAuth policies settings:

Setup → Manage Apps → Connected Apps → CloudAware OAuth2 → Edit Policies

img

Select 'Relax IP restrictions' in 'IP Relaxation' → Save

img

Get API Key

The API key is required for performing requests to Cloudaware API.

  1. Create a Google Cloud Project account

  2. Submit an access request to support@cloudaware.com providing the full e-mail of the associated Google Cloud Project account

  3. Activate access to CloudAware API using API Manager: go to https://console.cloud.google.com → API Manager → Library → Private APIs

    img
  4. Select external.endpoints.cloudaware-vm.cloud.goog → Enable

    img
    img
  5. Go back to API Manager → Credentials → Create credentials → API Key

    img
  6. Copy the API Key to the clipboard

    img

Add Cloud Account Using API

The Google APIs Explorer is a tool that allows you to explore and test APIs.

  1. Go to Google APIs Explorer using this link

  2. Select Set API key / OAuth 2.0 Client ID. Click Save

    img
    img
  3. Services → External API v1 → select external.amazon.account.create

    img
  4. Insert the OAuth token generated before and fill out the request body as below

    img
  5. Click Authorize And Execute

Cloudaware REST API

Overview

This tutorial shows you how to activate access to the CloudAware REST API and invoke the API methods using HTTP requests.

Use Case: you want to create incidents when a specific instance is overutilized.

Getting started

Google Cloud Project
  1. get a Google Cloud Project account (for this purpose, a free trial account will work);

  2. submit an access request by sending an e-mail to support@cloudaware.com with the full e-mail address of the associated Google Cloud Project account;

  3. once access is granted, you can enable the API:

    img
    • go to https://console.cloud.google.com

    • choose API Manager

    • choose Library

    • choose a new tab Private APIs, and press external.endpoints.cloudaware-vm.cloud.goog

      img
    • click Enable

      img
  4. Generating API key:

    • go to your API Manager directory

    • choose Credentials

      img
    • select Create credentials - API Key and then, copy the API Key to the clipboard.

      img
Configuring Incident Webhook in CloudAware
  1. Log in to your org and go to the Admin tab

    img
  2. Scroll down to Other integrations and choose CloudAware Incident Webhook:

    img
  3. Then choose Add Integration

    img
  4. Type the Name for your integration. (This name populates the Incident Source Provider field on the Incident object; it will also help you find the source provider of each created incident if you have several accounts.)

    img
  5. Replace {API_KEY} with the API key from your Google Cloud project (generated previously). The result is the URL for the HTTP requests.

    img
    img
API Explorer
  1. Go to https://external-dot-cloudaware-vm.appspot.com/_ah/api/explorer and select Set API key / OAuth 2.0 Client ID

    img
  2. Select Custom credentials, insert your API key and press Save

    img
  3. Select one of the available APIs

    img
  4. Insert the parameters and the request body (in our use case we create incidents for overutilized instances, so the subject will be “CPUUtilization is too high on eoduat-eu-central-db.RDS.eu-central (instance name)”). Tips are available under the question mark on every field of the request body; they will help you insert the right values.

    img
    NOTE

    Your webhook key was generated in CloudAware

    img
  5. Execute the request without OAuth (or Authorize and execute if it is required by your API method).

  6. Check the new incident in your org

    img
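Outside the API Explorer, the same incident can be created with a plain HTTP POST to the webhook URL from step 5 of the webhook configuration. Below is a minimal sketch in Python; the payload field names ("subject", "webhookKey") are illustrative assumptions, so verify them against the tooltips in the API Explorer before use.

```python
import json
import urllib.request

# Placeholder: use the webhook URL shown in your org (with {API_KEY}
# already replaced by the key from your Google Cloud project).
WEBHOOK_URL = "https://REPLACE-WITH-YOUR-WEBHOOK-URL"

def build_incident_body(subject, webhook_key):
    """Serialize an incident payload; field names are illustrative."""
    return json.dumps({"subject": subject, "webhookKey": webhook_key}).encode()

body = build_incident_body(
    "CPUUtilization is too high on eoduat-eu-central-db.RDS.eu-central",
    "YOUR_WEBHOOK_KEY",
)

# To actually send the request (requires network access):
# request = urllib.request.Request(
#     WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"})
# urllib.request.urlopen(request)
```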

CMDB

Configuration Item (CI) Data Model

Cloudaware has an object model that reflects different types of artifacts such as cloud instances, firewall policies and operating system users. Cloudaware updates the data model frequently as cloud providers roll out new features and services.

To see the latest data model, perform the following steps:

  • Go to Setup - Objects (type Objects in the Quick find box)

    image

You will see the list of Custom objects.

For example, let’s drill down to the AWS Account object:

image

Here is the list of all custom fields on this object:

image

Creating and Managing Custom Objects

Users can customize Cloudaware by adding custom objects. Custom objects can be Applications, Requests or any type of record that is not already covered by a standard object.

image Refer to Salesforce Trailhead Module to learn how to create custom objects.

Customizing Standard Objects

Another customization option that is available to users is customizing Standard objects. For example, a user may choose to add a new field to a Standard object such as AWS EBS Volume.

Creating List Views

List view is a convenient way to filter out unnecessary information. Perhaps we want to see AWS Instances that are running in a certain region and belong to a certain VPC. Such filters should be created within a list view.

image Refer to Salesforce Documentation to learn more about creating List Views.

Editing Page Layouts

Using Page Layouts, users can add or remove fields from any custom or standard object. In addition, the page layout editor lets users add or remove object-related lists.

image Refer to Salesforce Trailhead Module to learn how to edit page layouts.

All standard objects are searchable by default.

image Refer to Salesforce Documentation to learn more about searching custom objects.

Cloud Security

Governance and Compliance

Cost Management

IT Service Management

Cost Management

Description

Cloud cost management begins with the ability to view usage and costs across your portfolio of applications. Dive deeper to understand usage across development and production environments, within application tiers, and among infrastructure types.

image

image To learn more about AWS Cost Management please refer to the following AWS documentation

Features

Most capabilities of Cloudaware Cost Management are derived from force.com Analytics API functionality.

  • Force.com report and dashboard builder

  • Budget Alerts

  • Spending Breakdown

  • Daily Email Reports

  • Reserved Instances Planner

  • Chargebacks and allocation by service line

  • Open API Access

  • Enterprise Security

  • Advanced Export Options

  • Features specifically for Resellers, MSPs and CSBs

  • Analytics of Blended vs. Unblended rates

image Full details are here.

Easy To Share

Sharing cost analysis and spending reports in CloudAware is easy. From scheduled email PDF or Excel reports to mobile notifications via Chatter, CloudAware will deliver the reports to the users you want in whatever format necessary.

Waste Detection

CloudAware automatically detects waste in your accounts. For example, if CloudAware notices that an EBS volume has been unattached for over 10 days, the cost management module will issue an alert indicating the amount of potential savings. There are over 100 waste-seeking policies. Using the Trusted Advisor policy designer, users can customize policies to increase or decrease the number of days a resource is considered idle before an alert is issued. Users can also create new policies altogether. On average, CloudAware detects more than $40,000 in annual savings after observing an account for at least two weeks.
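As an illustration, the idling check behind such a policy can be sketched as follows. This is a hypothetical sketch, not Cloudaware's implementation; the real policies are configured through the Trusted Advisor policy designer.

```python
from datetime import datetime, timedelta, timezone

def is_idle_volume(state, detached_since, idle_days=10, now=None):
    """Flag an EBS volume that has been unattached longer than the policy allows.

    Hypothetical sketch: 'available' is the EC2 state of an unattached volume;
    idle_days mirrors the customizable threshold described above.
    """
    now = now or datetime.now(timezone.utc)
    return state == "available" and (now - detached_since) >= timedelta(days=idle_days)

# An unattached volume detached 12 days ago trips the default 10-day policy:
twelve_days_ago = datetime.now(timezone.utc) - timedelta(days=12)
alert = is_idle_volume("available", twelve_days_ago)  # True
```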

Blended and Unblended Rates

AWS resellers and cloud service brokers depend on detailed billing files from Amazon Web Services. These files are difficult to parse due to their complexity and size, often requiring resellers, CSBs and large AWS consumers to invest heavily in building in-house applications just to process AWS invoices. CloudAware not only supports "giant" billing files but also provides advanced analytics, customization and invoicing based on the data derived from detailed billing files.

image To learn more about AWS Consolidated Bills please refer to the following AWS documentation.

Advanced Report and Dashboards Editor

Using force.com's extremely powerful report builder, CloudAware customers can create highly specific account and application reports. Any attribute of any AWS object can be used to create filters for detailed billing dashboards.

image

Tagging

Follow these steps to tag resources from Cloudaware:

  1. Choose any object type (e.g. AWS EC2 Instance) and open the appropriate list view.

    img
  2. Select resources you want to tag and click Tag Selected.

    img
  3. Click +Add Tag and enter the tag name and the tag value. Click Save.

    img
  4. It’s also possible to tag resources one by one. Open any resource e.g. AWS EC2 Instance and click Tag Instance.

    img
  5. New tags and their values will immediately appear in your AWS Console.

Backup User Guide

Backup and Replication

The Cloudaware Backups module can perform scheduled backups of EC2, RDS, RDS Clusters, S3 Buckets and Google Disks. Additional plugins are available for backing up MySQL, MongoDB and DynamoDB databases.

| Source | Supported Target(s) | Regional Replication Support | Multi-Account Support | Supported Frequencies |
|--------|---------------------|------------------------------|----------------------|-----------------------|
| EC2 | AMI | Y | N | Daily, Weekly, Monthly |
| RDS | RDS Snapshots | Y | N | Daily, Weekly, Monthly |
| MySQL | S3, Glacier, AWS Storage Gateway | N | N/A | Hourly, Daily, Weekly, Monthly |
| MongoDB | AMI, S3 | Y for AMI | N/A | Hourly, Daily, Weekly, Monthly |
| DynamoDB | S3 | N | Y | Hourly, Daily, Weekly, Monthly |
| S3 | S3 | Y | Y | Hourly, Daily, Weekly, Monthly |

In order to perform MySQL and MongoDB backups to S3, Cloudaware will need access to the operating system via SSH.

EC2 Instance Backup

Enabling EC2 Instance Backup

Sign in to your Cloudaware account. Go to Admin → Backup. To set up a global backup policy, go to the "Settings" tab and select the account.

screenshot

To enable scheduled backups, select the start time from the 'Start at' list.

A backup task will back up all non-terminated instances, including instances in a 'stopped' state.

You can also select any instance from the list under the 'EC2 instances' tab and set the backup policy for it. This will override the global backup policy set on the account.

Disabling EC2 Instance Backup

To disable scheduled backups for an EC2 Instance, select the instance in question and click 'Remove Policy':

screenshot
Existing backups will be stored until they expire according to the backup policy rules. If an instance is terminated, backups are stored as well until they expire.

Backup Metadata

AMI Description will contain the following information about the original instance from which a backup is taken:

  • Instance ID

  • Key

  • Security Group(s)

  • Instance Type

  • Availability Zone

  • Public IP Address

In addition, the AMI will contain two backup tags:

  • Instance ID

  • Policy Type

Cost allocation tags can be applied to AMIs as well. Contact support@cloudaware.com for additional details.

Viewing EC2 Instance Backup Status

By looking at any instance in Cloudaware, we can tell what backup policies are in effect and check their status.

The following fields mark the backup policy and the backup status:

  • Daily Backups (the number of daily snapshots specified by the policy)

  • Daily Backups Status (the actual number of available consecutive AMIs starting from today’s date)

    For example, today is the 10th day of the month, the Daily Backups field has a value of 7, but Daily Backups Status has a value of 3. This means Cloudaware found backups only for the last 3 days, and the backups for the other 4 days that should exist are missing. Whenever a backup policy value does not equal its status value, a backup is missing.
  • Weekly Backups

  • Weekly Backups Status

  • Monthly Backups

  • Monthly Backups Status

In order to view EC2 Instance backups, navigate to an instance of choice. Scroll down until you see a related list called EC2 Instance Backups.
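The policy-versus-status comparison described above can be sketched in a few lines. This is an illustrative sketch; the function and field names are hypothetical, not Cloudaware's internals.

```python
def missing_backups(policy, status):
    """Return backup frequencies whose actual count falls short of the policy.

    policy and status map a frequency name (e.g. "daily") to an AMI count,
    mirroring the Daily/Weekly/Monthly Backups and Backups Status fields.
    """
    return {freq: policy[freq] - status.get(freq, 0)
            for freq in policy
            if status.get(freq, 0) < policy[freq]}

# A daily policy of 7 AMIs with only 3 consecutive AMIs found means
# 4 daily backups are missing:
gaps = missing_backups({"daily": 7, "weekly": 4}, {"daily": 3, "weekly": 4})
```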

Backup Tags

EC2 backup functionality relies on AWS tags to tell the scheduled backup task how an instance should be backed up.

AWS backup tag name is: gig-backup

AWS backup tag value is: Nd-Nw-Nm, where N is number of days/weeks/months

For example, the backup policy 3d-10w-12m creates daily backups rotated over a period of 3 days, weekly backups rotated over 10 weeks and monthly backups rotated over 12 months. Hyphens can be omitted.

Examples of valid backup tag values:

1. 3d-10w-12m
2. 4d10w12m
3. 3d-10w
4. 3d
5. 3

Values 4 and 5 are equivalent due to backwards compatibility.
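For illustration, a tag value in this format can be parsed as follows. This is a sketch under the rules stated above (optional hyphens, optional trailing parts, bare number means days), not Cloudaware's own parser.

```python
import re

# Nd-Nw-Nm with optional hyphens and optional trailing parts; a bare
# number means days (backwards compatibility).
_TAG_RE = re.compile(r"^(\d+)d?(?:-?(\d+)w(?:-?(\d+)m)?)?$")

def parse_backup_tag(value):
    """Return the daily/weekly/monthly retention encoded in a gig-backup tag."""
    match = _TAG_RE.match(value.strip())
    if not match:
        raise ValueError(f"invalid gig-backup tag value: {value!r}")
    days, weeks, months = (int(g) if g else 0 for g in match.groups())
    return {"daily": days, "weekly": weeks, "monthly": months}
```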

Orphaned Backup Retention

Orphaned Backup Retention allows you to manage the AMI cleanup process. When setting up the global backup policy, insert the number of days in the 'Orphaned Retention (Days)' field.

For example, Orphaned Retention = 10 means that backups will be deleted 10 days after the source instance is terminated:
screenshot
If an instance is stopped or has the backup tag removed, the images will still be stored.

EC2 Images Replication

To enable EC2 images replication, add the tag gig-replicate in the following format to an instance:

AWS backup tag name is: gig-replicate

AWS backup tag value is: N@REGION (multiple policies should be separated by semicolons), where N is the number of images to replicate and REGION is the destination region.

For example, the replication policy "2@us-west-1; 3@us-west-2" will replicate the 2 latest AMIs of the instance to the us-west-1 region and the 3 latest AMIs to us-west-2.

Examples of valid replicate tag values:

10@us-west-1
5@us-west-2; 3@eu-west-1
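The replicate tag value can be parsed similarly. An illustrative sketch, not Cloudaware's own parser:

```python
def parse_replicate_tag(value):
    """Parse a gig-replicate value like '2@us-west-1; 3@us-west-2'.

    Returns a list of (image_count, destination_region) pairs.
    """
    policies = []
    for part in value.split(";"):
        part = part.strip()
        if not part:
            continue
        count, sep, region = part.partition("@")
        if not sep or not count.isdigit() or not region:
            raise ValueError(f"invalid gig-replicate policy: {part!r}")
        policies.append((int(count), region))
    return policies
```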

RDS Instance Backup

RDS Instance backup works exactly the same as EC2 Instance Backup described above with one exception: RDS Snapshots only have tag-based metadata.

S3 Bucket Backup

S3 buckets can be backed up to another S3 bucket within the same or another AWS account. Both source and target accounts have to be added to Cloudaware.

The Cloudaware IAM user used to access the source bucket must have write access to the target bucket in the target AWS account.

screenshot

S3 backups use tags. The buckets that need to be backed up must have tags applied in the following format: 1d0w0m@cf-templates-1ajskw0nz6e8-us-east-1

AWS backup tag name is: gig-backup

AWS backup tag value is: Nd-Nw-Nm@BucketName-region

For example, the tag 1d0w0m@mybucket-us-east-1 takes the source bucket and copies its content to the target bucket once a day.
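The policy part and the target part of such a tag value can be separated on the '@' character. A sketch only; note the target is kept as one string because bucket names may themselves contain hyphens, which makes the bucket/region boundary ambiguous.

```python
def split_s3_backup_tag(value):
    """Split 'Nd-Nw-Nm@BucketName-region' into (policy, target)."""
    policy, sep, target = value.partition("@")
    if not sep:
        raise ValueError(f"missing '@' in S3 backup tag value: {value!r}")
    return policy, target
```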

Cloudaware creates a directory gig-backup in the target bucket. Inside this directory you can see directories for each source bucket.

Cloudaware automatically deletes folders with data that falls outside the backup policy retention criteria.

  1. Create a target bucket.

  2. Apply the following IAM policy to the target bucket.

BUCKET_NAME is the target bucket. It is assumed that a Cloudaware user already has access to the source buckets.

Ensure that every source Cloudaware account is granted access to the target bucket.

Sample
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Backup Bucket Policies",

  "Resources" : {
    "S3BucketPolicy" : {
      "Type" : "AWS::S3::BucketPolicy",
      "Properties" : {
        "Bucket" : "BUCKET_NAME",
        "PolicyDocument" : {
          "Statement" : [
            {
              "Principal" : { "AWS" : [
                "arn:aws:iam::ACCOUNT_ID_A:user/CloudawareIAMUserA",
                "arn:aws:iam::ACCOUNT_ID_B:user/CloudawareIAMUserB"
              ] },
              "Resource" : [ "arn:aws:s3:::BUCKET_NAME" ],
              "Effect" : "Allow",
              "Action" : [ "s3:List*" ]
            },
            {
              "Principal" : { "AWS" : [
                "arn:aws:iam::ACCOUNT_ID_A:user/CloudawareIAMUserA",
                "arn:aws:iam::ACCOUNT_ID_B:user/CloudawareIAMUserB"
              ] },
              "Resource" : [ "arn:aws:s3:::BUCKET_NAME/*" ],
              "Effect" : "Allow",
              "Action" : [ "s3:Get*", "s3:Put*", "s3:Delete*" ]
            }
          ]
        }
      }
    }
  }
}

Integration Guides

Breeze Agent Installation Guide

Description

Breeze is an agent technology that streams OS-level data into CloudAware CMDB and seamlessly enables other CloudAware subscription services such as: Intrusion Detection (IDS), Patch Management, Vulnerability Scanning, CIS Benchmarking, Event Monitoring. Customers can also deploy additional plugins for Breeze allowing them to extend CMDB functionalities as well as to run their own services on hosts with Breeze enabled.

Key Design Goals

  • Simple deployment (installation via a single command)

  • Portability (run on any device with minimal network dependencies or no OS)

  • Low resource utilization (no breakdowns)

  • Extendibility (options of pluggable framework and self-upgrade to accommodate future requirements, allowing users to deploy their own plugins)

  • Reliable and reviewable security architecture (leverages the X.509 and SSL standards)

Supported Operating Systems

  • Amazon Linux (2012.09 and newer);

  • CentOS/RHEL (5, 6, 7);

  • SUSE (11, 12);

  • Debian (7, 8, 9);

  • Ubuntu (10.04, 12.04, 14.04, 16.04, 18.04);

  • Microsoft Windows Server (64-bit) (2008, 2008R2, 2012, 2012R2, 2016, 2019).

Required Network Dependencies

  • Breeze requires outbound internet access on the port TCP 443 only.

  • Breeze does not require any inbound connections and can be deployed on private networks and servers with no public IP addresses.

  • Breeze supports IPv4 and IPv6.

  • If you need to lock down outbound access to a specific IP address, contact your Technical Account manager.

  • On AWS, Azure and GCE, the instance metadata must be accessible to the Breeze agent (access to 169.254.169.254 should be granted).

Supported Breeze Subscription Services

  • IDS

  • Vulnerability Scanning

  • Patch Management

  • CIS Benchmarking

  • Event Monitoring

If a customer subscribes to any of the services mentioned above, they are enabled on every server after Breeze installation.

Breeze Agent Manual Installation

Linux
  1. If the Breeze agent was previously installed on a current instance, remove the directory /opt/breeze-agent with all its contents.

    # sudo rm -fR /opt/breeze-agent
  2. Download the agent installer from CloudAware web console (Admin → Breeze → SHOW INFO → Linux Agent) to the temporary directory. Run the following commands from the temporary directory:

    # tar xvzf breeze-agent*.tgz
    # cd breeze-agent
  3. Run the installation script:

    # sudo ./install.sh
  4. Verify that installation was correct:

    1. Check if the directory /opt/breeze-agent exists

      # ls /opt/breeze-agent
    2. Check if the cron job has been created, by running the following command:

      # cat /etc/cron.d/breeze-agent
Windows
  1. If the Breeze agent was previously installed on a current instance, remove the directory C:\Program Files\Breeze with all its contents. Administrator rights are required.

  2. Download the agent installer from CloudAware web console (Admin → Breeze → SHOW INFO → Windows Agent) to the temporary directory, then run it (*.exe file) from a user with administrator rights.

  3. Verify that installation was correct:

    1. Check if the directory C:\Program Files\Breeze exists.

    2. Check if the scheduler task has been created. Go to Start → Control Panel → Administrative Tools → Scheduler and verify the task named Breeze Agent exists.

Breeze Agent Automated Installation

Chef
  1. Clone the public-utilities repo to your server:

    git clone https://github.com/cloudaware/public-utilities.git
  2. Put your breeze agent installation files to the public-utilities/chef-modules/breeze-agent/files directory. Note that your files should be called breeze-agent-linux.tgz and breeze-agent-windows.exe.

  3. Copy breeze-agent cookbook to your cookbook directory and upload it to the server:

    cp -r public-utilities/chef-modules/breeze-agent ~/cookbooks/
    knife cookbook upload breeze-agent
  4. Create the breeze-agent role:

    export EDITOR=vim #any other editor can be selected, like nano for instance
    knife role create breeze-agent

    Once in the editor, replace everything with the next content and save:

    {
      "name": "breeze-agent",
      "description": "",
      "json_class": "Chef::Role",
      "default_attributes": {},
      "override_attributes": {},
      "chef_type": "role",
      "run_list": [ "recipe[breeze-agent]" ],
      "env_run_lists": {}
    }
  5. Add the role to the nodes that you need or to all nodes using your web interface or using the next command:

    knife node run_list add $NODE_NAME 'role[breeze-agent]' #Where $NODE_NAME is the name of the actual node

    To add the role to all of the nodes you can run:

    for node in `knife node list`;do knife node run_list add $node 'role[breeze-agent]';done;

The next chef-client run will apply the changes on the nodes; this will take some time.

Puppet
  1. Put Puppet module breeze_agent to the /etc/puppetlabs/code/environments/production/modules/.

  2. Put breeze agent installation files to the /etc/puppetlabs/code/environments/production/modules/breeze_agent/files directory.

  3. Attach breeze_agent class to the necessary group in the Puppet Dashboard.

  4. Add required variables breeze_package_linux and breeze_package_windows.

Ansible
  1. Put breeze agent installation files to the 'files' directory.

  2. Specify installation file name and hosts in breeze_agent_linux.yml and breeze_agent_windows.yml

    - hosts: linux
      vars:
        linux_agent: linux-breeze-agent.tgz
AWS Elastic Beanstalk

Installing the Breeze Agent on AWS Elastic Beanstalk can be done using .ebextensions configuration files. In this example we will use the EB CLI (command line interface) to deploy the new configuration.

  1. Upload the Breeze Agent installer file somewhere your Elastic Beanstalk environment can reach. We recommend using an S3 bucket with restricted access or the one created by Elastic Beanstalk (used in the example below).

  2. Create a configuration file in the .ebextensions directory located in your project directory.

    Windows-based environment:

    files:
      "C:\\breeze-agent.exe":
        source: https://elasticbeanstalk-us-east-1-123456789098.s3.amazonaws.com/breeze-agent.exe
        authentication: S3Auth
    
    commands:
      install_breeze:
        command: IF NOT EXIST "C:\Program Files\Breeze\app.bat" (C:\breeze-agent.exe)
    
    Resources:
      AWSEBAutoScalingGroup:
        Metadata:
          AWS::CloudFormation::Authentication:
            S3Auth:
              type: "s3"
              buckets: ["elasticbeanstalk-us-east-1-123456789098"]
              roleName:
                "Fn::GetOptionSetting":
                  Namespace: "aws:autoscaling:launchconfiguration"
                  OptionName: "IamInstanceProfile"
                  DefaultValue: "aws-elasticbeanstalk-ec2-role"

    Linux-based environment:

    files:
      "/tmp/breeze-agent.tgz":
        source: https://elasticbeanstalk-us-east-1-123456789098.s3.amazonaws.com/breeze-agent.tgz
        authentication: S3Auth
    
    commands:
      "install breeze agent":
        test: test ! -d /opt/breeze-agent
        command: tar -xf /tmp/breeze-agent.tgz -C /tmp && /tmp/breeze-agent/install.sh
    
    Resources:
      AWSEBAutoScalingGroup:
        Metadata:
          AWS::CloudFormation::Authentication:
            S3Auth:
              type: "s3"
              buckets: ["elasticbeanstalk-us-east-1-123456789098"]
              roleName:
                "Fn::GetOptionSetting":
                  Namespace: "aws:autoscaling:launchconfiguration"
                  OptionName: "IamInstanceProfile"
                  DefaultValue: "aws-elasticbeanstalk-ec2-role"

    This configuration file contains 3 sections: Files, Commands and Resources (AWS).

    Files section delivers Breeze Agent installer to the instance from S3 bucket.

    Commands section installs the Breeze Agent.

    The Resources section creates an authentication role that allows Elastic Beanstalk to access the bucket with the Breeze Agent installers.

    For more configuration options see AWS documentation.

  3. Deploy the application with the new .ebextensions config using the EB CLI.

    # eb deploy

Breeze Agent Installation via CloudAware (for Azure)

  1. Ensure the Azure user through which CloudAware has access to Azure has the following permission - Microsoft.Compute/virtualMachines/extensions/write (follow the instructions in Azure Custom Role Creating guide).

  2. Add the Breeze Agent installation button in your Organization (CMDB → Windows Azure → Virtual Machines).

    4
  3. Choose any Azure Virtual Machine you want the Breeze Agent to be installed on.

  4. Click on “MORE” button and choose the “BREEZE INSTALLATION” button.

    6
  5. Choose “Breeze Integration” and click on “INSTALL BREEZE” button.

    1

Proxy support

To enable a proxy on the Breeze agent you need to edit the startup script.

Linux
  1. Open the file /opt/breeze-agent/app.sh

  2. Add the following line before the string ruby ./app.rb

    export http_proxy="http://1.2.3.4:3128"
Windows
  1. Open the file C:\Program Files\Breeze\app.bat

  2. Add the following line before the string ruby app.rb >> agent.log 2>&1

    set http_proxy=http://1.2.3.4:3128

Breeze list of plugins

Enabled by default
  1. Block_devices: shows info about block devices on Linux, Windows.

  2. Facter: shows info about server parameters such as IP, hostname, package updates etc., on Linux, Windows.

  3. Tags: Azure specific plugin.

Optional
  1. Certificates: shows info about server SSL certificates (HTTP/SMTP/FTP/POP3/IMAP) on Linux, Windows.

  2. Open_ports: shows info about server open ports.

  3. Repos_info: shows info about package repositories on Linux only.

  4. Users: shows info about system users on Linux only.

Additional Breeze Plugins

| Plugin Name | Description | Type |
|-------------|-------------|------|
| Instance Facts | Retrieves basic information about the host | Discovery |
| AWS Facts | AWS-specific data including EC2 User Data | Discovery |
| Azure Facts | Azure-specific data | Discovery |
| Performance Data | Available memory, disk, processor models, etc. | Discovery |
| Storage, Mount Points, LVM | Provisioned vs. utilized storage | Discovery |
| OS Packages | All packages installed on the OS | Discovery |
| OS Upgradeable Packages | All upgradeable packages | Discovery |
| OS Users and Groups | All users and groups | Discovery |
| OS Package Repositories | All package repositories | Discovery |
| SSH Settings | All SSH settings | Discovery |
| Splunk | Shows Splunk version and status | Discovery |
| Apache Tomcat | Shows information about the Tomcat app server | Discovery |
| Apache ActiveMQ | Shows information about the ActiveMQ messaging server | Discovery |
| Apache Hadoop | | Discovery |
| Apache CloudStack | | Discovery |
| Apache Mesos | | Discovery |
| Microsoft SQL Server | Shows information about SQL Server | Discovery |
| Microsoft IIS Server | Shows information about IIS | Discovery |
| Microsoft Sharepoint | Shows information about Sharepoint | Discovery |
| HIDS OSSEC | Installs and configures a host-based intrusion detection agent | Agent |
| HIDS TrendMicro Deep Security | Shows agent version, status and last connect date | Discovery |
| Nessus | Installs, configures and registers the Nessus vulnerability scanning agent | Agent |
| Qualys | Installs, configures and registers the Qualys vulnerability scanning agent | Agent |
| NewRelic | Shows agent status, version and last connect timestamp, performance telemetry, incident statistics | Discovery |
| Nagios | Shows agent status, version and last connect timestamp, performance telemetry, incident statistics | Discovery |
| Pingdom | Shows agent status, version and last connect timestamp, performance telemetry, incident statistics | Discovery |
| Sensu | Shows agent status, version and last connect timestamp, performance telemetry, incident statistics | Discovery |
| StackDriver | Shows agent status, version and last connect timestamp, performance telemetry, incident statistics | Discovery |
| Wormly | Shows agent status, version and last connect timestamp, performance telemetry, incident statistics | Discovery |
| Datadog | Shows agent status, version and last connect timestamp, performance telemetry, incident statistics | Discovery |
| Solarwinds | Shows agent status, version and last connect timestamp, performance telemetry, incident statistics | Discovery |
| Zabbix | Installs, configures and registers the Zabbix monitoring agent | Agent |
| Chef | Shows agent status, version and last connect timestamp | Discovery |
| Puppet | Shows agent status, version and last connect timestamp | Discovery |
| Ansible | Shows agent status, version and last connect timestamp | Discovery |
| Yara | Runs custom Yara scans for hard-to-detect threats such as GrizzlySteppe and WannaCry | Command |
| ClamAV | Installs and deploys the Clam anti-virus agent | Agent |
| Oracle WebLogic | Discovers all data about WebLogic configuration | Discovery |
| Oracle MySQL | Discovers info about MySQL configuration | Discovery |
| PostgreSQL | Discovers info about PGSQL configuration | Discovery |
| IBM WebSphere | Discovers all data about WebSphere configuration | Discovery |
| Adobe Experience Manager | Discovers information about AEM configuration | Discovery |
| SAP Hybris | Discovers all data about SAP server configuration | Discovery |
| Magento Ecommerce | Discovers all data about Magento server configuration | Discovery |
| WordPress | CMS configuration | Discovery |
| Drupal | CMS configuration | Discovery |
| Joomla | CMS configuration | Discovery |
| Containers | Discovers information about Docker, Rocket and LXC containers | Discovery |
| GitHub | Discovers information about repos, users, branches, etc. | Discovery |

Architecture

1
  1. At the host level the Breeze agent runs every 15 minutes as a scheduled task on Windows machines and a cron task on Linux hosts.

  2. The Breeze agent connects to CMDB. During the connection, both parties verify each other using pre-created SSL certificates: the agent trusts only pre-configured SSL certificates, and CMDB only establishes connections with clients that present signed SSL certificates.

  3. Once CMDB recognizes the connecting client, it looks up the plugins and services available for this customer and provides them to the agent. For example, if a customer has subscribed to IDS, CloudAware will deploy the IDS plugin to the Breeze Agent.

CMDB tracks data about all hosts and timestamps of the Breeze agent’s connections to CMDB.

2

Matching and Sensing

The Breeze agent self-detects whether it is running on a physical server, an AWS EC2 instance, a Beanstalk or Azure instance. When the agent sends data to CMDB, CMDB attempts to match the agent data to a specific instance within the cloud provider.

If no match is found, CloudAware assumes that the agent is running on a non-cloud instance and creates a new entity called CloudAware Physical Server in CloudAware CMDB. If an AWS, GCE or Azure instance is matched, all the agent-based data is recorded into the existing record.

Similarly, the Breeze agent detects whether it is being executed inside a container (e.g. Docker); in this case the agent data is associated with the container record in CMDB.

Breeze troubleshooting

You can troubleshoot the Breeze agent automatically or manually:

Automatic
Linux

Run the troubleshooting script from the Breeze agent directory:

# cd /opt/breeze-agent/
# ./troubleshoot.sh
Windows
  1. Open command-line (Win+R → cmd)

  2. Run the troubleshooting script from the Breeze agent directory:

    C:\> cd C:\Program Files\Breeze\
    
    C:\Program Files\Breeze> troubleshoot.bat
If the troubleshooting script is missing from the Breeze agent directory, please update the Breeze agent to the latest version from the Cloudaware Admin Console.
Manual
Linux
  1. Check the log file /var/log/breeze-agent.log for errors. It should be empty if everything is working as expected.

  2. If the log file is empty, set the value of “log_level” parameter in the agent config file to “1” to increase the output verbosity level of Breeze agent

    1. cd /opt/breeze-agent/etc

    2. open agent.conf file and edit the value of “log_level” parameter as follows

      "log_level" : "1"
    3. run Breeze agent manually or wait for cronjob

      NOTE

      If the agent runs manually, the log data will be printed to STDOUT. If the agent runs by the cronjob, the log data will be printed to log file (/var/log/breeze-agent.log).

  3. Verify that the Breeze server is responding:

    1. telnet breeze-server.cloudaware.com 443

    2. curl -v https://breeze-server.cloudaware.com

      NOTE

      telnet will only work when direct internet connection is used. If you have proxy connection, please use curl.

Windows
  1. Check the log file C:\Program Files\Breeze\agent.log for errors. It should be empty if everything is working as expected.

  2. If the log file is empty, set the value of “log_level” parameter in the agent config file to “1” to increase the output verbosity level of Breeze agent

    1. go to the agent config directory C:\Program Files\Breeze\etc

    2. open agent.conf file and edit the value of “log_level” parameter as follows

      "log_level" : "1"
    3. run Breeze agent manually from Task Scheduler or wait for scheduled task and then check the log file (C:\Program Files\Breeze\agent.log).

  3. Verify that the Breeze server is responding:

    1. CMD

      telnet breeze-server.cloudaware.com 443
    2. PowerShell

      Launch PowerShell from the Start menu or press Win+R and type “PowerShell”. Then run the following command:

      [System.Net.WebRequest]::Create("https://breeze-server.cloudaware.com/").GetResponse() | select StatusCode,StatusDescription,ResponseUri | fl
      NOTE

      This is a single line command.

      If the server is reachable, the result will be as shown below:

      RESPOND:
      
      StatusCode        : OK
      StatusDescription : OK
      ResponseUri       : https://breeze-server.cloudaware.com/

Create a Custom Role in Microsoft Azure

The Azure built-in Reader role has no default access to the Storage Account keys, which are required for collecting data about VHDs, so a custom role should be created. You can create it using this How-To.

Name the new role CloudAware Custom Policy. This role will use the listKeys action, which grants read access to the keys:

"Microsoft.Storage/storageAccounts/listKeys/action"

If you are going to set up the Breeze Agent, you need one more action in your role: "Microsoft.Compute/virtualMachines/extensions/write".

To create the new role, use the JSON template below, filling in your subscription ID in place of {subscription_id}.

{
  "IsCustom": true,
  "Name": "CloudAware Collector Extended",
  "Description": "For collecting data about Blob Containers and VHDs we need to get access to the Storage Account keys as the default role Reader does not provide API access to these keys.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/extensions/write",
    "Microsoft.Storage/storageAccounts/listKeys/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscription_id}"
  ]
}
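As a sketch of the substitution step, the template can be rendered programmatically before it is uploaded through the Azure Portal or CLI (the helper function is illustrative, not part of Cloudaware):

```python
import json

# The role template from above, with a placeholder for the subscription ID.
ROLE_TEMPLATE = {
    "IsCustom": True,
    "Name": "CloudAware Collector Extended",
    "Description": "Grants access to Storage Account keys and VM extensions.",
    "Actions": [
        "Microsoft.Compute/virtualMachines/extensions/write",
        "Microsoft.Storage/storageAccounts/listKeys/action",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/{subscription_id}"],
}

def render_role(subscription_id: str) -> dict:
    """Return a copy of the role definition with the subscription ID filled in."""
    role = json.loads(json.dumps(ROLE_TEMPLATE))  # cheap deep copy
    role["AssignableScopes"] = [
        scope.format(subscription_id=subscription_id)
        for scope in role["AssignableScopes"]
    ]
    return role

if __name__ == "__main__":
    print(json.dumps(render_role("00000000-0000-0000-0000-000000000000"), indent=2))
```

The rendered JSON can then be saved to a file and created with the Azure CLI (az role definition create --role-definition @role.json) or pasted into the Portal.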

Here are the well-known guides of commonly used built-in roles:

Reader: acdd72a7-3385-48ef-bd42-f606fba81ae7
Contributor: b24988ac-6180-42a0-ab88-20f7382dd24c
Virtual Machine Contributor: d73bb868-a0df-4d4d-bd69-98a00b01fccb
Virtual Network Contributor: b34d265f-36f7-4a0d-a4d4-e158ca92e90f
Storage Account Contributor: 86e8f5dc-a6e9-4c67-9d15-de283e8eac25
Website Contributor: de139f84-1756-47ae-9be6-808fbbe84772
Web Plan Contributor: 2cc479cb-7b4d-49a8-b449-8c00fd0f0a4b
SQL Server Contributor: 6d8ee4ec-f05a-4a1d-8b00-a9b17e38b437
SQL DB Contributor: 9b7fa17d-e63e-47b0-bb0a-15c516ac86ec

In case a custom role already exists, you can use it as well. The JSON body of your role should look like the template below. Replace {your-existing-role-definition-id} with your role definition ID, and in the "assignableScopes" section add a "/subscriptions/{subscription-id}" string for each of your subscription IDs.

{
  "name": "{your-existing-role-definition-id}",
  "permissions": [
    {
      "actions": [
        "Microsoft.Compute/virtualMachines/extensions/write",
        "Microsoft.Storage/storageAccounts/listKeys/action"
      ],
      "notActions": []
    }
  ],
  "assignableScopes": [
    "/subscriptions/{subscription-id}",
    "/subscriptions/{subscription-id}",
    "/subscriptions/{subscription-id}"
  ],
  "roleName": "{your-role-name}",
  "roleType": "CustomRole",
  "type": "Microsoft.Authorization/roleDefinitions"
}

Then you will need to assign this custom role to a user if you are adding a Native application, or to the application itself if you are adding a Web app/API.

Creating a custom role in Azure Portal is an asynchronous operation, so a time lag may occur between the creation of a role and the time when it becomes available.

By performing this action, you confirm that the appropriate user is granted access to your virtual machines, including potential data modification.

Updating an Existing CloudAware Custom Policy

Cloudaware regularly introduces new capabilities which require the addition of new actions and permissions. If a CloudAware custom role already exists, you can update this role once without re-creating it for every subscription. If updating an existing CloudAware Custom Policy role is required, your Technical Account Manager will provide you with instructions on how to perform this action.

Creating a custom role in Azure Portal is an asynchronous operation, so a time lag may occur between the creation of a role and the time when this role becomes available.

AWS and AWS Gov Cloud

Heroku

Generate Heroku API keys

  1. Log in to Heroku.com and select your profile picture in the top right corner

    img
  2. Scroll down to the API Key section and click the Reveal button

    img
  3. Copy the value of the API Key field.
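Before adding the key to Cloudaware, it can be smoke-tested against the Heroku Platform API (v3), which authenticates with a Bearer token; a minimal sketch that only builds the request (the actual GET is left as a comment):

```python
# Build the headers required by the Heroku Platform API (v3).
def heroku_headers(api_key: str) -> dict:
    return {
        "Accept": "application/vnd.heroku+json; version=3",
        "Authorization": f"Bearer {api_key}",
    }

ACCOUNT_URL = "https://api.heroku.com/account"

if __name__ == "__main__":
    print(heroku_headers("paste-your-api-key-here"))
    # e.g. requests.get(ACCOUNT_URL, headers=heroku_headers(key))
    # returns your account details if the key is valid.
```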

Add Cloudaware integration with Heroku

  1. Log in to your Cloudaware account and select your profile picture in the top right corner

    img
  2. Navigate to the Admin section

    img
  3. Scroll down to Heroku Accounts and select Add

    img
  4. Provide Account details and click Save:

    • Project Name (can be any)

    • API Key (Paste value generated earlier)

      img

Slack

Getting Started

Cloudaware supports two types of Slack integration: the Slack Cloudaware Application and an incoming Slack webhook URL.

Possible use cases:

  • 30-Day spending exceeds certain amount

  • IDS detects an attack against an instance

  • Server health deteriorates

  • Incident management

Slack Cloudaware Application

Use case: You would like to be notified about any high or critical vulnerability detected.

Creating Application In Cloudaware
  1. Log in to your Cloudaware account → Admin

    screenshot
  2. Select Slack CloudAware Application in Other integrations. Click +Add

    screenshot
  3. Name your integration → Save

    screenshot
  4. Click Add to Slack

    screenshot
  5. Grant the CloudAware App the necessary permissions and select a Slack channel the notifications will be posted to. Click Allow

  6. You will be notified via email and a Slack channel that the CloudAware App has been set up for your Slack Workspace

  7. The green light in 'Status' means that Slack CloudAware Application has been created successfully. If there is a red light, please contact support@cloudaware.com

    screenshot
Configuring Slack Notifications
  1. Select Slack Notification in Other integrations. Click +Add

    screenshot
  2. Select 'Integrated Slack Application'. Fill in the integration details* selecting the app you set up before. Click Save

    screenshot
    • Name, Message are mandatory fields.

      In our use case we are creating the Upgrade Request notification. Use sObject.CA10__risk__c where CA10__risk__c is API Name of the object.

      Use Fields List to determine fields you’d like to exclude from being displayed in your Slack message (Name in this example).

      Object Name, Message, Color, Url Base are expression fields. The data type of the value returned depends on the elements used in the expression. Here are sample outputs:

      Expression                                                Output

      test                                                      test
      sObject.Name                                              ‘the name of this field in your org’
      ‘sObject.Name’                                            sObject.Name
      ‘Hey, ’ + sObject.Name                                    Hey, ‘the name of this field in your org’
      Hey, +sObject.Name                                        Hey, +sObject.Name
      ‘good’                                                    good
      sObject.colour__c                                         ‘the value of this field in your org’
      sObject.CA10__severity__c == ‘High’ ? ‘danger’ : ‘good’   danger/good

  3. Copy and save the URL as it is required for further configuration in Cloudaware (see Configuring Cloudaware Workflow To Invoke Slack Action).
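The expression semantics above can be mirrored in Python as a sketch: quoted text is a literal, an unquoted sObject.<field> is a field reference, and a ternary picks a color by severity (the record and field names are illustrative, following the examples above):

```python
# A record standing in for the sObject referenced by the expressions.
record = {"Name": "prod-web-01", "CA10__severity__c": "High"}

# ‘good’                  -> a literal string
literal = "good"

# sObject.Name            -> the value of the field in your org
field_value = record["Name"]

# ‘Hey, ’ + sObject.Name  -> concatenation of a literal and a field value
greeting = "Hey, " + record["Name"]

# sObject.CA10__severity__c == ‘High’ ? ‘danger’ : ‘good’
color = "danger" if record["CA10__severity__c"] == "High" else "good"

print(literal, field_value, greeting, color)
```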

Incoming Slack Webhook URL

Use case: You would like to get a notification in a Slack channel when a package upgrade is requested by your customer (custom object: CloudAware Upgradable Package).

Webhook Configuration In Slack
  1. Enter your Apps Directory in Slack → Incoming Webhooks → toggle 'Activate Incoming Webhooks' to On → Add New Webhook to Workspace (see more here)

    screenshot
  2. Grant your App necessary permissions and select a Slack channel the notifications will be posted to. Click Allow

    screenshot
  3. Copy the Webhook URL and save it for later use

    screenshot
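The saved webhook URL can be smoke-tested before wiring it into Cloudaware; Slack incoming webhooks accept a JSON body with a text field, and wrapping the text in * renders it bold. A sketch that builds the body (the send itself is shown commented out; the URL is a placeholder):

```python
import json

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # your copied URL

def build_message(text: str, bold: bool = False) -> str:
    """Build the JSON body for a Slack incoming webhook."""
    if bold:
        text = f"*{text}*"  # Slack renders *text* as bold
    return json.dumps({"text": text})

if __name__ == "__main__":
    body = build_message("Upgrade requested", bold=True)
    print(body)
    # import urllib.request
    # req = urllib.request.Request(WEBHOOK_URL, data=body.encode(),
    #                              headers={"Content-Type": "application/json"})
    # urllib.request.urlopen(req)  # posts the message to the channel
```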
Adding Slack Webhook To Cloudaware
  1. Log in to your Cloudaware account → Admin

  2. Select Slack Notification in Other integrations. Click +Add

    screenshot
  3. Select 'Slack Webhook URL'. Fill in the integration details* → Save

    screenshot
    • Name, Slack Webhook URL, Message are mandatory fields.

      In our use case we are creating the Upgrade Request notification (replace Test with the customer name). To understand which instance the package needs to be upgraded on, insert sObject.CA10__Instance__c where CA10__Instance__c is the API Name of the field on the object (Cloudaware Upgradable Package in this example).

      Use Fields List to determine fields you’d like to exclude from being displayed in your Slack message (Name in this example).

      Object Name, Message, Color, Url Base are expression fields. The data type of the value returned depends on the elements used in the expression. Here are sample outputs:

      Expression                                                Output

      test                                                      test
      sObject.Name                                              ‘the name of this field in your org’
      ‘sObject.Name’                                            sObject.Name
      ‘Hey, ’ + sObject.Name                                    Hey, ‘the name of this field in your org’
      Hey, +sObject.Name                                        Hey, +sObject.Name
      ‘good’                                                    good
      sObject.colour__c                                         ‘the value of this field in your org’
      sObject.CA10__severity__c == ‘High’ ? ‘danger’ : ‘good’   danger/good

      You can apply standard Slack formatting in the Message field. In our use case we make the message bold by wrapping its text in * characters.

  4. The green light in 'Status' means that Slack Notification Webhook has been configured successfully. If there is a red light, please contact support@cloudaware.com

    screenshot
  5. Copy and save the URL as it is required for further configuration in Cloudaware (see Configuring Cloudaware Workflow To Invoke Slack Action)

Configuring Cloudaware Workflow To Invoke Slack Action

  1. In your Cloudaware account click Setup in the main menu

  2. In the Quick Find box start typing workflows to select Workflows & Approvals → Workflow Rules → New Rule

    screenshot
  3. Select the object for the rule to be applied to (in our case CloudAware Upgradable Package) and click Next

  4. Add Rule Name (1), set Evaluation Criteria (2) and Rule Criteria (3)*. Add Filter Logic if necessary. Click Save & Next

    screenshot

    *In our use case we are creating a workflow rule that will send out alerts when the field 'Upgrade' (checkbox) is true on the CloudAware Upgradable Package object.

  5. Add Workflow Action → New Outbound Message. Paste the copied Endpoint URL. Click Save

    screenshot
  6. Review the workflow. Click Done to activate it

  7. Once this workflow rule is triggered, the following Slack notification may be sent:

    screenshot

New Relic

Follow these steps to integrate New Relic account in Cloudaware.

  1. Sign in to your org and choose Admin tab.

    1
  2. Find New Relic in the list of Monitoring tools and click Add.

    1
  3. On a new page there is a link explaining how to generate your New Relic API key.

  4. Then fill in the name and the API key and click Add.

    1
  5. The green light in Status means the New Relic account has been successfully added. If there is a red light, please contact support@cloudaware.com.

    1

New Relic Insights

New Relic Insights allows you to collect real-time data about utilization of your resources.

Getting Account ID

The six-digit Account ID can be found in the URL in your browser's address bar.

1
Getting Query Key
  1. Go to Insights.

    1
  2. Click Manage data.

    1
  3. Select API Keys and check the Query Keys section. Click Show and copy the key.

    1
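With the Account ID and Query Key in hand, the credentials can be checked against the (legacy) Insights Query API, which takes an NRQL query and an X-Query-Key header; a sketch that builds the request (the GET itself is left to the reader):

```python
from urllib.parse import urlencode

def insights_query_request(account_id: str, query_key: str, nrql: str):
    """Return the URL and headers for a New Relic Insights query."""
    url = (f"https://insights-api.newrelic.com/v1/accounts/"
           f"{account_id}/query?" + urlencode({"nrql": nrql}))
    headers = {"X-Query-Key": query_key, "Accept": "application/json"}
    return url, headers

if __name__ == "__main__":
    url, headers = insights_query_request(
        "123456", "your-query-key",
        "SELECT average(cpuPercent) FROM SystemSample SINCE 30 minutes ago")
    print(url)  # GET this URL with the headers above
```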
Adding Credentials to Cloudaware
  1. Log in to Cloudaware and go to Admin.

    1
  2. Scroll down to Other integrations and select New Relic Insights. Click Add.

    1
  3. Type the Name for your Integration (for example, Cloudaware). Type your New Relic Account ID and Query Key. Click Save.

    1
  4. The green light in Status means the New Relic Insights integration has been successfully added. If there is a red light, please contact support@cloudaware.com.

    1

CloudWatch

Zabbix

Zabbix API

The Zabbix API is a web-based API and is always enabled, as it is part of the Zabbix web frontend. The API is available via the following URL: https://zabbix.company.com/api_jsonrpc.php
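As a sketch, authenticating against this endpoint uses the JSON-RPC 2.0 user.login method; the body posted to api_jsonrpc.php looks like this (credentials are placeholders, and note that newer Zabbix releases rename the user parameter to username):

```python
import json

API_URL = "https://zabbix.company.com/api_jsonrpc.php"

def login_payload(user: str, password: str) -> str:
    """Build the JSON-RPC 2.0 body for the Zabbix user.login method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "user.login",
        "params": {"user": user, "password": password},
        "id": 1,
    })

if __name__ == "__main__":
    print(login_payload("api", "secret"))
    # POST this body to API_URL with Content-Type: application/json-rpc;
    # the "result" field of the response is the auth token for later calls.
```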

CloudAware Zabbix Integration

  1. Create a new Zabbix user called api and add it to the group No access to the frontend

    1
  2. Change the permissions to Zabbix Super Admin

    1
  3. Add Zabbix Integration in CloudAware:

    1

The source IP address can be provided upon request. Contact your technical account manager via tam@cloudaware.com.

SolarWinds

  1. In your SolarWinds account create a custom property called AwsInstanceId based on Nodes. Choose Text as the format.

    add custom property
  2. Choose a node you are going to add to Cloudaware and fill in the value: it should be the instance ID of the appropriate node, e.g. i-11ab222cd.

    example
  3. Afterwards this custom property will be added to the list of all custom properties.

    Please note that the custom property value (AwsInstanceId - i-11ab222cd) should be added to all the nodes you are going to add to Cloudaware.
    custom property to all nodes
  4. In your Cloudaware org choose the Admin tab, find SolarWinds in the list of Monitoring tools and click Add.

    choose solarwigs
  5. Fill in all the fields and click Add. The URL should specify TCP port 17778, which is the default port for the SolarWinds Information Service.

    add details
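The custom property can be checked through the SolarWinds Information Service (SWIS) REST endpoint on the same port 17778; a sketch building a SWQL query request (host name and query are illustrative):

```python
from urllib.parse import urlencode

def swis_query_url(host: str) -> str:
    """Build a SWIS REST query URL listing nodes with the custom property set."""
    swql = ("SELECT NodeID, Caption, CustomProperties.AwsInstanceId "
            "FROM Orion.Nodes "
            "WHERE CustomProperties.AwsInstanceId IS NOT NULL")
    return (f"https://{host}:17778/SolarWinds/InformationService"
            f"/v3/Json/Query?" + urlencode({"query": swql}))

if __name__ == "__main__":
    # GET this URL with HTTP basic auth (your SolarWinds credentials).
    print(swis_query_url("solarwinds.example.com"))
```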

Chef

  1. Sign in to your org and choose Admin tab.

    1
  2. Find Chef in the list of DevOps tools and click Add.

    1
  3. On a new page fill in all the required information and click Add.

    1

    Bucket region is shown in bucket properties

    File prefix is shown in bucket properties if this feature is enabled

  4. The green light in Status means Chef has been successfully added. If there is a red light, please contact support@cloudaware.com.

    1

Puppet

Prerequisites

  • Make sure that the Puppet server has Internet access.

  • Make sure that a git client is installed on the Puppet server.

  • Make sure that mcollective is working. Run the mco ping command as root.

If you cannot run this command as the root user, add a symlink with the following command: ln -s /var/lib/peadmin/.mcollective /root/.mcollective

Configure Puppet Server

  1. Log in to your Puppet server via SSH and become root: sudo su -

  2. Clone repository from GitHub:

  3. Copy the cloudaware module to the directory with your Puppet modules.

    We use the ‘production’ environment. Adjust the path below for your own environment when copying the Cloudaware module.

    cp -r public-utilities/cloudaware /etc/puppetlabs/code/environments/production/modules

  4. Go to your Puppet dashboard and create a cloudaware group:

    1
  5. Click on the newly created cloudaware group and pin your Puppet server node on the Rules tab

    1
  6. Go to the Classes tab and add the cloudaware::facts2ca class:

    1
  7. Add parameters for the cloudaware::facts2ca class and click the Commit changes button. If you use an IAM role, you do not need to specify the facts2ca_access_key and facts2ca_secret_key parameters:

    1
  8. Now run the Puppet agent on the Puppet server node from the dashboard or the Linux console:

    puppet agent -vt

  9. To check that everything is working, run the mco facts2ca command. After the command finishes, you will see files with facts in your S3 bucket.

    mco facts2ca

Cloudaware configuration

  1. Sign in to your org and choose Admin tab.

    1
  2. Find Puppet Facts in the list of DevOps tools and click Add.

    1
  3. On a new page fill in the required information and click Add.

    1

    Bucket region is shown in bucket properties

    File prefix is shown in bucket properties if this feature is enabled

  4. The green light in Status means Puppet has been successfully added. If there is a red light, please contact support@cloudaware.com.

    1

Ansible

  1. To begin, run this playbook, having first specified the required parameters as shown in the Readme doc.

  2. Sign in to your org and choose Admin tab.

    1
  3. Find Ansible Facts in the list of DevOps tools.

    1
  4. On a new page fill in all the required information and click Add.

    1

    Choose the AWS Account from the list of added AWS accounts

    Specify the bucket name which contains the Ansible facts

    Bucket region is shown in bucket properties

    File prefix is shown in bucket properties if this feature is enabled

  5. The green light in Status means Ansible facts have been successfully added. If there is a red light, please contact support@cloudaware.com.

    1

Jira

Integration Capabilities

Cloudaware offers several key capabilities for integrating with Atlassian JIRA:

  • Autodiscovery of issues related to CMDB objects and importing them from JIRA to Cloudaware

  • Creating a JIRA issue whenever specific criteria are met in Cloudaware (e.g. new incident, policy violation, vulnerability scan, etc.) and posting a comment to an existing JIRA issue (e.g. a new vulnerability has been detected as fixed, etc.)

Configuring Autodiscovery of JIRA Issues and Importing to CMDB

Adding JIRA Integration to Cloudaware

Cloudaware can discover your JIRA issues and automatically add them to the corresponding CMDB objects.

img

Follow these steps to integrate your JIRA account with Cloudaware:

  1. Log in to your Cloudaware account → Admin

    img
  2. Find JIRA in the list of Issue Management tools, click +Add

    img
  3. Fill in the required information, click Save

    img

    *URL - insert your Company JIRA URL from the browser address bar, e.g. http(s)://jira.cloudaware.com

    **Password - use a token instead of a password if you are using cloud version of JIRA

    ***Trust Certificate - check this box only if your JIRA runs on a private network and the tunneling has been set up by Cloudaware

  4. The green light in 'Status' means your JIRA account has been successfully added. If there is a red light, please contact support@cloudaware.com

    img
Adding Custom Fields to JIRA

For Cloudaware to associate a JIRA issue with a CMDB object, a JIRA issue should have two custom fields:

  • Object Type

  • Object Identifier

In order to view a list of CMDB objects, navigate to Setup → type Objects in the Quick search bar.

img
img

The Object Identifier is the cloud provider ARN for that object, for example:

Object Type: AWS ELB Load Balancer

Object Identifier: arn:aws:elasticloadbalancing:us-east-1:231469678781:loadbalancer/admin-s1-Elb-122VUH2MDDWYO

Add the two fields to JIRA issues using the instructions provided by Atlassian here.

The custom JIRA field you create should be a text/string.
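Besides the UI instructions, text custom fields can also be created through JIRA's REST API (v2, POST /rest/api/2/field); a sketch of the request bodies for the two fields (the description text is illustrative):

```python
import json

def text_field_payload(name: str) -> str:
    """Body for POST /rest/api/2/field creating a text custom field."""
    return json.dumps({
        "name": name,
        "description": f"Cloudaware CMDB link: {name}",
        "type": "com.atlassian.jira.plugin.system.customfieldtypes:textfield",
        "searcherKey":
            "com.atlassian.jira.plugin.system.customfieldtypes:textsearcher",
    })

if __name__ == "__main__":
    # POST each body to https://your-jira/rest/api/2/field as an admin user.
    for field in ("Object Type", "Object Identifier"):
        print(text_field_payload(field))
```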

Configuring Automatic Creation of JIRA Issues from Cloudaware

Cloudaware can send an outbound message to JIRA whenever specific criteria are met, which will automatically create a JIRA issue in the appropriate JIRA project. Cloudaware can also post a comment to your JIRA issue once the conditions that triggered the ticket creation change.

Let’s review the following use case. Your company’s security team wants a JIRA issue to be created in a specific project every time a high-risk vulnerability is found by Cloudaware. However, to avoid lots of manual work checking and closing JIRA issues, they also need to be notified when a vulnerability has been recognized as fixed. A comment posted to the issue may be a good marker for bulk-closing tickets.

Manage Permissions and Create Fields in JIRA
  1. Grant Cloudaware the user/project ADD_COMMENTS permission (see JIRA documentation).

  2. Log in to your Cloudaware account → Setup → Objects.

  3. Select an object you want JIRA issues to be created for (in this example, CloudAware Vulnerability Scan).

  4. Review the section 'Custom Fields & Relationships' to define fields that should be displayed in an issue’s description. In our use case we are using the following fields: Priority, Risk, Severity, CVSS Number, Host, Port, Protocol, Description, Disappearance Time.

Use Cloudaware Field Label names when creating fields in your JIRA, as field names MUST match.
Setting Up JIRA Notifications Integration in Cloudaware
  1. Log in to your Cloudaware account → Admin.

  2. Find JIRA Notifications in the list of Issue Management tools, click +Add.

  3. Fill in the integration details:

    Name - name for your JIRA Notification integration

    JIRA Integration - select the pre-configured JIRA integration.

    Project - select your JIRA project issues will be created in.

    Issue Type - select the issue type (managed in your JIRA).

    Summary - add summary for your JIRA issues. In this example we use sObject.Name to display full CloudAware Vulnerability Scan name.

    Description - add a description that will be added into JIRA issue body.

    Comment - add text that will be displayed as a Cloudaware comment. In our use case we are using sObject.CA10__disappearanceTime__c where CA10__disappearanceTime__c is the API Name of the field showing the date and time when the vulnerability was deleted from the scanner.

    Pay attention to using expression fields. The data type of the value returned depends on the elements used in the expression. Here are sample outputs:
    Expression                  Output

    test                        test
    sObject.Name                ‘the name of this field in your org’
    ‘sObject.Name’              sObject.Name
    ‘Hey, ’ + sObject.Name      Hey, ‘the name of this field in your org’

    Field List - determine the fields you’d like to be displayed in a JIRA issue and Cloudaware comment.

    Use API Names of fields. The integration must have all fields you are planning to use in notifications. The sequence you choose for the fields' order will be reflected in a JIRA issue.
    img
  4. Click Save.

  5. Copy and save the URL as it is required for further configuration in Cloudaware.

    img
Configuring Cloudaware Workflows to Create Issues and Post Comments in JIRA

Having one JIRA Notifications Integration configured, you will need to set up two different workflow rules - for creating a JIRA issue (1) and adding a comment (2).

  1. Workflow Rule for JIRA issue creation:

    • From Cloudaware Admin go to Setup → Create → Workflows & Approvals → Workflow Rules → New Rule:

      img
    • Select the object. In our use case we are using CloudAware Vulnerability Scan. Click Next.

    • Add Rule Name, set Evaluation Criteria and Rule Criteria as shown below:

      img
    • Click Save & Next.

    • Add Workflow Action → New Outbound Message:

      Object: CloudAware Vulnerability Scan

      Name: Jira Notification: New Vulnerability Detected

      Endpoint URL: paste the URL copied from the integration’s details

    • Select the fields to be displayed as set up in the Integration details:

      img
    • Click Done. Click Activate to activate your workflow.

  2. Workflow Rule for posting a comment in JIRA issue:

    • Go back to Workflow Rules → New Rule.

    • Select the object. In our use case we are using CloudAware Vulnerability Scan. Click Next.

    • Add Rule Name, set Evaluation Criteria and Rule Criteria as shown below:

      img
    • Click Save & Next.

    • Add Workflow Action → New Outbound Message:

      Object: CloudAware Vulnerability Scan

      Name: Jira Notification: Vulnerability Fixed

      Endpoint URL: paste the URL copied from the integration’s details

    • Select the fields to be displayed as set up in the Integration details:

      img
    • Click Done. Click Activate to activate your workflow.

JIRA issue sample:

img

Cloudaware comment sample:

img
Bulk Closing Issues with Cloudaware Comments in JIRA

Based on our use case, we can consider all issues with a comment as not requiring further action, since the vulnerability the JIRA issue reports has been fixed.

Follow these steps to configure automatic change of issue status to 'Resolved':

  1. In your Service Desk project select Project settings → Automation.

  2. Select Add rule.

  3. Select Custom rule from the list, then select Next.

  4. Give your custom rule a name and a description.

  5. Configure your rule by defining the WHEN, IF, and THEN fields:

    5.1. When comment added

    5.2. If comment contains 'This vulnerability is fixed and deleted on'

    5.3. Then transition issue to status "Resolved"

    img

ServiceNow

Introduction

Cloudaware is a cloud management platform with extensive support for Amazon Web Services, ranging from cost management to security and usage analytics. Cloudaware supports all currently available Amazon Web Services.

Cloudaware is built on top of the force.com environment and is a native force.com application. Cloudaware is capable of both pushing data on demand into ServiceNow and acting as a repository that ServiceNow can poll on a regular basis.

While there are dozens of off-the-shelf tools offering integration between ServiceNow and Salesforce, none of them are required for Cloudaware\Salesforce and ServiceNow to exchange data.

Architecture

1

This architecture can support the following use cases:

  • Cloudaware forwards change requests into ServiceNow

  • ServiceNow retrieves a list of violations from Cloudaware and saves them as incidents.

The following SOAP roles are supported on both sides.

soap

Can perform all SOAP operations.

soap_create

Can insert new records.

soap_delete

Can delete existing records.

soap_ecc

Can query, insert, and delete records on the ECC queue.

soap_query

Can query record information.

soap_query_update

Can query record information and update records.

soap_script

Can run scripts that specify a .do endpoint.

soap_update

Can update records.

ServiceNow Initiating

This is by far the most common way Cloudaware data is populated into the CMDB. It allows ServiceNow administrators to have fine-grained control over the mapping of objects collected from Cloudaware into ServiceNow. The steps to configure this are as follows:

  1. Create a user in Cloudaware with Standard User Profile. Name the user appropriately e.g. ServiceNow Collector

  2. Export Partner WSDL from Cloudaware\Salesforce

  3. Upload WSDL to ServiceNow System SOAP Outbound Messages

  4. Configure scripts to export the desired data from Cloudaware into ServiceNow

Create Salesforce User

Create a user using the instructions described here.

Export Partner WSDL
1

Save generated WSDL XML file to your workstation.

Upload WSDL To Service Now
1

Once in the new Outbound SOAP Message

1

If the WSDL has been imported successfully, you will see SOAP Message Functions. The number of SOAP message functions can change over time.

1
Getting Cloudaware\Salesforce SessionID

All data retrieval operations from Cloudaware begin with obtaining a session ID. Nothing can be retrieved from Cloudaware until the client has a session ID. To obtain one, the client must execute the login outbound SOAP message.

Find the login outbound message under the cloudaware partner WSDL from the step above and click on it.

1

Once in the login message.

1

After clicking test, you should receive an HTTP 200 response.

1

Do not proceed any further if you’re not able to get a 200 HTTP status.

Querying and Inserting Data

If you are able to log in, you can now retrieve any data stored in Cloudaware\Salesforce into any ServiceNow object.

For example you can retrieve objects such as:

  • EC2 Load Balancers as EC2 Load Balancer in ServiceNow

  • Cloudaware Policy Violations as ServiceNow Incidents

  • Cloudaware Changes as ServiceNow Change Requests

Under Outbound SOAP Messages, find the query SOAP message

1

Once inside the query message, fill out the query string and click Test.

1

You can use different queries to extract different pieces of data from Cloudaware. Here are some examples.

SELECT Id, Name, CA10__subject__c FROM CA10__CaPolicyViolation__c LIMIT 10

SELECT Id, Subject FROM Case

SELECT Id, Name, CA10__accountId__c FROM CA10__CaAwsAccount__c

SELECT Id, Name, CA10__arn__c, CA10__mfaEnableDate__c, CA10__lastLoginDateTime__c FROM CA10__CaAwsUser__c

Cloudaware Initiating

Cloudaware can also initiate API calls to ServiceNow from inside Cloudaware workflows, triggers and actions. For example, when a change request is made in Cloudaware, an outbound message within Cloudaware can submit it in real time to ServiceNow.

The outbound message functionality of Cloudaware is described here.

In contrast to the previous example, Inbound SOAP Messages would now be configured in ServiceNow and referenced in Cloudaware.

Tenable Nessus

CMDB Tenable Nessus

Introduction

Cloudaware is a multi-sourced CMDB built on top of the force.com platform and its custom object functionality. It supports bare metal, virtualized infrastructure and leading IaaS cloud providers:

  • Amazon Web Services

  • Microsoft Azure

  • Google Compute Engine

  • Heroku

Besides integrations with leading computing infrastructure cloud providers, Cloudaware also supports over 50 other API integrations with common enterprise and security tools, including Tenable Nessus. These integrations give Cloudaware users a single point of access to cross-referenced data from multiple data sources.

Common Use Cases
  • Identify hosts that exist according to cloud provider but are not registered with Nessus

  • Instruct Nessus to scan newly discovered hosts

  • Route scan results or take actions based on any instance property stored in CMDB such as tag, team, project, application, etc.

  • Analyze, report and group scan results data by team, business unit, account and other custom dimensions

  • Measure and report vulnerability resolution times by team, business unit or based on any other data in CMDB.

  • Adjust vulnerability severity based on cloud provider data such as presence of public IP address, VPC, subnet, inbound and outbound security group rules

  • Execute custom workflows to open tickets in JIRA, Service Cloud or send notifications via Slack, Hipchat, Quip or other collaboration platforms

  • Provide convenient access to vulnerability scan results, patch, compliance, network security all in one place

  • Establish risk profile and assess threats faster and more accurately


Leverage Salesforce capabilities such as workflows, dashboards, reports and custom fields to meet specific IT management needs.
Data about non-cloud infrastructure is loaded into Cloudaware via DevOps integrations (Chef, Puppet, Ansible or Breeze).

Integration Modes

Cloudaware supports two integration modes with Tenable Nessus:

Read Only Mode

In Read Only mode, a Cloudaware collector is deployed to the customer's Nessus servers. The collectors periodically dump scan results into an AWS S3 or Google Storage bucket of the customer's choice. The Nessus collector on the CMDB (Salesforce) side consumes data from the bucket and downloads scan results. Cloudaware then attempts to match scan results with inventory in the CMDB, such as AWS EC2 Instances and MS Azure Virtual Machines.

When the Cloudaware API collector retrieves scan results from Nessus, it performs the following key steps:

  • Insert the vulnerability record into Cloudaware

  • Link the record to an infrastructure resource, e.g. an AWS EC2 Instance

  • Update infrastructure resource record properties, e.g. Last Scan Date

Nessus vulnerability scan results are copied in their entirety as a record in Cloudaware. Cloudaware Vulnerability Scan Result object schema is available here.

Once Cloudaware inserts and links the record, it updates custom fields on the infrastructure resource object, such as AWS EC2 Instance. These custom fields are:

  • Last Scan Date

  • Last Scan Result

  • Vulnerability Open, Critical

  • Vulnerability Open, High

  • Vulnerability Open, Medium

  • Vulnerability Open, Low


Read/Write Mode

In Read/Write mode, Cloudaware can become the authoritative source on which hosts need to be scanned and when. Since Cloudaware has the most up-to-date inventory of scannable infrastructure, it presents users with both manual and workflow-based ways to mark hosts for scanning. Read/Write mode makes the following use cases possible:

  • User manually marks host as scannable

  • User creates field update workflow to mark hosts as scannable based on host attributes such as VPC, Public IP Address or Tag Value.

The API layer periodically runs on the Nessus server and queries the CMDB for the latest inventory of scannable hosts. After each iteration, the API layer registers or de-registers agents to match the scannable inventory within the CMDB.

Additional Cloudaware API Layer software will need to be installed on the Nessus server in order to support Read/Write mode.
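The reconciliation the API layer performs can be sketched as a set difference between the scannable inventory in the CMDB and the agents currently registered in Nessus (function and host names are illustrative, not part of the actual API layer):

```python
def reconcile(scannable_in_cmdb: set, registered_agents: set):
    """Return (to_register, to_deregister) so Nessus matches the CMDB."""
    to_register = scannable_in_cmdb - registered_agents
    to_deregister = registered_agents - scannable_in_cmdb
    return to_register, to_deregister

if __name__ == "__main__":
    cmdb = {"i-0a1", "i-0b2", "i-0c3"}     # hosts marked scannable in CMDB
    nessus = {"i-0b2", "i-0d4"}            # agents registered in Nessus
    print(reconcile(cmdb, nessus))
```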

Deploying Nessus Agents

By cross-referencing cloud provider and Tenable Nessus data, Cloudaware not only detects hosts that are missing Nessus agents but also offers deployment functionality to make it easier for provisioning teams to install Nessus agents with minimal effort. Cloudaware leverages popular DevOps technologies such as Chef, Puppet, Ansible and Breeze to provide cookbooks, recipes, playbooks and plugins that make Nessus agent deployment easy and scalable. Cloudaware-provided Nessus installation DevOps plugins offer several key capabilities:

  • Installation plugins not only install and uninstall the Nessus agent but can also register and deregister it

  • Registration plugins can load balance registrations across multiple Nessus servers depending on availability of licenses, Nessus server proximity and current load

  • Installation plugins can query CMDB to decide at runtime whether to deploy Nessus agent or not based on flags set in CMDB

Actual deployment steps for Chef, Puppet, Ansible or Breeze are outside of the scope of this document. See your technical account manager for details.
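As an illustration of the runtime CMDB check mentioned above, a deployment plugin could gate agent installation on a flag in the host's CMDB record. The sketch below is hypothetical; the `scannable` field name and the lookup interface are assumptions, not Cloudaware's actual API:

```python
# Hypothetical sketch of a deploy-time check: consult CMDB for the host's
# record and deploy the Nessus agent only if its "scannable" flag is set.
# The CMDB lookup is stubbed out; a real plugin would call the CMDB API.

def should_deploy_agent(host_id, cmdb_lookup):
    """Return True if CMDB marks this host as scannable."""
    record = cmdb_lookup(host_id)
    return bool(record and record.get("scannable"))

# Stub standing in for a real CMDB query.
fake_cmdb = {"i-0abc": {"scannable": True}, "i-0def": {"scannable": False}}
deploy = should_deploy_agent("i-0abc", fake_cmdb.get)
# deploy -> True
```

The same predicate can be wrapped as a Chef guard, Puppet fact, or Ansible `when:` condition so the decision happens at run time rather than at playbook-authoring time.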

CIS Benchmarks

Cloudaware supports CIS Benchmark scan results. Similar to standard scan results, CIS results are inserted into Cloudaware and linked to matching infrastructure resources. Cloudaware enhances CIS functionality from Nessus by visually rendering the data in a more user-friendly and intuitive way. Additionally, users can create Salesforce style reports and dashboards using CIS Benchmark scan result records.

Matching

Cloudaware will attempt to match scan results to a cloud infrastructure record in CMDB based on the most reliable identifier available in the following order:

  • AWS/Google Instance ID or Azure VMID

  • Concatenation of IP Address + MAC Address

  • IP only

Matching by IP alone may produce incorrect matches when overlapping subnets exist across multiple VPCs, accounts, etc.
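The fallback order above can be sketched as follows. This is an illustrative example, not Cloudaware's implementation; the field names and the pre-built `cmdb_index` lookup are assumptions:

```python
# Hypothetical sketch of the matching order described above: try the
# cloud-assigned instance ID first, then IP + MAC, then IP alone.

def match_scan_result(scan, cmdb_index):
    """Return the matching CMDB record ID, or None if nothing matches."""
    # 1. AWS/Google Instance ID or Azure VMID -- most reliable
    if scan.get("instance_id") and ("id", scan["instance_id"]) in cmdb_index:
        return cmdb_index[("id", scan["instance_id"])]
    # 2. Concatenation of IP address + MAC address
    if scan.get("ip") and scan.get("mac"):
        key = ("ip+mac", scan["ip"] + scan["mac"])
        if key in cmdb_index:
            return cmdb_index[key]
    # 3. IP only -- may mismatch when subnets overlap across VPCs/accounts
    if scan.get("ip") and ("ip", scan["ip"]) in cmdb_index:
        return cmdb_index[("ip", scan["ip"])]
    return None

# Toy index standing in for CMDB.
index = {
    ("id", "i-0abc"): "rec1",
    ("ip+mac", "10.0.0.5aa:bb"): "rec2",
    ("ip", "10.0.0.9"): "rec3",
}
```

Note that a result reaching the IP-only branch inherits the overlap caveat above, which is why the stronger identifiers are tried first.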

Screenshots

image
Figure 1. Admin Panel
image
Figure 2. Security Integrations
image
Figure 3. Tabular View
image
Figure 4. Details Tab (Data From Cloud Provider)
image
Figure 5. Linked Scan Result to an AWS Instance
image
Figure 6. Vulnerabilities Scan Report
image
Figure 7. CIS Benchmark Result Rendering
image
Figure 8. CIS Scan Report
image
Figure 9. CIS Benchmark Checks Wave Dashboard

Pingdom Integration

To integrate your Pingdom account in Cloudaware follow these steps:

  1. Log in to your Cloudaware account and select Admin.

    img
  2. Scroll down to Monitoring and select Pingdom. Click +Add.

    img
  3. Fill in the required information and click Save.

    img
    • Login: use your Pingdom login

    • Password: use your Pingdom password

    • Application Key: you need to generate an Application Key to identify your application. This can be done from Sharing → The Pingdom API in your Pingdom account. The key is a combination of alphabetic and numeric symbols (example: zoent8w9cbt810dagobah23vcxb87zrt5541).

  4. The green light in Status means your Pingdom account has been successfully added. If there is a red light, please contact support@cloudaware.com.

    img

Pingdom Webhook Integration

To integrate your Pingdom account in Cloudaware via webhook follow these steps:

  1. Log in to your Cloudaware account and select Admin.

    img
  2. Scroll down to Monitoring and select Pingdom Webhook. Click +Add.

    img
  3. Enter the name of your integration and click Save.

    img
  4. A new integration will be created. Copy its URL.

    img
  5. Go to your Pingdom account, select the 'Integrations' section and click Add Integration.

    img
  6. Select Webhook as the type. Add a name and paste the URL that you copied from your Cloudaware account. Save the integration.

    img
  7. Assign the new integration to your Pingdom checks. Go to 'Experience Monitoring' → 'Uptime' and select the appropriate check. Click Edit.

    img
  8. Scroll down to the Webhook section and check the box next to the webhook integration that has been added. Click Modify check to save changes.

    img
  9. Click Test check to send a test alert, which will create a new incident in Cloudaware.

Sentinel Whitehat

Trend Micro

Splunk

GitHub

Okta Salesforce SSO

To create an application in Okta for Salesforce you need to enable SAML support in Salesforce and assign a Federation ID to a user.

It is a good idea to keep a text editor open for notes during this setup.
  1. Login to Salesforce with an Administrator account.

  2. Navigate to: Setup > Manage Users > Users.

  3. Click Edit next to the user you want to assign a Federation ID to.

    img
  4. Scroll down to Single Sign On Information and type the Federation ID into the appropriate field (you can use the Username from above, e.g. cloudaware@cloudaware.com).

    img
    Save the Federation ID that you’ve just set. You will need it at the end of this guide.
  5. Click Save.

  6. Open a new tab, login to Okta with your administrator account and click Admin in the top right corner.

  7. Navigate to Applications > Add Application.

    img
  8. In the search field type Salesforce and click Add next to the Salesforce.com option.

    img
  9. Fill the Application label with your application name and click Next.

    img
  10. In Sign-On Options choose SAML 2.0 and click View Setup Instructions button below.

    img
  11. Scroll down Okta Setup Instructions to Paragraph 6 and note:

    • Issuer

    • Identity Provider Certificate

    • Identity Provider Login URL

    • Identity Provider Logout URL

    • Entity ID

  12. Now go back to the Salesforce tab and navigate to Security Controls > Single Sign-On Settings.

    img
  13. Check if SAML is enabled (1). If not, click Edit (2), enable a checkbox and click Save.

  14. Then click New (3) to create a new SAML SSO setting.

    img
  15. Here you should fill in the fields with data that you’ve saved previously:

    • Name of this setting (1)

    • API Name (2) must be without spaces (it will be filled in automatically after you set the Name)

    • Issuer (3) from Okta Setup Instructions

    • Identity Provider Certificate: click Choose File (4) and choose the certificate you downloaded before

    • Set SAML Identity Type as shown on the screenshot above (5)

    • Set Identity Provider Login URL (6) and Custom Logout URL (7) from Okta Setup Instructions

    • Set Entity ID (8) to https://saml.salesforce.com if you do not have a custom domain, or to https://[customDomain].my.salesforce.com if you do.

  16. When everything is set up, click the Save button.

  17. Now you can see the Endpoints section. Copy the Login URL.

    img
  18. Switch to the Okta tab and fill in the Login URL field with the link from the previous step, and the Custom Domain if needed.

    img
  19. Then set an Application username format as Custom and enter the Federation ID that you set at the beginning.

    img
    If multiple users need to sign in, the Federation ID should be unique for each user. In that case, set the Okta 'Application username' parameter to a dynamic value, e.g. the default 'Okta username'. Okta will then send the username as the Federation ID.
  20. Click Next.

  21. Click Next again.

  22. Choose users you want to assign to this app.

  23. Click Next.

  24. Click Done.

Now your Okta application is ready to sign in to Salesforce.

Centrify Salesforce SSO

This is a simplified guide that assumes only one Salesforce user needs to sign in. See the official Centrify documentation for the full setup.

Open Salesforce and Centrify pages in different browser tabs.

Centrify

  1. In the Admin portal click Apps, then click Add Web Apps

    img
  2. In the Search field type “salesforce” (1) and choose Salesforce with SAML support (2). Confirm your choice and click Close (3).

    img
  3. Then switch to Salesforce tab.

Salesforce Setup

  1. Set Federation ID for user:

    • Click Setup in the top right corner of the SF page.

      img
    • Then navigate to Manage Users → Users in the left menu.

      img
    • Choose the user you want to configure for SSO and click Edit.

      img
    • Copy the Username field value from General Information into your notes. You will need it in later steps.

      img
    • Scroll down to Single Sign On Information and paste this username into the Federation ID field.

      img
  2. Setup SAML:

    • Open Security Controls → Single Sign-On Settings in the left menu

      img
    • Click New to create a new SAML SSO configuration for Centrify

      img
    • On the next page fill in the following fields:

      • Name (1) for this configuration (e.g. cloudaware.centrify)

      • API Name (2) (e.g. cloudaware_centrify)

      • For Entity ID (3) type https://saml.salesforce.com

      • Switch SAML Identity Type (4) to Federation ID

        img
    • Switch back to the Centrify tab and copy Issuer to the appropriate field in Salesforce.

      img
    • Copy Identity Provider Login URL from Centrify to the appropriate field in Salesforce

      img
    • At the bottom of the Centrify page click the Download button and save the certificate.

      img
    • On the Salesforce tab click Choose File and choose the previously saved certificate.

      img
    • Now it is time to save the configuration. Click Save.

    • From the Endpoints section copy Login URL to Assertion Consumer Service URL in your Centrify.

      img
      img
    • Click Account Mapping, choose Everybody shares a single user name and paste the Federation ID that was noted above

      img
    • Navigate to User Access and choose the role you want to assign this web app to.

      img
    • Click Save at the bottom.

Now the application is ready.

AWS SNS

Sample Use Cases

  • EC2/RDS instance stopped

  • 30-day spending exceeds a certain amount

  • IDS detects an attack against an instance

  • Server health deteriorates

Creating Outbound Message URL

  • Sign in to your org and select Admin tab.

    img
    img
  • Find SNS Integration in the list of Other Integrations and click 'Open Wizard'.

    img
  • Fill out the following form and click 'Create':

    img
    • Amazon Account: choose the required one from the list

    • Amazon Region: region where this AWS account is located, shown in SNS topic details as well

    • Topic ARN: shown in the topic details in the Simple Notification Service of your Amazon console

    • Subject Field: optional

    • Message Field: specify a message

      A new Outbound Message URL will be created. Click 'Copy URL to Buffer'.

      img

Configuring Cloudaware Workflow To Invoke SNS Action

  • Go to Setup - Workflows & Approvals - Workflow Rules - New Rule and create a workflow rule that will post notifications to SNS. Here is the sample workflow criteria that will send out alerts when any EC2 instance is stopped.

    img
  • Click 'Add Workflow Action' and select 'New Outbound Message'.

    img
  • Specify:

    • Name

    • Endpoint URL: taken from the step Creating Outbound Message URL

    • User to send as

      Select the fields shown below to be displayed in the outbound message. Click 'Save'.

      img
  • After your workflow is triggered, you will receive an SNS notification.

Sumo Logic Webhook

  1. In Cloudaware go to Admin, find Sumo Logic Webhook in the list of Monitoring tools. Click Add.

    img
  2. Type in the integration name (e.g. Sumo Logic To Cloudaware), click Save.

    Use a meaningful name, as it will be displayed as ‘Incident Source Provider’ on Cloudaware Incidents.
    img
  3. Use payload and URL to create the integration between Sumo Logic and Cloudaware.

    img
  4. Log in to Sumo Logic, go to Manage Data → Settings → Connections.

    img
  5. Click New and choose Webhook as a connection type.

    img
    img
  6. Type in:

    • Name

    • Description (optional)

    • URL (paste the URL from step 3)

    • Authorization Header (optional)

    • Payload (paste the payload from step 3)

      Click Save.

      img
  7. Click Test Connection to make sure the integration is working. This will create a test incident in Cloudaware.

  8. All Sumo Logic alerts will appear in Cloudaware as Incidents.

EC2/RDS Instance Schedule

Description

The EC2/RDS scheduler starts and stops EC2/RDS instances based on the cloudaware:scheduler tag value. The schedule tag value may consist of several fields separated by spaces, each field in the key=value format.

IAM Policy

EC2

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*"
        }
    ]
}

RDS

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBInstances",
                "rds:ListTagsForResource",
                "rds:StartDBInstance",
                "rds:StopDBInstance"
            ],
            "Resource": "*"
        }
    ]
}

The list of the keys

tz (required key)

Timezone to be used by the scheduler.

Can be a POSIX-style abbreviation or an “Area/Location” style name from the IANA Time Zone Database.

Example 1. Timezone POSIX style:

tz=EST

Example 2. Timezone Area/Location style:

tz=America/New_York

default (required key)

Default instance work hours.

Can be a single hour or a hyphenated range; multiple values are separated by slashes (/). It can also be the word off, which means the instance should be stopped all day, or on, which means the instance should be running all day. Work hours should be specified in 24-hour format.

mon, tue, wed, thu, fri, sat, sun

Instance work hours for a specific day of the week.

Format is the same as for the default key.

Example 3. Work hours #1:

default=9-16 sat=off sun=off

Instance will be running from 9:00 to 16:59 all days except Saturday and Sunday.

Example 4. Work hours #2:

default=off wed=9/14-17

Instance will be running only on Wednesday from 9:00 to 9:59 and from 14:00 to 17:59.

policy (optional key)

scheduler mode

Can be start_at_time or stop_at_time, which means the scheduler will check and change the instance status only at the times defined by the default and mon..sun keys.

Full examples

tz=UTC default=9-17 sat=9-11/15-17 sun=off
schedule timezone
  • UTC;

instance work hours
  • from Monday to Friday: 9:00-17:59;

  • Saturday: 9:00-11:59 and 15:00-17:59;

  • Sunday: completely off;

tz=America/New_York default=off wed=0-5
schedule timezone
  • America/New_York;

instance work hours
  • Wednesday: 00:00-5:59;
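A rough parser for this tag format, exercised against the full examples above, might look like the sketch below. It is illustrative only, not the actual scheduler, and it omits timezone handling: `day` and `hour` are assumed to already be in the timezone given by the tz key.

```python
# Illustrative parser for the cloudaware:scheduler tag format described above.

def parse_tag(tag):
    """Split 'tz=UTC default=9-17 sat=off' into a key -> value dict."""
    return dict(field.split("=", 1) for field in tag.split())

def hour_in_spec(spec, hour):
    """True if `hour` matches a spec like 'on', 'off', '9-16' or '9/14-17'."""
    if spec == "on":
        return True
    if spec == "off":
        return False
    for part in spec.split("/"):
        if "-" in part:
            start, end = part.split("-")
            if int(start) <= hour <= int(end):  # 9-16 covers 9:00-16:59
                return True
        elif int(part) == hour:                 # single hour: 9 covers 9:00-9:59
            return True
    return False

def should_run(tag, day, hour):
    """Should the instance be running on `day` ('mon'..'sun') at `hour`?"""
    keys = parse_tag(tag)
    spec = keys.get(day, keys["default"])
    return hour_in_spec(spec, hour)

# First full example from above:
tag = "tz=UTC default=9-17 sat=9-11/15-17 sun=off"
# should_run(tag, "mon", 10) -> True   (default 9-17)
# should_run(tag, "sat", 12) -> False  (between the 9-11 and 15-17 windows)
# should_run(tag, "sun", 12) -> False  (sun=off)
```

Checking the second full example the same way, `should_run("tz=America/New_York default=off wed=0-5", "wed", 3)` is true and every other day falls back to `default=off`.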

Compatibility