About Nessus Agents
Nessus Agents are lightweight, low-footprint programs that are installed locally on hosts to supplement traditional network-based scanning or to provide visibility into gaps that are missed by traditional scanning. Nessus Agents collect vulnerability, compliance, and system data, and report that information back to a manager for analysis. With Nessus Agents, you extend scan flexibility and coverage. You can scan hosts without using credentials, as well as offline assets and endpoints that intermittently connect to the internet. You can also run large-scale concurrent agent scans with little network impact.
Why Use Nessus Agents?
Nessus Agents help you address the challenges of traditional network-based scanning, specifically for the assets where it’s impossible or nearly impossible to consistently collect information about your organization’s security posture. Traditional scanning typically occurs at selected intervals or during designated windows and requires systems to be accessible when a scan is executed. If laptops or other transient devices are not accessible when a scan is executed, they are excluded from the scan, leaving you blind to vulnerabilities on those devices. Nessus Agents help reduce your organization’s attack surface by scanning assets that are off the network or powered-down during scheduled assessments or by scanning other difficult-to-scan assets.
Once installed on servers, portable devices, or other assets found in today’s complex IT environments, Nessus Agents identify vulnerabilities, policy violations, misconfigurations, and malware on the hosts where they are installed and report results back to the managing product. You can manage Nessus Agents with Nessus Manager or Tenable.io.
Nessus Agents report to ISO at UT
You can view the ISO page on Nessus Agents here: https://security.utexas.edu/nessus-agents
Once installed, the agents report back to Nessus Manager, which collects the information for ISO to analyze.
CloudLens provides visibility into any cloud, including AWS, Google Cloud, Microsoft Azure, Hyper-V, and VMware. Public, private, and hybrid cloud systems offer changing combinations of agility, cost-effectiveness, and scalability. CloudLens is primarily focused on protecting sensitive data, such as data covered by FERPA, HIPAA, and similar regulations.
However, businesses cannot secure what they cannot see. Without packet-level cloud visibility, your tools cannot prevent potential performance problems or protect against vulnerabilities. CloudLens SaaS helps secure clouds by providing monitoring tools with timely, actionable traffic data. CloudLens offers an auto-scaling, cloud-native design and turnkey compatibility with leading security, application performance management (APM), and network performance management (NPM) tools.
AWS CloudTrail Usage by the Emerging Technologies and Architecture Group
CloudTrail is currently used for an IQ project to store logs in a dedicated S3 bucket. A CloudTrail modular input is enabled for each unique SQS > SNS > S3 bucket path. Splunk ingests the logs, and the information can be analyzed through a dashboard.
What is AWS CloudTrail?
CloudTrail is a service offered by Amazon Web Services (AWS) that captures logs of all API calls for an AWS account and its services. CloudTrail enables continuous monitoring and post-incident forensic investigation of AWS by providing an audit trail of all activity across an AWS infrastructure. All CloudTrail log files are stored in a dedicated S3 bucket.
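Each CloudTrail log file is a JSON document with a top-level `Records` array, and analysis (for example, in a Splunk dashboard) boils down to extracting fields such as `eventName` and `userIdentity` from each record. A minimal sketch, using a single invented record that follows CloudTrail's documented field names:

```python
import json

# A minimal, invented CloudTrail log file: one record in the "Records" array.
sample_log = json.dumps({
    "Records": [
        {
            "eventTime": "2023-01-15T12:34:56Z",
            "eventSource": "s3.amazonaws.com",
            "eventName": "DeleteBucket",
            "awsRegion": "us-east-1",
            "sourceIPAddress": "198.51.100.7",
            "userIdentity": {"type": "IAMUser", "userName": "example-user"},
        }
    ]
})

def summarize(log_text):
    """Return (who, what, where) tuples for each API call in a log file."""
    records = json.loads(log_text)["Records"]
    return [
        (r["userIdentity"].get("userName", "unknown"), r["eventName"], r["awsRegion"])
        for r in records
    ]

print(summarize(sample_log))
# -> [('example-user', 'DeleteBucket', 'us-east-1')]
```

A dashboard performs the same kind of field extraction at scale across every log file delivered to the S3 bucket.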
Benefits of AWS CloudTrail
Activity monitoring – CloudTrail provides the raw data that can be used, in conjunction with a Cloud Access Security Broker (CASB), to monitor user and resource activity, detect insecure or inappropriate changes to services or resources, and automate correction of security misconfigurations.
Streamlined compliance – CloudTrail streamlines an organization’s compliance efforts by automating the capture and storage of logs of activities and actions taken in an AWS account. This makes it easier to recognize events that may be out of compliance with internal policies or external regulations.
Security auditing – CloudTrail helps discover changes made to an AWS account that could put the data or the account at heightened security risk, while also expediting fulfillment of AWS audit requests.
AWS CloudTrail best practices for security and compliance
Below are eight CloudTrail best practices that all AWS customers should follow.
1) Ensure CloudTrail is enabled globally across AWS
This is the first step needed to take advantage of CloudTrail. Enabling global CloudTrail logging generates logs for all AWS services, including those that are not region-specific, such as IAM and CloudFront.
2) Turn on CloudTrail log file validation
When log file validation is turned on, any changes made to the log file itself after it has been delivered to the S3 bucket will be identifiable. This functionality provides an additional layer of protection and ensures the integrity of the log files.
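The underlying idea can be illustrated with a plain hash comparison. Note that real CloudTrail validation is more involved (it compares SHA-256 hashes recorded in hourly digest files that are themselves signed); this sketch only shows why a hash recorded at delivery time makes later tampering detectable:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hash recorded when the (invented) log file was delivered to S3.
original = b'{"Records": [{"eventName": "DeleteBucket"}]}'
recorded_digest = sha256_hex(original)

# A file altered after delivery no longer matches the recorded digest.
tampered = b'{"Records": []}'

assert sha256_hex(original) == recorded_digest   # untouched file verifies
assert sha256_hex(tampered) != recorded_digest   # tampering is detectable
print("integrity check demonstrated")
```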
3) Enable CloudTrail multi-region logging
The AWS API call history provided by CloudTrail allows security analysts to track resource changes, audit compliance, investigate incidents, and ensure security best practices are followed. By having CloudTrail enabled in all regions, organizations will be able to detect unexpected activity in otherwise unused regions.
4) Integrate CloudTrail with CloudWatch
CloudWatch can be used to monitor, store, and access log files from EC2 instances, CloudTrail, and other sources. This integration facilitates real-time and historical activity logging based on user, API, resource, and IP address, and supports setting up alarms and notifications for anomalous or inappropriate account activity.
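The kind of rule a CloudWatch metric filter and alarm expresses can be sketched locally. The event records and the threshold below are illustrative, not a real metric-filter definition:

```python
# Invented CloudTrail-style events for one evaluation period.
events = [
    {"eventName": "ConsoleLogin", "errorMessage": "Failed authentication"},
    {"eventName": "ConsoleLogin"},
    {"eventName": "ConsoleLogin", "errorMessage": "Failed authentication"},
]

# Filter step: count events matching the pattern (failed console logins),
# analogous to a CloudWatch metric filter on the CloudTrail log group.
failed_logins = [e for e in events
                 if e.get("errorMessage") == "Failed authentication"]

# Alarm step: fire when the count reaches a threshold in the period.
ALARM_THRESHOLD = 2  # illustrative value
if len(failed_logins) >= ALARM_THRESHOLD:
    print(f"ALARM: {len(failed_logins)} failed console logins")
```

In practice the filter and alarm are defined in CloudWatch itself, so the evaluation runs continuously against the delivered logs.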
5) Enable access logging for CloudTrail S3 buckets
CloudTrail S3 buckets contain the log data that is captured by CloudTrail, supporting activity monitoring and forensic investigations. By enabling access logging for CloudTrail S3 buckets, customers can track access requests and identify potentially unauthorized or unwarranted access attempts.
6) Require multi-factor authentication (MFA) to delete CloudTrail buckets
Once an AWS account has been compromised, one of the first steps an attacker is likely to take is deleting the CloudTrail logs to cover their tracks and delay detection. Requiring multi-factor authentication to delete an S3 bucket containing CloudTrail logs makes it more difficult for an attacker to remove the logs and remain hidden.
7) Restrict access to CloudTrail S3 bucket
Unrestricted access to CloudTrail logs should never be enabled for any user or administrator account. While most AWS users and admins will not have any malicious intent to cause harm, they are still susceptible to phishing attacks that could expose their account credentials and lead to an account compromise. Restricting access to CloudTrail logs will decrease the risk of unauthorized and unfettered access to the logs.
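As a sketch, a restrictive bucket policy can pair a narrow Allow for an audit role with a broad Deny for everyone else. The account ID, bucket name, and role name below are placeholders, and the statements CloudTrail itself needs for log delivery are omitted for brevity:

```python
import json

bucket = "example-cloudtrail-logs"        # placeholder bucket name
audit_role = "arn:aws:iam::111122223333:role/SecurityAudit"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Allow only the audit role to read the logs.
            "Sid": "AllowSecurityAuditRead",
            "Effect": "Allow",
            "Principal": {"AWS": audit_role},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
        },
        {   # Explicitly deny object access to every other principal.
            "Sid": "DenyEveryoneElse",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "ArnNotEquals": {"aws:PrincipalArn": audit_role}
            },
        },
    ],
}

print(json.dumps(policy, indent=2))
```

The explicit Deny wins over any other Allow in IAM evaluation, which is what makes this pattern effective against over-broad user policies.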
8) Encrypt CloudTrail log files at rest
In order to decrypt encrypted CloudTrail log files, a user must be granted decrypt permission by the customer managed key (CMK) policy, along with permission to access the S3 buckets containing the logs. This means only users whose jobs require it should have both the decryption permission and access to the S3 buckets containing CloudTrail logs.
- Address your VPCs and VNets that may need to communicate with other networks (VPC, VNet, or campus) using unique, campus-routable CIDR blocks
- Protect cloud resources with Security Groups and/or Network Access Control Lists
- Where possible, define separate private and public Subnets, and limit resources deployed in Public Subnets
- To connect VPCs or VNets to campus, register the cloud account with ITS Skyhub
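The first point above, unique, non-overlapping CIDR blocks, is easy to check before deployment. A small sketch using Python's standard `ipaddress` module (the network names and ranges are illustrative):

```python
import ipaddress

def find_overlaps(cidrs):
    """Return pairs of names whose CIDR blocks overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in cidrs.items()}
    names = sorted(nets)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if nets[a].overlaps(nets[b])
    ]

# Disjoint /20s coexist; a /21 carved out of vpc-a's range collides.
print(find_overlaps({"vpc-a": "10.100.0.0/20", "vpc-b": "10.100.16.0/20"}))
# -> []
print(find_overlaps({"vpc-a": "10.100.0.0/20", "vpc-b": "10.100.8.0/21"}))
# -> [('vpc-a', 'vpc-b')]
```

Running a check like this against every planned VPC/VNet range (plus the campus-routable space) before registering with ITS Skyhub avoids painful renumbering later.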
Infrastructure as Code
Infrastructure as code is the practice of provisioning and managing your cloud resources by writing a template file that is both human readable and machine consumable. Infrastructure as code brings several benefits:
Visibility: An infrastructure as code template serves as a very clear reference of what resources are on your account, and what their settings are. You don’t have to navigate to the web console to check the parameters.
Stability: If you accidentally change the wrong setting or delete the wrong resource in the web console you can break things. Infrastructure as code helps solve this, especially when it is combined with version control, such as Git.
Scalability: With infrastructure as code you can write it once and then reuse it many times. This means that one well written template can be used as the basis for multiple services, in multiple regions around the world, making it much easier to horizontally scale.
Security: Once again infrastructure as code gives you a unified template for how to deploy your architecture. If you create one well-secured architecture you can reuse it multiple times, and know that each deployed version is following the same settings.
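A minimal example of what such a template looks like, here in CloudFormation-style JSON built from a Python dictionary for readability; the resource name and properties are illustrative, and a real template would be deployed by the provisioning tool rather than by hand:

```python
import json

# A template that declares one private S3 bucket as code.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: a private S3 bucket defined as code",
    "Resources": {
        "LogBucket": {  # illustrative logical resource name
            "Type": "AWS::S3::Bucket",
            "Properties": {
                # The security benefit: these hardened settings travel
                # with every deployment of the template.
                "PublicAccessBlockConfiguration": {
                    "BlockPublicAcls": True,
                    "BlockPublicPolicy": True,
                    "IgnorePublicAcls": True,
                    "RestrictPublicBuckets": True,
                }
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Committing this file to Git gives the visibility and stability benefits (the template is the reference, and changes are reviewed), and deploying it to multiple accounts or regions gives the scalability benefit.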
In working with cloud services, UT strives to find the quickest path with the lowest latency to provide the best user experience for faculty, staff, and students. Read on to learn more about UT’s choice of regions by cloud vendor.
For AWS, ITS recommends and uses us-east-1, the N. Virginia region. It not only provides the quickest access speeds but also offers one of the broadest selections of AWS services. If your service requires geodiversity, select a second region to ensure your application and data are protected in the event N. Virginia becomes unavailable. ITS does not have a recommendation for a second region to support geodiversity at this time.
For Microsoft Azure, ITS recommends the South Central US region, located in the San Antonio, TX area, due to its proximity to Austin.
For Google Cloud, ITS does not have a recommendation at this time, since this cloud vendor is still in the onboarding process. At present, ITS is considering us-central1, located in the Council Bluffs, Iowa area. This section will be updated as the vendor onboarding process continues.