AWS S3 Integration
You can use AWS S3 as a source or a destination. Continue below to integrate it as a source, or skip to AWS S3 as a Destination to integrate it as a destination.
AWS S3 as a Source
Send logs from S3 bucket to Realm
Create SQS Queue for notifications
- Go to Amazon SQS > Queues
- Click Create queue
- Fill out the details
Name: rlm-s3-event-notifications
- Click Create queue
- Copy the ARN of the queue and save it; you will need it in the policies below
Create IAM policy
- Go to IAM > Policies
- Click Create Policy
- Click JSON
- Paste the following and replace <s3_bucket_arn> with the ARN of your S3 bucket and <sqs_queue_arn> with the ARN of the SQS queue created above
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "s3ReadObjects",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "<s3_bucket_arn>",
        "<s3_bucket_arn>/*"
      ]
    },
    {
      "Sid": "sqsEventNotifications",
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage"
      ],
      "Resource": [
        "<sqs_queue_arn>"
      ]
    }
  ]
}
```
- Click Next
- Enter policy name: rlm-s3-notifications-and-read
- Enter description: Grants read access to the S3 CloudTrail bucket and permission to receive S3 event notifications from the SQS queue.
- Click Create Policy.
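The policy document above can also be built programmatically, which is handy if you manage the setup with scripts. A minimal Python sketch, assuming placeholder ARNs (the function name and example values are hypothetical, not part of Realm or AWS):

```python
import json

def build_read_policy(s3_bucket_arn: str, sqs_queue_arn: str) -> str:
    """Build the IAM policy JSON from the steps above: read access to the
    bucket plus receive/delete on the notification queue."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "s3ReadObjects",
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetObject"],
                # Bucket ARN for ListBucket, bucket ARN + /* for GetObject.
                "Resource": [s3_bucket_arn, f"{s3_bucket_arn}/*"],
            },
            {
                "Sid": "sqsEventNotifications",
                "Effect": "Allow",
                "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage"],
                "Resource": [sqs_queue_arn],
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(build_read_policy(
    "arn:aws:s3:::my-cloudtrail-bucket",
    "arn:aws:sqs:us-east-1:123456789012:rlm-s3-event-notifications",
))
```

Note that `s3:ListBucket` applies to the bucket ARN itself while `s3:GetObject` applies to the objects under it, which is why both resource forms are listed.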
Create User with credentials
- Go to IAM > Users > Create User
- Enter Name: rlm-s3-read-user
- Click Next
- Click Attach Policy Directly
- Select the policy created in the previous step
- Click Next
- Click Create user
- Select the user that was just created
- Copy the ARN of the user, and save it to a safe location - you will need it in the next step
- Go to Security Credentials
- Click Create Access Key
- Select Third-party service
- Check the confirmation checkbox
- Click Next
- Enter description: Credentials for Realm.Security to read Cloudtrail logs from S3 bucket
- Copy and save the Access Key and Secret Access Key in a safe location; you will paste these two values into the Realm console when configuring the S3 input feed
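With these credentials, the input feed polls the SQS queue and fetches the objects named in each S3 event notification. The sketch below shows how such a notification body is parsed into bucket/key pairs; it is an illustration of the message shape S3 delivers, not Realm's actual implementation, and the sample bucket and key are hypothetical:

```python
import json

def objects_from_notification(body: str) -> list[tuple[str, str]]:
    """Extract (bucket, key) pairs from an S3 event notification body."""
    event = json.loads(body)
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append((s3["bucket"]["name"], s3["object"]["key"]))
    return results

# A minimal example body in the shape S3 sends for object-created events.
sample = json.dumps({
    "Records": [
        {
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "my-cloudtrail-bucket"},
                "object": {"key": "AWSLogs/123456789012/CloudTrail/log.json.gz"},
            },
        }
    ]
})

print(objects_from_notification(sample))
```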
Update SQS policy
- Go to SQS Queues
- Select the rlm-s3-event-notifications queue
- Go to the Access policy tab and click Edit
- Replace the policy JSON with the following, substituting <sqs_queue_arn> and <iam_user_arn> (the user ARN you saved earlier)
```json
{
  "Version": "2012-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "Stmt1737666508309",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "sqs:SendMessage",
      "Resource": "<sqs_queue_arn>"
    },
    {
      "Sid": "Stmt1737666814690",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<iam_user_arn>"
      },
      "Action": [
        "sqs:ChangeMessageVisibility",
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage"
      ],
      "Resource": "<sqs_queue_arn>"
    }
  ]
}
```
- Click Save
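The access policy pairs two principals: the S3 service principal may send event notifications into the queue, and the IAM user created earlier may consume them. A hedged Python sketch building the same document (function name and example ARNs are hypothetical):

```python
import json

def build_queue_access_policy(sqs_queue_arn: str, iam_user_arn: str) -> str:
    """Build the SQS access policy from the steps above: S3 may publish
    notifications; the Realm read user may receive and delete them."""
    policy = {
        "Version": "2012-10-17",
        "Id": "__default_policy_ID",
        "Statement": [
            {
                "Effect": "Allow",
                # Service principal: the S3 service itself sends the messages.
                "Principal": {"Service": "s3.amazonaws.com"},
                "Action": "sqs:SendMessage",
                "Resource": sqs_queue_arn,
            },
            {
                "Effect": "Allow",
                # AWS principal: the IAM user that consumes the queue.
                "Principal": {"AWS": iam_user_arn},
                "Action": [
                    "sqs:ChangeMessageVisibility",
                    "sqs:DeleteMessage",
                    "sqs:ReceiveMessage",
                ],
                "Resource": sqs_queue_arn,
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(build_queue_access_policy(
    "arn:aws:sqs:us-east-1:123456789012:rlm-s3-event-notifications",
    "arn:aws:iam::123456789012:user/rlm-s3-read-user",
))
```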
Configure event notifications for the S3 bucket
- Log in to the AWS console
- Go to S3 > Buckets
- Select the bucket
- Go to Properties
- Go to Event notifications and click Create event notification
- Fill out the event notification details
Event name: RlmCreateEvents
Check the All object create events checkbox
Destination: select SQS queue
In the drop-down, select the SQS queue that you created in the steps above
- Click Save Changes
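Under the hood, the console step above attaches a notification configuration to the bucket. The sketch below shows the equivalent configuration document; the queue ARN is a placeholder, and with boto3 you could apply it via `put_bucket_notification_configuration` instead of the console:

```python
# Notification configuration equivalent to the console step above:
# "All object create events" maps to the s3:ObjectCreated:* event type.
config = {
    "QueueConfigurations": [
        {
            "Id": "RlmCreateEvents",
            # Placeholder ARN: substitute the queue created earlier.
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:rlm-s3-event-notifications",
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

print(config["QueueConfigurations"][0]["Events"])
```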
AWS S3 as a Destination
Send logs from Realm to S3 bucket
- Create S3 Bucket.
- Login to AWS console.
- Go to S3 > Buckets.
- Click Create Bucket.
- Enter bucket name:
rlm-demo-output
- Click Create bucket.
- Copy the name of the bucket.
Note: Versioning is disabled by default.
Create policy that grants write permission to the above S3 bucket
- In a new tab, go to IAM > Policies.
- Click on Create Policy.
- Click JSON.
- Paste the following policy and replace <BUCKET_NAME> with the name of the bucket created above.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3WriteRealmBucket",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:ListBucket",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>",
        "arn:aws:s3:::<BUCKET_NAME>/*"
      ]
    }
  ]
}
```
Create IAM user
- Go to IAM > Users.
- Click Create User.
- Give it a name:
<tenant_name>-realm.security-s3-output
- Click Next.
- Click Attach policies directly.
- From the list, search and select Policy created above.
- Click Next.
- Click Create user.
- Select user that was just created.
- Click Security Credentials.
- Click Create access key.
- Select Third-party service.
- Check the "I understand" confirmation checkbox.
- Click Next.
- Enter description:
Allows <tenant_name> in stage to write to <bucket_name> s3 bucket
- Click Create access key, then copy and save the Access Key and Secret Access Key; you will paste them into the Realm console when configuring the S3 output feed.
Structure
Within the configured S3 bucket, the data will be stored in the following structure by default s3://{bucket_name}/{destination_name}/{source_name}/YYYY/MM/DD/*.log.gz.
To use a different S3 prefix structure, update the Key Prefix field for the output feed. For example, if the Key Prefix field is set to foo, the bucket structure will be s3://{bucket_name}/foo/*.log.gz.
You can include the date in key_prefix using the placeholders %Y, %m, and %d. For example:
- when key_prefix is set to foo/year=%Y/month=%m/day=%d/, it will create the following structure in S3: s3://{bucket_name}/foo/year=2025/month=10/day=23/.
- when key_prefix is set to foo/%Y/%m/%d/, it will create the following structure in S3: s3://{bucket_name}/foo/2025/10/23/.
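The %Y, %m, and %d placeholders follow the familiar strftime conventions, so you can preview what a given key prefix expands to for a particular date. A small sketch (the expansion itself is done by the output feed, not by you; the helper name here is hypothetical):

```python
from datetime import date

def expand_key_prefix(key_prefix: str, day: date) -> str:
    """Expand %Y/%m/%d placeholders in a key prefix, strftime-style."""
    return day.strftime(key_prefix)

print(expand_key_prefix("foo/year=%Y/month=%m/day=%d/", date(2025, 10, 23)))
# foo/year=2025/month=10/day=23/
```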
Each file uses zstd compression and contains events separated by newlines. The format of each event (RAW vs. JSON) is determined by the Format selected for the AWS S3 output feed.