AWS SSM Patch Failed – No IMDS credentials – S3 Access Denied – Solution

Another impossible-to-debug AWS issue:

I ran an AWS Systems Manager Patch Manager RunPatchBaseline association on a managed instance. My managed instance uses a custom IAM role/instance profile, but I had read the documentation and added the proper permissions.

I ran RunPatchBaseline and got an Association Failed status on the instance with the custom role. The Run Command Output showed the following error:

No IMDS credentials found on instance.failed to run commands: exit status 156

I connected to the instance and verified that IMDS works fine, following the AWS documentation here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
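
(For reference, here is a quick Python sketch of the IMDSv2 token dance described on that page. Run on the instance itself, it should print the name of the attached role:)

# Sketch of the IMDSv2 check: fetch a session token, then list the role
# attached to the instance profile.
import urllib.request

token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

creds_req = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(creds_req, timeout=2).read().decode())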

Further details under the Run Command Output showed a completely different error:

ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden

Further up in the logs I found that the SSM agent was trying to download baseline_overrides.json from a dedicated S3 bucket operated by Amazon, whose name starts with aws-quicksetup-patchpolicy-ACCOUNTID-QUICKSETUPCONFIGID. My IAM profile, even though it is set up to allow access to this bucket and can list the baseline_overrides.json object, is blocked from downloading it.

Solution: by comparing the custom role to the Amazon-generated Quick Setup role, I figured out that I had to manually add a tag to the custom IAM role for this to work. S3 checks whether the ROLE has a special tag attached before allowing access to the object. It is documented by AWS here: https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-patch-manager.html

“You must tag your IAM instance profile or IAM service role with the following key-value pair.
Key: QSConfigId-quick-setup-configuration-id, Value: quick-setup-configuration-id”
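
For the record, here is roughly how you can attach that tag with boto3. The role name and the quick-setup configuration ID below are placeholders, so put your own values in:

# Sketch: tag the custom instance profile role so S3 allows it to fetch
# baseline_overrides.json. Replace the role name and config ID with your own.
import boto3

config_id = "abcde1234"  # placeholder: your Quick Setup configuration ID
iam = boto3.client("iam")
iam.tag_role(
    RoleName="my-custom-instance-role",  # placeholder
    Tags=[{"Key": f"QSConfigId-{config_id}", "Value": config_id}],
)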

Hope This Helps,

imre Fitos

j j j

Solution to BOTO3 S3 CreateMultipartUpload Access Denied problem

We ran into a weird problem when we tried to stream to an S3 file using boto3, and all the posts on Stack Overflow had wildly inaccurate and generally non-working solutions, so I’m posting this hoping that maybe it will save someone some time.

The problem: you write an S3 upload in Python, and it gives you the following error:

ValueError: the bucket 'XXX' does not exist, or is forbidden for access (ClientError('An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied'))

The error clearly spells out that this is a permission problem, so you spend some time trying to add the proper permissions. You learn that there is no such thing as an s3:CreateMultipartUpload permission – boto3 uses the normal s3:PutObject permission. So you google some more.

Then you think it’s an ACL permission – nope.

Then you think maybe your encrypted S3 bucket is the problem and you need to add the kms:GenerateDataKey permission? But no, you use encryption with Amazon S3 managed keys (SSE-S3), which does not require extra KMS permissions. Another dead end. How did it ever work for other people?

Then you throw every permission that exists at the user and it's still failing. What gives?

You enable boto3 debug logs with boto3.set_stream_logger('') but the log looks okay, except that it gets a 403 access denied from Amazon.

Then your brilliant colleague Fatih Elmali reads the code and says that regardless of all the examples Amazon has published, the following is not enough:

client = boto3.client('s3', aws_access_key_id=...)

The proper way to set up authentication for a boto3 S3 client is the following:

session = boto3.Session(aws_access_key_id=...)
client = session.client('s3')

This will set up the proper session authentication, and streaming to an S3 file object will work.
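
Putting it together, here is a minimal sketch of the working pattern. The bucket and key are placeholders, and if your credentials come from the environment or an instance role, boto3.Session() with no arguments is fine:

# Sketch: build the client from a Session, then stream an object up.
import io
import boto3

session = boto3.Session()  # or boto3.Session(aws_access_key_id=..., aws_secret_access_key=...)
client = session.client("s3")

# upload_fileobj streams the data in parts, which is what triggers
# CreateMultipartUpload for larger objects.
data = io.BytesIO(b"hello from a stream")
client.upload_fileobj(data, "my-bucket", "path/to/object")  # placeholders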

j j j

AWS Chatbot custom message – solution

Most DevOps people who set up AWS Chatbot integrations with other AWS services eventually start wondering how to send custom messages through Chatbot.

At this point I would like to remind you that your life will be much easier if you give up on the idea and instead send your message directly to Slack using a webhook.
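
(For scale, the give-up option is a handful of lines of Python against a Slack incoming webhook. The URL below is a placeholder for the one you create in your Slack app settings:)

# Sketch: post a message straight to Slack through an incoming webhook.
import json
import urllib.request

webhook_url = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"  # placeholder
payload = {"text": "Deploy finished.\nMore lines go here."}

req = urllib.request.Request(
    webhook_url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)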

But if you want to see this through to the end:

Tom Stroobants documented the general SNS message format that Chatbot expects and it looks like this:

{
  "version": "0",
  "time": "1970-01-01T00:00:00Z",
  "id": "00000000-0000-0000-0000-000000000000",
  "account": "[your real account id]",
  "region": "[a real region]",
  "source": "aws.[a service prefix e.g. ec2]",
  "detail-type": "[you can use this field for your message]",
  "resources": [],
  "detail": {}
}

As long as these fields are present in the message, AWS Chatbot will forward it to Slack, but it will not display any details other than the text in the “detail-type” field, and it doubles up that text.

To make AWS Chatbot deliver a more detailed message, one has to format the message according to one of the AWS events that Chatbot supports, which means our messages will have to use a predefined “detail-type” and “source”.

To see examples of all the message formats that Chatbot can display, and to find one we can co-opt for our purposes:

  1. Open the EventBridge console at https://console.aws.amazon.com/events/.
  2. In the navigation pane, choose Rules.
  3. Choose Create rule.
  4. Enter a name and description for the rule.
  5. For Define pattern, choose Rule with an event pattern.
  6. Hit Next.
  7. For Event source, leave it on AWS events.
  8. Now you can browse all available events under Sample Event / AWS events.

You will quickly notice that the event names are quite specific, and you might not want to use “VoiceId Batch Fraudster Registration Action” for your custom message.

I found that the “AWS Health Event” is innocent enough to be reusable, and now I am able to send free-form paragraphs using the following:

{
    "version": "0",
    "id": "00000000-0000-0000-0000-000000000000",
    "account": "[my AWS account number]",
    "time": "1970-01-01T00:00:00Z",
    "region": "us-east-1",
    "source": "aws.health",
    "detail-type": "AWS Health Event",
    "resources": [],
    "detail": {
      "eventDescription": [{
        "language": "en_US",
        "latestDescription": "Long form message\nMore lines"
      }]
    }
}
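
To actually send it, publish that JSON as the raw message body to the SNS topic your Chatbot channel is subscribed to. Here is a boto3 sketch, with the account number and topic ARN as placeholders:

# Sketch: publish the fake health event to the SNS topic behind Chatbot.
import json
import boto3

event = {
    "version": "0",
    "id": "00000000-0000-0000-0000-000000000000",
    "account": "123456789012",  # placeholder
    "time": "1970-01-01T00:00:00Z",
    "region": "us-east-1",
    "source": "aws.health",
    "detail-type": "AWS Health Event",
    "resources": [],
    "detail": {
        "eventDescription": [{
            "language": "en_US",
            "latestDescription": "Long form message\nMore lines",
        }]
    },
}

sns = boto3.client("sns")
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:my-chatbot-topic",  # placeholder
    Message=json.dumps(event),
)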

I hope somebody with good enough connections to the AWS Chatbot team will get more details out of them; right now their official line is “AWS Chatbot only supports AWS Services”. Help?

HTH, imre

j j j

AWS Force MFA example policy doesn’t work on Administrators – Fix

There are several example policies, written by Amazon itself and also by other security providers like Yubico, that claim to enforce MFA use but simply do not work on users who have the AdministratorAccess policy.

Here is an example policy written by Amazon that actually works: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_my-sec-creds-self-manage.html

j j j

LambdaAccessDenied error in AWS Load Balancer – Solution

Permission handling in ELB and Lambda is somewhat magical: some of the tools autoprovision permissions behind the scenes, and sometimes they mess up.

I had a Lambda that I was invoking from a load balancer and it simply did not work. The only hint was “LambdaAccessDenied” in the ALB logs.

I had everything configured correctly. I had added a Lambda permission allowing the entire elasticloadbalancing.amazonaws.com service to invoke my function. I had the proper target groups. I had even enabled AWS SAM to autoprovision the IAM roles. The Lambda function was firing correctly; I had logs to show that it was executing.

But I kept getting “502 Bad Gateway” from the load balancer and the logs kept showing LambdaAccessDenied.

I removed all the custom stuff I had created. I removed the alias. I removed and reprovisioned the entire Lambda function. I removed and recreated the target group.

Eventually I removed the target group and the permission I had created, and provisioned an “Application Load Balancer” trigger from the Lambda console. This created a new target group and a new resource-based policy under Permissions, and suddenly everything started working, even though the new entries looked exactly the same as the entries I had created.
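
If you end up playing the same comparison game, boto3 can at least dump both resource-based policies so you can diff them, and the add_permission call below is roughly what a hand-made statement scoped to a target group looks like. The function name, statement id and target group ARN are placeholders:

# Sketch: inspect the function's resource-based policy, and recreate a
# statement that limits the permission to one target group.
import boto3

lam = boto3.client("lambda")

# Dump the current policy document for diffing against the console-made one.
print(lam.get_policy(FunctionName="my-function")["Policy"])  # placeholder

lam.add_permission(
    FunctionName="my-function",      # placeholder
    StatementId="alb-invoke",        # placeholder
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
              "targetgroup/my-target-group/0123456789abcdef",  # placeholder
)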

Since there are only five entries on Google that even mention this error message, I figured you might want to save some time and learn from my experience.

j j j