SCPs / AWS AI – protecting against the unwanted

Not so long ago, AWS updated its Terms of Service (https://aws.amazon.com/service-terms/), and since AI is the current hot topic, we decided to take a closer look at the AI-related sections.

50.3 (b), we may store such AI Content in an AWS region outside of the AWS region where you are using such AI Service

50.13.1. (..) we may store Amazon Q Content in an AWS region outside of the AWS region where you are using Amazon Q

50.13.2. You agree and instruct that we may also use Amazon Q Content that does not contain personal data to develop and improve AWS and affiliate machine-learning and artificial intelligence technologies including to train machine-learning models.

The above are just a few examples of what security-aware organizations might want to look at when reviewing AWS's policies for AI and planning their usage.

For organizations that want sovereignty over where their data resides and how it is used, the above is of course "less than ideal", to say the least. For some regulated industries this could even mean a compliance breach and trouble with their respective regulatory body.

Luckily, there are options for organizations that wish to remain in control of how their data is used. AWS provides a way to opt out of various features inside its platform. The first step is to flip a switch and enable the "AI services opt-out policies" policy type in the management account; you can then attach a blanket opt-out policy covering all of the services AWS has labelled as "AI-based" – 25 services at the time of writing this article. This opts all member accounts under the management account out of having their content used by those AI services.
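If your organization is managed with the Terraform AWS provider, that switch can be flipped in code as well. The snippet below is only a minimal sketch, assuming the organization resource is already (or about to be) under Terraform control; the resource name "org" and the feature_set value are illustrative and should match your existing configuration:

resource "aws_organizations_organization" "org" {
  # Illustrative values - keep whatever feature set and service access
  # principals you already use; the relevant part is enabling the
  # AI services opt-out policy type.
  feature_set = "ALL"

  enabled_policy_types = [
    "AISERVICES_OPT_OUT_POLICY",
  ]
}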

However, if you wish to accept the ToS and use some of the tools provided by AWS, a few extra steps are required. You can apply a specially crafted policy to control the opt-out on a per-service basis. For this use case AWS has created a policy language that is different from the one we are normally used to. Here is the JSON structure, cleaned up a little and prepared to apply:

{
    "services": {
        "@@operators_allowed_for_child_policies": ["@@none"],
        "default": {
            "@@operators_allowed_for_child_policies": ["@@none"],
            "opt_out_policy": {
                "@@operators_allowed_for_child_policies": ["@@none"],
                "@@assign": "optOut"
            }
        }
    }
}

If you manage your AWS Organizations setup with Terraform, there is a pretty easy way to do it (e.g. https://github.com/gblues/aws-ml-opt-out).
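For illustration, here is a minimal sketch of what this could look like with the plain Terraform AWS provider. The resource names, policy name and description are our own choices, and the root ID is taken from the organization resource shown earlier:

# Sketch: create the blanket opt-out policy from above and attach it to the root.
resource "aws_organizations_policy" "ai_opt_out" {
  name        = "ai-services-opt-out"                       # illustrative name
  description = "Opt out of content use by AWS AI services" # illustrative
  type        = "AISERVICES_OPT_OUT_POLICY"

  content = jsonencode({
    services = {
      "@@operators_allowed_for_child_policies" = ["@@none"]
      default = {
        "@@operators_allowed_for_child_policies" = ["@@none"]
        opt_out_policy = {
          "@@operators_allowed_for_child_policies" = ["@@none"]
          "@@assign" = "optOut"
        }
      }
    }
  })
}

# Attaching at the root makes the opt-out apply to every account in the organization.
resource "aws_organizations_policy_attachment" "ai_opt_out_root" {
  policy_id = aws_organizations_policy.ai_opt_out.id
  target_id = aws_organizations_organization.org.roots[0].id
}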

Managing this via the AWS Console is a bit trickier, but AWS has a decent description of how to do it: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_ai-opt-out_attach.html

Let's have a look at what a policy you actually use could look like:

{
    "services": {
        "default": {
            "opt_out_policy": {
                "@@assign": "optOut"
            }
        },
        "lex": {
            "opt_out_policy": {
                "@@operators_allowed_for_child_policies": ["@@none"],
                "@@assign": "optOut"
            }
        },
        "securitylake": {
            "opt_out_policy": {
                "@@assign": "optIn"
            }
        }
    }
}

First of all, the evaluation logic is a bit different for this policy type. If you attach it to your organization root, the restrictions apply to every account underneath. However, it is also possible to manage exceptions (a rare case in AWS) and attach different (more permissive) policies directly to an OU or an account. The above policy opts out of all current and future AI services by default. The second block additionally locks the opt-out for Lex, so it cannot be overridden even by an OU- or account-level policy. The last segment opts all accounts in to Security Lake.
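As an example of such an exception, a more permissive child policy attached directly to an OU or account could opt that subtree back in to a single service. This is only a hypothetical illustration – Comprehend is used as a placeholder service here:

{
    "services": {
        "comprehend": {
            "opt_out_policy": {
                "@@assign": "optIn"
            }
        }
    }
}

Because the root-level policy above locks Lex with "@@none", an analogous child entry for Lex would not take effect, and the root default still opts out every service the child policy does not mention.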

 

Not sure what to do?